Threat actors can publish Skills – the name given to third-party Alexa applications – under any arbitrary developer/company name and also make backend code changes after approval to coax users into revealing unwanted information, according to new research presented at the Network and Distributed System Security Symposium (NDSS) conference. Christopher Lentzsch and Martin Degeling, from Horst Görtz Institute for IT Security at Ruhr-Universität Bochum, and Sheel Jayesh Shah, Benjamin Andow, Anupam Das, and William Enck, from North Carolina State University – analyzed 90,194 Skills available in seven countries and found safety gaps that allow for malicious actions, abuse, and inadequate data usage disclosure. The researchers were able to publish Skills using the names of well-known companies, which makes trust-based attacks like phishing easier. By failing to check for changes in Skill server logic, Amazon makes it possible for a malicious developer to alter the response to an existing trigger phrase or to activate a previously approved dormant trigger phase. A Skill manipulated in this way could, for example, start asking for a credit card after passing Amazon’s initial review.