Threat actors can publish Skills – the name given to third-party Alexa applications – under any arbitrary developer/company name and also make backend code changes after approval to coax users into revealing unwanted information, according to new research presented at the Network and Distributed System Security Symposium (NDSS) conference. Christopher Lentzsch and Martin Degeling, from Horst Görtz Institute for IT Security at Ruhr-Universität Bochum, and Sheel Jayesh Shah, Benjamin Andow, Anupam Das, and William Enck, from North Carolina State University – analyzed 90,194 Skills available in seven countries and found safety gaps that allow for malicious actions, abuse, and inadequate data usage disclosure. The researchers were able to publish Skills using the names of well-known companies, which makes trust-based attacks like phishing easier. By failing to check for changes in Skill server logic, Amazon makes it possible for a malicious developer to alter the response to an existing trigger phrase or to activate a previously approved dormant trigger phase. A Skill manipulated in this way could, for example, start asking for a credit card after passing Amazon’s initial review.
<p>Hacking a virtual assistant in millions of people’s homes is what malicious actors dream of doing. Much like a Trojan, Alexa Skills can be published under a fake identity, which could encourage the user to trust it fully, leaving them vulnerable to attack. Cybercriminals could potentially request credit card details or private data such as demographics and habits of the people in the house. </p> <p> </p> <p>Unfortunately it isn’t always obvious that this ‘Skills squatting’ is occurring, so it is best to only enable Alexa functions if you are confident with what they are doing. Meanwhile, Amazon should be looking into updating the feature so users are better protected.</p>