Reports have surfaced of attack vectors that exploit vulnerabilities in Cortana for a variety of nefarious purposes, such as retrieving confidential information, logging into a locked device and even executing code from the lock screen. Although it was initially reported to Microsoft a couple of months ago, CVE-2018-8140 was only fixed during this Patch Tuesday. Please see below for comments from several cybersecurity experts.
Larry Trowell, Associate Principal Consultant at Synopsys:
“We’re seeing yet another reminder of the potential security and privacy risks of our technology-driven and always-connected world. This instance reminds me of the previous Siri hack that allowed attackers to unlock an iPhone by activating a task on the device. In the case of Cortana, the CVE allows users to access the search feature of the operating system; the smart assistant is pretty much just the vector by which to reach it. These assistants are given the same (and in some cases more) access to the system as users. The use of this feature by users and attackers while the system is locked hasn’t been completely thought through, as we can easily see from the Cortana situation.
While a fix for the vulnerability has been issued, there are still other areas in which these assistants can be used to carry out an attack. For example, I see no reason why the dolphin attacks that came to light last year, which triggered cell phone smart assistants to call numbers and launch apps, couldn’t be modified to attack a distracted user. The software is neat, interesting, and fun to use. It also opens up a lot of areas that possibly haven’t been thought through properly.
In general, any time you increase the attack surface, you increase risk. In this case, the user doesn’t have to disable the smart assistant completely, but disabling it for the lock screen is advisable. Also, educate yourself on how the assistant can be used.
It’s widely understood that applications such as smart assistants power our lives, or at least make them more convenient. It’s less obvious that much of the software and many of the systems in use today aren’t designed and developed with the appropriate level of security in mind. This is a challenge felt by the entire technology ecosystem. Resolving it starts by building more secure software from the start.
I’d say that smart assistants are fine to continue using, but be careful how, when, and where you’re employing them. Here are a few simple pieces of advice to remember:
- Don’t enable smart assistants on locked devices
- Don’t provide smart assistants with personal information
- When possible, limit the assistant’s permission to applications you’re accessing”
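Acting on the first of these recommendations is straightforward on Windows 10: the “Allow Cortana above lock screen” group policy maps to the AllowCortanaAboveLock registry value. As a minimal sketch, and assuming an elevated (administrator) session and that the policy path below matches your Windows build, the following Python script automates that change; treat it as illustrative rather than definitive:

```python
# Minimal sketch: disable Cortana on the Windows lock screen by setting the
# "Allow Cortana above lock" policy value. Assumes Windows 10 and that the
# script runs with administrative privileges (HKLM writes require elevation).
import winreg

POLICY_PATH = r"SOFTWARE\Policies\Microsoft\Windows\Windows Search"


def disable_cortana_above_lock() -> None:
    # Create the policy key if it does not already exist, then set the value
    # to 0 (0 = Cortana disabled on the lock screen, 1 = allowed).
    with winreg.CreateKeyEx(
        winreg.HKEY_LOCAL_MACHINE, POLICY_PATH, 0, winreg.KEY_SET_VALUE
    ) as key:
        winreg.SetValueEx(key, "AllowCortanaAboveLock", 0, winreg.REG_DWORD, 0)


if __name__ == "__main__":
    disable_cortana_above_lock()
    print("Cortana disabled above the lock screen; sign out or reboot to apply.")
```

The same setting can be toggled from Cortana’s settings page or via the Group Policy editor; the script merely automates the registry change.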
Lee Munson, Security Researcher at Comparitech.com:
“Digital assistants, irrespective of how intelligent (or otherwise) they seem to be, potentially offer up both security and privacy weaknesses. While the former tends to garner the most headlines, especially threats such as the recent discovery by McAfee that Cortana can bypass the lock screen on a Windows device, it is the latter that should be of most concern to consumers.
Even though the ability to unlock or otherwise manipulate a device is a concern, such attacks almost certainly require physical access to the machine in question. Home users would almost certainly have to be victims of another crime (break-ins are thankfully rare) before that scenario could play out, and organisations should, hopefully, have already nailed down physical security and have a visitor procedure in place.
On the privacy front, things are a little scarier as we know that some, but not all, voice assistants are capable of recording audio and then playing it back under command. Good physical security also makes this scenario seem unlikely, but home users may have real concerns about their friends, family or housemates being able to access their recent text messages, for example, and businesses may wish to consider the insider threat before placing such devices in areas where sensitive information could be overheard.
Ultimately, however, voice assistants are as secure and as private as the owner wishes them to be. Disabling the ability to listen in would be an effective mitigation, albeit to the complete detriment of functionality and usability, but careful placement and awareness of their presence before saying anything that shouldn’t be repeated offer a simpler solution.”
Lane Thames, Senior Security Researcher at Tripwire:
“From a computer or computing device perspective, there’s not much more exposure than with any other system, because voice control is just another human input device (HID). Like other HIDs, such as a keyboard, operating the device requires (except in very special cases) one to be within physical proximity. As such, the risk assumed by the voice control attack surface is limited.
However, from an application perspective, the exposure is huge compared to a traditional application such as email or web browsing, and this is due to the “smart assistance” provided by this technology. Almost by definition, an assistant has to perform all kinds of functionality, even functionality that hasn’t been implemented yet. All of these assistant technologies, such as Cortana, Alexa, and Google Home, generally speaking have very limited “smartness” local to the device. Instead, the smartness comes from the service’s backend cloud, which uses technologies such as Big Data, Artificial Intelligence, Machine Learning, massive search databases, etc. This is where the functionality of the assistant comes from. You might say the assistant is just a messenger.
Plus, the real benefit of gaining virtually any type of assistant functionality you need comes, just like most of our modern computing trends, from a massive community of developers and technologists who provide third-party services that these devices can use or consume data from.
These backend services, from both providers and third parties, are where the true attack surface for these devices comes from, not necessarily the act of using voice.
Once again, we have to see that voice control in this application is really no different from a keyboard or mouse, and computers can be compromised through those devices too. The worrisome attack surface for these assistants comes from the applications and services they use in the cloud backend. Since this assistant technology is so new and still evolving, we don’t necessarily have all the answers regarding how it should be used in a highly secure manner, because it is a really complex system.
I want to go back specifically to the Cortana issue. Let’s turn this around and ask: was CVE-2018-8140 a ‘real’ vulnerability, or was it just a design flaw? We would need to know more details about the vulnerability to answer this, but we can think about it. Should Cortana be listening when the screen/system is locked? Should it be listening if you put the computer to sleep? You’ll get different responses from different people with different use cases. For example, we could conceive of a scenario where “voice printing” is used to authenticate a blind user who needs Cortana to do something for him or her regardless of whether the system is locked. These are design details that are hard to solve universally. In this case, Cortana was doing things when the system was locked that it probably shouldn’t have been doing, and Microsoft viewed it seriously enough to treat it as a true vulnerability rather than a simple design flaw.
Good security has traditionally been about “if you don’t need it, turn it off”. Disabling any non-essential service or system reduces the attack surface, and this approach has been used for many years in traditional cybersecurity. As always, this is a question of risk exposure versus the gain one gets from using the underlying technology.”
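To make Thames’s “messenger” point concrete, consider a hypothetical thin-client loop in which the device merely relays a transcribed utterance to a cloud backend and returns the reply. The endpoint URL, JSON schema, and function names below are invented for illustration and do not correspond to any real assistant’s API:

```python
# Hypothetical sketch of the "assistant as messenger" architecture: the local
# device does little more than capture a command and relay it to a cloud
# backend, where the actual "smartness" (NLP, search, third-party skills)
# lives. The endpoint and JSON schema are invented for illustration only.
import json
import urllib.request

BACKEND_URL = "https://assistant.example.com/v1/interpret"  # hypothetical


def relay_command(transcribed_text: str) -> str:
    """Forward locally transcribed speech to the backend and return its reply.

    The device itself performs no interpretation: all parsing, intent
    resolution, and skill dispatch happen server-side, which is why the
    backend services, not the microphone, form the main attack surface.
    """
    payload = json.dumps({"utterance": transcribed_text}).encode("utf-8")
    request = urllib.request.Request(
        BACKEND_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        return json.loads(response.read())["reply"]


if __name__ == "__main__":
    print(relay_command("what is on my calendar today"))
```

Everything that makes the assistant “smart”, and everything that makes it attackable at scale, sits behind the backend endpoint rather than on the device itself.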
The opinions expressed in this post belong to the individual contributors and do not necessarily reflect the views of Information Security Buzz.