Voice of concern: Smart assistants are creating new openings for hackers



More than a year ago, Amichai Shulman, an adjunct professor at the Technion Israel Institute of Technology, challenged his computer science students to go toe-to-toe with the pros, asking them to find security flaws in Cortana, Microsoft’s voice assistant. 

It didn’t take Shulman’s students long to find alarming vulnerabilities in Cortana, which runs on laptops, desktop computers, watches and phones. The problems included access that allowed a potential hacker to take over a Windows device using only voice commands and an exploit that could direct a computer to download malware even when it was locked.

‘I took undergraduate students, and in three months, they were able to come up with a whole wealth of vulnerabilities,’ Shulman said.

Shulman’s college assignment, disclosed at the Black Hat cybersecurity conference in Las Vegas on Wednesday, underscores the growing risk voice assistants and smart speakers pose as they show up in more and more homes. In the first quarter of 2018 alone, 9.2 million smart speakers shipped, with the majority of them featuring Amazon’s Alexa or Google Assistant. The market is growing smartly and researchers expect 55 percent of US households to have a digital voice assistant by 2022.

Each of them, it turns out, is a potential gateway for hackers to break into your home.


Too many voices

As Shulman and his partner Tal Be’ery, both security researchers, were finding these Cortana vulnerabilities, researchers from McAfee were independently discovering the same flaws. The parallel discoveries highlight how intensely researchers are now turning their attention to voice assistants.

‘It is too ripe of an environment,’ said Gary Davis, chief consumer security evangelist at McAfee. ‘There are too many of these going into homes for them not to be considered.’ 

Davis says the proliferation of voice assistants raises the likelihood they will be used in attacks in the future.

Microsoft quickly fixed the vulnerabilities that both Shulman and McAfee discovered by disabling the ability to activate Cortana on locked devices.

‘Customers who applied our June 2018 updates are protected against CVE-2018-8140,’ a Microsoft spokesperson said in a statement.

Open talks

Still, these discoveries are only the beginning of the vulnerabilities likely to surface in voice assistants.

In the last year, researchers have focused efforts on Amazon’s Echo, which features Alexa, one of the most popular voice assistants available. In April, researchers from security testing firm Checkmarx were able to develop an Alexa app, known as a ‘skill,’ that allowed potential hackers to turn the Echo into a listening device.


Amazon fixed the issue shortly after it was notified.

‘Amazon takes customer security seriously and we have full teams dedicated to ensuring the safety and security of our products,’ an Amazon spokesperson said in a statement. ‘We have taken measures to make Echo secure.’ 

Google didn’t respond to a request for comment.

Last September, researchers from China found that they could use ultrasonic frequencies to send voice assistants commands that humans couldn’t hear.
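Broadly, attacks of this kind amplitude-modulate an ordinary spoken command onto an ultrasonic carrier; nonlinearities in microphone hardware can shift the signal back into the audible band, where the assistant hears a command while people in the room hear nothing. The Python sketch below is purely illustrative and is not the researchers’ code; the sample rate, carrier frequency, modulation depth and function name are all assumptions made for the example.

    import numpy as np

    def modulate_ultrasonic(command, fs=192_000, carrier_hz=25_000.0):
        """Amplitude-modulate a voice command (samples in [-1, 1]) onto an
        ultrasonic carrier, pushing it above the range of human hearing."""
        t = np.arange(len(command)) / fs              # time axis at the output sample rate
        carrier = np.sin(2 * np.pi * carrier_hz * t)
        depth = 0.8                                   # keep depth < 1 to avoid overmodulation
        return (1.0 + depth * command) * carrier

    # Stand-in for a recorded command: a one-second 440 Hz tone.
    fs = 192_000
    t = np.arange(fs) / fs
    fake_command = 0.5 * np.sin(2 * np.pi * 440.0 * t)
    inaudible = modulate_ultrasonic(fake_command, fs)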

While many of these vulnerabilities were reported and fixed, more will pop up, said Candid Wueest, a principal threat researcher at Symantec.

‘Skills and actions are probably one of the most prevalent attack vectors we’ll see,’ Wueest said. ‘There will be others that can be found in the future that we probably haven’t even heard of yet.’

In his research, Wueest said he’s seen many different types of attacks targeted at voice assistants. There are some that even rely on people being nice to their voice assistant.

‘If there’s a game called “Quiz,” you can make your own game to be something called “Quiz Game Please,” and if someone is asking politely, they might be getting the other application without even knowing,’ he said.

Once a victim has downloaded the malicious voice application, the developer would have access to data like voice recordings, which could be used for blackmail, he said.
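As a toy illustration of the naming trick Wueest describes, the Python sketch below uses invented skill names and an invented matching rule, not any vendor’s real resolution logic; a naive ‘longest matching invocation name wins’ rule is all it takes for a politely phrased request to land on the squatted skill.

    # One legitimate skill and one "squatted" name that absorbs polite phrasing.
    REGISTERED_SKILLS = ["quiz", "quiz game please"]

    def resolve_skill(utterance):
        """Return the registered invocation name that best matches the utterance;
        a naive longest-match rule hands the polite phrasing to the squatter."""
        text = utterance.lower()
        matches = [name for name in REGISTERED_SKILLS if name in text]
        return max(matches, key=len) if matches else ""

    print(resolve_skill("open quiz"))               # -> quiz (the intended skill)
    print(resolve_skill("open quiz game please"))   # -> quiz game please (the malicious one)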

Shulman’s discovery allowed Cortana to be directed to non-secure websites via voice commands. From there, a hacker could deliver an attack because the page lacks encryption.

Even after Microsoft fixed the problem, Shulman said he discovered it again just by saying the commands differently.

‘So, instead of saying “Go to BBC.com,” you would say, “Launch BBC,” and it would open the non-SSL site in the background,’ he said, referring to a type of security for internet connections. ‘We were able to find many, many sentences that repeat the same behavior.’
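One straightforward defensive idea, sketched below in Python strictly as an illustration rather than as how Microsoft actually addressed the issue, is to refuse or upgrade any voice-initiated navigation that isn’t already using HTTPS.

    from urllib.parse import urlparse

    def vet_voice_navigation(url):
        """Upgrade plain-HTTP navigation to HTTPS and reject non-web schemes;
        unencrypted pages can be modified in transit by an attacker on the network."""
        parsed = urlparse(url)
        if parsed.scheme == "https":
            return url                                        # already encrypted
        if parsed.scheme == "http":
            return parsed._replace(scheme="https").geturl()   # try the TLS version instead
        raise ValueError("refusing non-web scheme: %r" % parsed.scheme)

    print(vet_voice_navigation("http://bbc.com"))    # -> https://bbc.com
    print(vet_voice_navigation("https://bbc.com"))   # -> https://bbc.com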

Curb your voice enthusiasm

Many of the vulnerabilities for voice assistants represent the typical growing pains for an emerging technology.

As more skills and applications continue to pop up, so will openings for potential attacks, Wueest said.

Developers have expressed interest in allowing voice assistants to send payments, and once you get money involved, the Symantec researcher said, cybercriminals will flock to it.

 

Voice assistants are also popping up on nearly every device, controlling our televisions, our cars and even our bathrooms. When they’re everywhere, there’s that much more for security researchers to look into, Davis said.

‘As we get more comfortable with voice assistants, whether they’re embedded in our computers or a device in our homes, the more our guard will be dropped,’ he said.

It’s why Shulman suggested that not everything needs to be done by voice commands.

‘You take a concept that is very helpful with handheld devices, and you try to replicate it,’ Shulman said. ‘In which, it is not extremely helpful, and as we’ve shown, very dangerous.’ 
