DolphinAttack could allow hackers to take over AI voice assistants

Scientists in China have found that ultrasound frequencies inaudible to the human ear could be used to issue commands to smart voice assistants such as Alexa, Siri and Cortana.

Dubbed DolphinAttack, the technique was successfully tested by researchers at Zhejiang University against several products, including Alexa, Cortana, Google Now, Huawei HiVoice, Samsung S Voice, and Siri, according to their research paper.

The scientists developed a program to shift normal voice commands into frequencies too high for humans to hear, using around $3 of equipment, including an external battery, an amplifier, and an ultrasonic transducer.
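
To illustrate the general idea, the sketch below shows how an audible signal can be amplitude-modulated onto an ultrasonic carrier so that the transmitted sound sits above the range of human hearing. The carrier frequency, sample rate and the stand-in "voice" tone are illustrative assumptions for this example, not the researchers' actual parameters or code.

```python
# Illustrative sketch only: modulate an audible signal onto an ultrasonic carrier.
# The values below (carrier frequency, sample rate, test tone) are assumptions,
# not those used by the Zhejiang University researchers.
import numpy as np

SAMPLE_RATE = 192_000   # Hz; high enough to represent a ~25 kHz carrier
CARRIER_HZ = 25_000     # above the ~20 kHz upper limit of human hearing
DURATION_S = 1.0

t = np.arange(int(SAMPLE_RATE * DURATION_S)) / SAMPLE_RATE

# Stand-in for a recorded voice command: a simple 440 Hz test tone.
voice = 0.5 * np.sin(2 * np.pi * 440 * t)

# Standard amplitude modulation: the audible command rides on the inaudible carrier.
carrier = np.sin(2 * np.pi * CARRIER_HZ * t)
modulated = (1.0 + voice) * carrier

# Normalise before feeding the samples to an amplifier and ultrasonic transducer.
modulated /= np.max(np.abs(modulated))
```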

In tests, the attack got an iPhone to make a call and open FaceTime, and ordered an Amazon Echo to open the door of a house. The researchers demonstrated the attack in a YouTube video.

"By injecting a sequence of inaudible voice commands, we show a few proof-of-concept attacks, which include activating Siri to initiate a FaceTime call on iPhone, activating Google Now to switch the phone to the airplane mode, and even manipulating the navigation system in an Audi automobile. We propose hardware and software defense solutions," said the scientists.

However, the attack only works if the target device is within two metres of the ultrasonic transmitter. The device must also be unlocked, with the voice assistant activated.

To prevent such attacks, the researchers urged makers of smart home devices to stop their products reacting to commands in the ultrasound range.

“We propose hardware and software defence solutions. We validate that it is feasible to detect DolphinAttack by classifying the audios using supported vector machine (SVM), and suggest to re-design voice controllable systems to be resilient to inaudible voice command attacks,” they said.
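
As a rough illustration of the detection approach the researchers describe, the sketch below trains a support vector machine to separate genuine commands from injected ones. The two-band energy features and the synthetic training data are assumptions made for the example; the paper's actual feature set is not reproduced here.

```python
# Minimal sketch of SVM-based detection of inaudible-command attacks.
# The features (low- vs high-band energy) and synthetic data are assumptions
# for illustration, not the researchers' actual pipeline.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def band_energy_features(n_samples, high_band_level):
    """Toy feature vectors: [low-band energy, high-band energy]."""
    low = rng.normal(1.0, 0.1, n_samples)
    high = rng.normal(high_band_level, 0.1, n_samples)
    return np.column_stack([low, high])

# Genuine voice commands concentrate energy in the audible band; demodulated
# ultrasonic commands tend to leave tell-tale energy in higher bands.
genuine = band_energy_features(200, high_band_level=0.2)
attack = band_energy_features(200, high_band_level=0.8)

X = np.vstack([genuine, attack])
y = np.array([0] * len(genuine) + [1] * len(attack))

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([[1.0, 0.15], [1.0, 0.85]]))  # expected: [0 1]
```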

Ofer Maor, director of enterprise solutions at Synopsys, told SC Media UK that while many of the commands we use with Alexa, Siri and similar devices may cause little real harm if hijacked in this way, such attacks may become more of an issue the more we integrate these devices with our smart homes.

“For instance, we see more and more ‘smart locks’ offering voice integration with commands such as ‘Alexa, open the door’,” he said.

“While the convenience factor here is clear, being able to send such embedded commands could allow us to open the door or a gate or any other sort of mechanism designed to deter intruders. Another layer of such commands is integration with security cameras and security alarms, which again, can be turned on and off as well as configured via voice commands.”

Laurie Mercer, senior solution architect, international, at Veracode, said that experts have long stressed the importance of building Internet of Things (IoT) devices to be secure by design. ‘Things’ that use voice recognition, such as Siri, Alexa and other digital assistants, must always listen for voice commands in order to function. This introduces a relatively new attack vector – audio waves.

“It is likely that audio and voice based security controls will evolve as security researchers and hackers begin to explore vulnerabilities, like DolphinAttack, in this new channel. Building in security by design and the ability to adapt to new threats will help IoT producers use security as a competitive advantage,” he said.

Pedro Abreu, senior vice president and chief strategy officer at ForeScout Technologies, said that DolphinAttack is able to silently give commands to IoT devices in ways that users and, more importantly, device manufacturers likely never predicted.

“IoT devices are inherently insecure – while manufacturers are trying to add security to their products, few design security from scratch. This hack in particular was not caused by flawed code and would therefore be almost impossible to detect until the device's suspicious behaviour was noticed,” he said.

“This is why the burden to secure IoT devices will fall on the enterprise security organisations. They will need to look to security technologies that work at the network level and provide full visibility of their IoT devices and the behaviors of those devices while connected to the network. This provides enterprises the ability to detect when a device behaves suspiciously and automatically limit the actions the device can take based on the severity of the problem."