Voice is the next big medium of interaction for many technology companies as they seek to break down the intrinsic barriers to using technology in developing countries, through products like AI assistants and smart speakers. Even as our daily lives become more interconnected, literacy remains a big barrier to the growth of internet-enabled services. This is where voice comes in, levelling the playing field to some extent.

But even voice is not without its pitfalls. A team of researchers at the University of California, Berkeley has published a paper suggesting that it is possible to embed hidden voice commands within recordings of music or speech to control some smart assistants.

Apple HomePod Speakers, Powered by Siri

The commands can control popular voice assistants like Alexa and Siri without human listeners hearing any direct commands being issued. According to a recent report from The New York Times, the researchers had previously demonstrated that they could hide commands in white noise and YouTube videos to control smart devices remotely.

Essentially, when such a video or recording is played, the voice assistant hears specific commands while the user hears only someone talking or a song playing.
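The key constraint behind such attacks is that the malicious perturbation added to the audio must stay far quieter than the music or speech that masks it. As a toy sketch only (not the researchers' actual optimization, which crafts the perturbation against a specific speech-recognition model), the snippet below uses a stand-in tone for the song and random noise for the perturbation, just to show the kind of level difference involved:

```python
import numpy as np

fs = 16_000                                     # sample rate (Hz)
t = np.arange(fs) / fs                          # one second of audio
music = 0.8 * np.sin(2 * np.pi * 440 * t)       # stand-in for a song

rng = np.random.default_rng(0)
# A real attack solves an optimization problem against an ASR model;
# here random noise merely illustrates the perturbation's small scale.
perturbation = 0.025 * rng.standard_normal(len(music))
adversarial = np.clip(music + perturbation, -1.0, 1.0)

def db(x):
    """Mean signal power in decibels."""
    return 10 * np.log10(np.mean(x ** 2))

# The perturbation sits tens of dB below the music, so a listener
# perceives only the song while the assistant's model is steered.
print(db(perturbation) - db(music))
```

The point of the sketch is only the power gap: a perturbation this far below the carrier audio is effectively inaudible to people, yet a speech-recognition system processing the raw waveform can still be influenced by it.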

Last year, researchers at Princeton University and China's Zhejiang University demonstrated another such exploit, dubbed DolphinAttack, which used ultrasonic sounds to attack the voice recognition systems in popular digital assistants. The attack could be used to instruct smart devices to visit malicious websites, make phone calls, take a picture or send text messages. It had its limitations, however: it could only be carried out if the ultrasonic transmitter was close to the receiving device, though experts warned that more powerful systems were a possibility.
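The trick behind ultrasonic attacks of this kind is amplitude modulation: the voice command is modulated onto a carrier above human hearing, and the slight nonlinearity of a microphone's hardware demodulates it back into the audible band for the assistant. A minimal simulation of that principle (using a single tone as a stand-in for the voice command, and a simple quadratic term as an assumed microphone nonlinearity, not DolphinAttack's actual hardware model) looks like this:

```python
import numpy as np

fs = 192_000                          # sample rate high enough for ultrasound
t = np.arange(int(fs * 0.05)) / fs    # 50 ms of signal

f_voice = 1_000                       # stand-in for a voice-band component
f_carrier = 30_000                    # ultrasonic carrier, inaudible to humans

voice = np.sin(2 * np.pi * f_voice * t)
# Amplitude-modulate the "voice" onto the ultrasonic carrier.
transmitted = (1 + 0.5 * voice) * np.sin(2 * np.pi * f_carrier * t)

# Model a microphone's nonlinearity as x + a*x^2 (an illustrative
# assumption): squaring an AM signal demodulates it, so the voice-band
# component reappears inside the device even though the air carried
# only ultrasound.
received = transmitted + 0.2 * transmitted ** 2

def band_power(signal, f_lo, f_hi):
    """Total spectral magnitude between f_lo and f_hi (Hz)."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return spectrum[mask].sum()

# Voice-band energy is negligible in the air, but substantial after
# the nonlinear microphone stage.
print(band_power(transmitted, 500, 1500) < band_power(received, 500, 1500))
```

This is also why proximity mattered: ultrasound attenuates quickly in air, so the transmitter had to be near the target device for enough carrier energy to reach the microphone.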

Talking about the exploit, Nicholas Carlini, a fifth-year PhD student at UC Berkeley and one of the paper's co-authors, said that the team simply wanted to see whether they could make the previously demonstrated exploit even stealthier. When asked whether such an exploit could already be found in the wild, Carlini said, "My assumption is that the malicious people already employ people to do what I do."