Researchers Found 1000 Words That Can “Accidentally” Trigger Smart Speakers


Although smart speakers like Amazon Alexa, Google Home, and Apple HomePod can ease our daily lives, they also come with privacy issues. For instance, since smart speakers activate on specific trigger words, they can sometimes activate accidentally when they hear something that merely sounds similar. Now, researchers have found a thousand words that can “accidentally” trigger a smart speaker.

A team of researchers from Germany’s Ruhr-Universität Bochum and the Max Planck Institute for Security and Privacy recently conducted an experiment in which they found almost 1,000 words that can accidentally trigger a smart speaker into listening in on its users.

The researchers took devices with voice assistants like Alexa, Siri, and Google Assistant, along with three other assistants exclusive to the Chinese market, turned them on, and placed them in a room one by one. In the room, a TV played episodes of popular series like Game of Thrones, House of Cards, and Modern Family.

While the episodes played on the TV, the researchers waited for the virtual assistants to activate. To observe when a device was triggered, they monitored its LED indicator, which lights up every time the assistant activates.

Once an assistant activates, it uses local speech analysis software to check whether the words it heard were actually meant to wake it. If the device concludes that they were, it sends a recording of the clip to the company’s cloud servers for further analysis.
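To picture that two-stage design, here is a minimal, hypothetical sketch of such a pipeline in Python. None of the function names, the threshold, or the upload step come from Amazon, Google, or Apple; they are assumptions made purely for illustration.

```python
"""Hypothetical sketch of a two-stage wake-word pipeline.

Stage 1 is a cheap on-device detector; stage 2 is the vendor's cloud
verifier. All names, thresholds, and behaviors here are assumptions
for illustration, not any manufacturer's actual code.
"""

ON_DEVICE_THRESHOLD = 0.6  # assumption: a deliberately lenient cutoff


def local_wake_word_score(audio_chunk: bytes) -> float:
    """Stand-in for the on-device model that scores how much the audio
    resembles the wake word. A real detector would run a small acoustic
    model; here we just return a fixed score."""
    return 0.7


def send_to_cloud(audio_chunk: bytes) -> bool:
    """Stand-in for the cloud verifier that re-checks the clip with a
    larger model and confirms or rejects the activation."""
    print(f"uploading {len(audio_chunk)} bytes for verification...")
    return False  # an accidental trigger gets rejected upstream


def on_audio_chunk(audio_chunk: bytes) -> None:
    score = local_wake_word_score(audio_chunk)
    if score < ON_DEVICE_THRESHOLD:
        return  # below threshold: the audio never leaves the device
    # The privacy catch: the clip is uploaded *before* the cloud decides
    # whether the activation was genuine or a false accept.
    confirmed = send_to_cloud(audio_chunk)
    print("wake word confirmed" if confirmed else "false accept, discarded")


on_audio_chunk(b"\x00" * 16000)  # one second of silence at 16 kHz, 8-bit
```

The key point is the order of operations: by the time the more accurate cloud check runs, the recording has already left the device.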

Good from Engineering Perspective, Bad for Privacy

According to the researchers, the developers of these smart speakers have intentionally programmed them so that numerous words can activate the integrated voice assistant. These words are not the actual trigger words, yet they can wake the assistant right away.

As Dorothea Kolossa, one of the researchers on the team, put it, “the devices are intentionally programmed in a somewhat forgiving manner, because they are supposed to be able to understand their humans”.
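To see why a forgiving matcher admits near-misses, here is a toy Python illustration. Real wake-word detectors score acoustic features, not spelling, and both the threshold and the word list below are invented for the example; the point is only that a lenient cutoff lets similar-sounding words through.

```python
# Toy illustration of "forgiving" matching, NOT any vendor's algorithm:
# real detectors compare acoustics, not spelling. We score each heard
# word against the wake word by string similarity and accept anything
# above a deliberately lenient (assumed) threshold.
from difflib import SequenceMatcher

WAKE_WORD = "alexa"
LENIENT_THRESHOLD = 0.6  # assumption: low cutoff favors responsiveness


def similarity(heard: str) -> float:
    return SequenceMatcher(None, heard.lower(), WAKE_WORD).ratio()


for heard in ["alexa", "alexander", "lexus", "hello"]:
    score = similarity(heard)
    verdict = "TRIGGERS" if score >= LENIENT_THRESHOLD else "ignored"
    print(f"{heard!r}: score {score:.2f} -> {verdict}")
```

Lowering the threshold makes the device more likely to catch a genuine “Alexa”, but, as the experiment shows, it also raises the rate of accidental activations.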

Thorsten Holz, another researcher on the team and a professor at Ruhr-Universität Bochum, added: “From a privacy point of view, this is of course alarming, because sometimes very private conversations can end up with strangers. From an engineering point of view, however, this approach is quite understandable, because the systems can only be improved using such data. The manufacturers have to strike a balance between data protection and technical optimization.”

Source: WION