Abstract
This paper presents novel attacks on voice-controlled digital assistants using nonsensical word sequences. We report a small-scale experiment demonstrating that malicious actors can gain covert access to a voice-controlled system by hiding commands in apparently nonsensical sounds whose meaning is opaque to humans. We identified several nonsensical word sequences that triggered a target command in a voice-controlled digital assistant but, as shown in tests with human experimental subjects, were incomprehensible to human listeners. This confirms the potential for hiding malicious voice commands to digital assistants or other speech-controlled devices in speech sounds that humans perceive as nonsensical. The paper also develops a second attack concept, in which unauthorised access to a voice-controlled system is gained using apparently unrelated utterances. In a proof-of-concept study, we show that actions in a voice-controlled digital assistant can be triggered by utterances that the system accepts as a target command even though, to a human listener, their meaning differs from that of the command.
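As a concrete illustration of the nonsense-attack concept, the sketch below enumerates sound-alike candidates for a target command and keeps those that a recognizer transcribes as the command. Everything in it is an assumption introduced for illustration: the target phrase, the NONSENSE_VARIANTS table, and transcribe_stub are hypothetical stand-ins, not the paper's actual method or data; a real pipeline would synthesize each candidate with text-to-speech, play it to the assistant, and check the assistant's response.

import itertools

# Hypothetical target wake phrase used for this sketch.
TARGET_COMMAND = "ok google"

# Toy table of similar-sounding nonsense replacements per target word
# (hypothetical values chosen only to illustrate the search).
NONSENSE_VARIANTS = {
    "ok": ["oak hay", "och eh", "oke"],
    "google": ["goo gull", "gew gol", "guh gool"],
}

# Hypothetical set of sound-alikes the recognizer fails to accept,
# included so the filtering loop below is non-trivial.
REJECTED = {"oke", "guh gool"}

def transcribe_stub(chunks):
    """Stand-in for the real pipeline (TTS-render the candidate, play it
    to the assistant, return its transcription). Simulates a recognizer
    that maps accepted sound-alikes back to the target words and hears
    rejected ones literally."""
    if any(c in REJECTED for c in chunks):
        return " ".join(chunks)
    collapse = {v: k for k, vs in NONSENSE_VARIANTS.items() for v in vs}
    return " ".join(collapse.get(c, c) for c in chunks)

# Enumerate candidate nonsense utterances and keep those the (stub)
# recognizer transcribes as the target command; a real experiment would
# then test the survivors on human listeners for incomprehensibility.
hits = []
for combo in itertools.product(*(NONSENSE_VARIANTS[w]
                                 for w in TARGET_COMMAND.split())):
    if transcribe_stub(combo) == TARGET_COMMAND:
        hits.append(" ".join(combo))

print(hits)  # e.g. ['oak hay goo gull', 'oak hay gew gol', ...]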
Citation
Bispham, M. K., Agrafiotis, I., & Goldsmith, M. (2019). Nonsense Attacks on Google Assistant and Missense Attacks on Amazon Alexa. In International Conference on Information Systems Security and Privacy (pp. 75–87). Science and Technology Publications, Lda. https://doi.org/10.5220/0007309500750087