Misinformation in third-party voice applications
Date
2023-07-19
Abstract
This paper investigates the potential for spreading misinformation via third-party voice applications in voice assistant ecosystems such as Amazon Alexa and Google Assistant. Our work addresses a gap in prior research on privacy issues associated with third-party voice applications by examining security issues related to the outputs of such applications rather than compromises to privacy arising from user inputs. We define misinformation in the context of third-party voice applications and implement an infrastructure for testing third-party voice applications through automated natural-language interaction. Using this infrastructure, we identify, for the first time, several instances of misinformation in third-party voice applications currently available on the Google Assistant and Amazon Alexa platforms. We then discuss the implications of our work for developing measures to pre-empt the threat of misinformation and other types of harmful content in third-party voice applications before that threat becomes more significant.