Show simple item record

dc.contributor.author	Bispham, Mary
dc.contributor.author	Kalim Sattar, Suliman
dc.contributor.author	Zard, Clara
dc.contributor.author	Ferrer-Aran, Xavier
dc.contributor.author	Edu, Jide
dc.contributor.author	Suarez-Tangil, Guillermo
dc.contributor.author	Such, Jose
dc.date.accessioned	2024-01-22T17:57:04Z
dc.date.available	2024-01-22T17:57:04Z
dc.date.issued	2023-07-19
dc.identifier.uri	https://hdl.handle.net/20.500.12761/1782
dc.description.abstract	This paper investigates the potential for spreading misinformation via third-party voice applications in voice assistant ecosystems such as Amazon Alexa and Google Assistant. Our work fills a gap in prior work on privacy issues associated with third-party voice applications, looking at security issues related to outputs from such applications rather than compromises to privacy from user inputs. We define misinformation in the context of third-party voice applications and implement an infrastructure for testing third-party voice applications using automated natural language interaction. Using our infrastructure, we identify, for the first time, several instances of misinformation in third-party voice applications currently available on the Google Assistant and Amazon Alexa platforms. We then discuss the implications of our work for developing measures to pre-empt the threat of misinformation and other types of harmful content in third-party voice assistants becoming more significant in the future.	es
dc.language.iso	eng	es
dc.title	Misinformation in third-party voice applications	es
dc.type	conference object	es
dc.conference.date	19-21 July 2023	es
dc.conference.place	Eindhoven, The Netherlands	es
dc.conference.title	International Conference on Conversational User Interfaces	*
dc.event.type	conference	es
dc.pres.type	paper	es
dc.type.hasVersion	VoR	es
dc.rights.accessRights	open access	es
dc.description.refereed	TRUE	es
dc.description.status	pub	es
