Abstract
The boom in generative artificial intelligence (AI) and the continuing growth of Voice Assistants (VAs) suggest their trajectories will converge. This conjecture aligns with the development of AI-driven conversational agents, which aim to utilise advanced natural language processing (NLP) methods to enhance the capabilities of voice assistants. However, design guidelines for VAs prioritise efficiency by advocating concise answers. This conflicts with known challenges of generative AI, such as inaccuracies and misinterpretation, because shorter responses may not provide users with adequately meaningful information. AI-VA systems can adapt drivers of trust formation, such as references and authorship, to improve credibility. A better understanding of user behaviour when using such systems is needed to develop revised design recommendations for AI-powered VA systems. This paper reports an online survey of 256 participants residing in the U.K. and nine follow-up interviews, in which user behaviour was investigated to identify drivers of trust in the context of obtaining digital information from a generative AI-based VA system. Although adding references is promising as a tool for increasing trust in systems producing text, we found no evidence that the inclusion of references in a VA response contributed to the perceived reliability of, or trust in, the system. We further examine variables driving user trust in AI-powered VA systems.
Original language | English |
---|---|
Pages | 110-119 |
Number of pages | 10 |
DOI | |
Status | Published - 2023 |
Published externally | Yes |
Event | 36th Annual British Human-Computer Interaction Conference, HCI 2023 - York, United Kingdom. Duration: 28 Aug 2023 → 29 Aug 2023 |
Conference
Conference | 36th Annual British Human-Computer Interaction Conference, HCI 2023 |
---|---|
Country/Territory | United Kingdom |
City | York |
Period | 28/08/23 → 29/08/23 |