Measuring Trust in Children's Speech: Towards Responsible Robot-Supported Information Search


Abstract

Children use conversational agents, such as Alexa or Siri, to search for information, but they also tend to trust these agents, which may influence how they assess that information. It is challenging for children to judge the veracity of information retrieved from the internet and social media, possibly more so when they trust a voice agent excessively. In this project, I propose to design child-robot interactions that empower children to maintain a critical attitude, by implementing real-time trust monitoring and robot behavioural interventions in cases of high trust. First, we need to be able to measure children's level of trust in the robot in real time during the interaction, in order to reason about when excessive trust may be occurring. Second, we need to study which behavioural interventions by the robot foster critical attitudes toward the provided information. By adapting the robot's behaviour when excessive trust occurs, I aim to contribute to more responsible interactions between children and robots.

Citation (APA)

Velner, E. (2023). Measuring Trust in Children’s Speech: Towards Responsible Robot-Supported Information Search. In ACM/IEEE International Conference on Human-Robot Interaction (pp. 748–750). IEEE Computer Society. https://doi.org/10.1145/3568294.3579973
