How much do you trust me? Learning a case-based model of inverse trust

Abstract

Robots can be important additions to human teams if they improve team performance by providing new skills or improving existing ones. However, to get the full benefit of a robot, the team must trust it and use it appropriately. We present an agent algorithm that allows a robot to estimate its own trustworthiness and adapt its behavior in an attempt to increase trust. The algorithm uses case-based reasoning to store previous behavior adaptations and reuses this information to perform future adaptations. We compare case-based behavior adaptation to behavior adaptation that does not learn, and show that the case-based approach significantly reduces the number of behaviors that must be evaluated before a trustworthy behavior is found. Our evaluation takes place in a simulated robotics environment and involves a movement scenario and a patrolling/threat-detection scenario.
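The core mechanism the abstract describes, a case base of previous behavior adaptations consulted before adapting anew, can be sketched roughly as follows. This is a minimal illustrative sketch in Python, assuming a feature-vector encoding of trust situations and nearest-neighbor retrieval; the class names, trust features, and similarity measure are assumptions for exposition, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

# Illustrative sketch only: names, the trust-feature encoding, and the
# similarity measure are assumptions, not the paper's implementation.

@dataclass
class Case:
    """One stored behavior adaptation: the trust situation it arose in
    and the behavior changes that eventually proved trustworthy."""
    trust_features: Dict[str, float]  # e.g., rates of operator overrides
    adaptation: Dict[str, float]      # behavior-parameter changes that worked


class CaseBase:
    """Retrieve-and-retain store of past adaptations (hypothetical)."""

    def __init__(self) -> None:
        self.cases: List[Case] = []

    @staticmethod
    def _similarity(a: Dict[str, float], b: Dict[str, float]) -> float:
        # Assumed metric: inverse Euclidean distance over shared features.
        shared = a.keys() & b.keys()
        dist = sum((a[k] - b[k]) ** 2 for k in shared) ** 0.5
        return 1.0 / (1.0 + dist)

    def retrieve(self, trust_features: Dict[str, float]) -> Optional[Case]:
        """Return the stored case most similar to the current situation."""
        if not self.cases:
            return None
        return max(
            self.cases,
            key=lambda c: self._similarity(c.trust_features, trust_features),
        )

    def retain(self, trust_features: Dict[str, float],
               adaptation: Dict[str, float]) -> None:
        """Store an adaptation once it has been judged trustworthy."""
        self.cases.append(Case(trust_features, adaptation))


# Usage sketch: when the robot's inverse-trust estimate drops, try the
# adaptation from the most similar past case before searching anew.
case_base = CaseBase()
case_base.retain({"overrides": 0.4, "idle_time": 0.1}, {"speed": -0.2})

situation = {"overrides": 0.35, "idle_time": 0.15}
best = case_base.retrieve(situation)
if best is not None:
    print("Candidate adaptation to evaluate first:", best.adaptation)
```

Under these assumptions, retrieval lets the robot start from an adaptation that restored trust in a similar past situation rather than enumerating candidate behaviors from scratch, which is the intuition behind the reduction in evaluated behaviors that the abstract reports.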

Citation (APA)

Floyd, M. W., Drinkwater, M., & Aha, D. W. (2014). How much do you trust me? Learning a case-based model of inverse trust. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 8765, 125–139. https://doi.org/10.1007/978-3-319-11209-1_10
