Robots are added to human teams to increase the team's skills or capabilities. To gain the acceptance of their human teammates, it may be important for a robot to behave in a manner that the teammates consider trustworthy. We present an approach that allows a robot's behavior to be adapted so that it behaves in a trustworthy manner. The adaptation is guided by an inverse trust metric that the robot uses to estimate the trust a human teammate has in it. We evaluate our method in a simulated robotics domain and demonstrate how the agent can adapt to a teammate's preferences. © 2014 Springer International Publishing.
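The abstract's core loop can be illustrated with a minimal sketch. This is a hypothetical illustration only, not the paper's actual metric or algorithm: it assumes trust can be estimated from simple interaction signals (successes, failures, and teammate interventions), and that the robot switches to an alternative behavior when the estimate falls below a threshold. The class name `InverseTrustAgent` and all parameters are invented for illustration.

```python
class InverseTrustAgent:
    """Hypothetical sketch: adapt behavior when an inverse trust estimate drops.

    Assumptions (not from the paper): trust is a running score updated from
    discrete interaction outcomes, and adaptation means moving to the next
    candidate behavior once the score crosses a threshold.
    """

    def __init__(self, behaviors, threshold=-2.0):
        self.behaviors = list(behaviors)  # candidate behaviors, ordered by preference
        self.current = 0                  # index of the active behavior
        self.trust = 0.0                  # running estimate of the teammate's trust
        self.threshold = threshold        # below this, the behavior is deemed untrusted

    def observe(self, outcome):
        """Update the estimate from one interaction and adapt if needed.

        outcome: 'success' (task completed), 'failure' (task failed), or
        'intervention' (the teammate took over -- a strong distrust signal).
        """
        deltas = {"success": 1.0, "failure": -1.0, "intervention": -2.0}
        self.trust += deltas[outcome]
        if self.trust <= self.threshold and self.current + 1 < len(self.behaviors):
            self.current += 1  # abandon the behavior the teammate does not trust
            self.trust = 0.0   # restart the estimate for the new behavior

    @property
    def behavior(self):
        return self.behaviors[self.current]


# Example: repeated negative signals cause a behavior switch.
agent = InverseTrustAgent(["aggressive", "cautious"])
agent.observe("failure")       # trust drops to -1.0
agent.observe("intervention")  # trust drops to -3.0, below threshold: adapt
print(agent.behavior)          # -> cautious
```

The key design point, consistent with the abstract, is that the robot never asks the teammate for a trust rating: it infers (estimates) trust from observable interaction and uses that estimate to drive behavior adaptation.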
Floyd, M. W., Drinkwater, M., & Aha, D. W. (2014). Adapting autonomous behavior using an inverse trust estimation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8579 LNCS, pp. 728–742). Springer Verlag. https://doi.org/10.1007/978-3-319-09144-0_50