Toward establishing trust in adaptive agents

  • Alyssa Glass
  • Deborah L. McGuinness
  • Michael Wolverton
  • 88


    Mendeley users who have this article in their library.
  • 65


    Citations of this article.


As adaptive agents become more complex and take on increasing autonomy in their users' lives, it becomes more important for users to trust and understand these agents. Little work has been done, however, to study what factors influence the level of trust users are willing to place in these agents. Without trust in the actions and results produced by these agents, their use and adoption as trusted assistants and partners will be severely limited. We present the results of a study among test users of CALO, one such complex adaptive agent system, to investigate themes surrounding trust and understandability. We identify and discuss eight major themes that significantly impact user trust in complex systems. We further provide guidelines for the design of trustable adaptive agents. Based on our analysis of these results, we conclude that the availability of explanation capabilities in these agents can address the majority of trust concerns identified by users.


