Partially Observable Markov Decision Processes (POMDPs) enable optimized decision making by robots, agents, and other autonomous systems. This quantitative optimization can also be a limitation in human-agent interaction, as the resulting autonomous behavior, while possibly optimal, is often impenetrable to human teammates, leading to improper trust and, subsequently, disuse or misuse of such systems [1].
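The "quantitative optimization" the abstract refers to rests on the POMDP belief update, in which an agent maintains a probability distribution over hidden states. A minimal sketch follows; the state space, transition model `T`, and observation model `O` are illustrative assumptions, not taken from the cited paper.

```python
import numpy as np

def belief_update(b, a, o, T, O):
    """Bayes filter at the core of POMDP decision making:
    b'(s') ∝ O[a][s'][o] * sum_s T[a][s][s'] * b(s)."""
    predicted = T[a].T @ b              # predict next-state distribution
    unnormalized = O[a][:, o] * predicted
    return unnormalized / unnormalized.sum()

# Toy model: two hidden states, one action, two observations.
T = np.array([[[0.9, 0.1],              # T[a][s][s']
               [0.2, 0.8]]])
O = np.array([[[0.8, 0.2],              # O[a][s'][o]
               [0.3, 0.7]]])
b = np.array([0.5, 0.5])                # uniform prior belief

b_next = belief_update(b, a=0, o=0, T=T, O=O)
```

An optimal policy then maps each belief `b` to the action maximizing expected long-run reward, which is the behavior the paper argues can be opaque to human teammates.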
CITATION STYLE
Wang, N., Pynadath, D. V., Hill, S. G., & Merchant, C. (2017). The dynamics of human-agent trust with POMDP-generated explanations. In Lecture Notes in Computer Science (Vol. 10498 LNAI, pp. 459–462). Springer. https://doi.org/10.1007/978-3-319-67401-8_58