Detecting hypothesis space misspecification in robot learning from human input

Abstract

Learning from human input has enabled autonomous agents to perform increasingly complex tasks that are otherwise difficult to carry out automatically. To this end, recent works have studied how robots can incorporate such input (like demonstrations or corrections) into objective functions describing the desired behaviors. While these methods have shown progress in a variety of settings, from semi-autonomous driving, to household robotics, to automated airplane control, they all suffer from the same crucial drawback: they implicitly assume that the person's intentions can always be captured by the robot's hypothesis space. We call attention to the fact that this assumption is often unrealistic, as no model can completely account for every possible situation ahead of time. When the robot's hypothesis space is misspecified, human input can be unhelpful, or even detrimental, to the way the robot performs its tasks. Our work tackles this issue by proposing that the robot should first explicitly reason about how well its hypothesis space can explain human inputs, and then use that situational confidence to inform how it should incorporate them.
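To make the last idea concrete, here is a minimal sketch (not the paper's algorithm, and all names and parameters are illustrative assumptions): each hypothesis scores the observed human input with a Boltzmann-style likelihood, the "situational confidence" is taken as how well the best hypothesis explains that input, and the belief update over hypotheses is attenuated, or skipped entirely, when no hypothesis explains it well.

```python
import numpy as np

def input_likelihood(human_input_cost, candidate_costs, beta=1.0):
    """Boltzmann-style likelihood of the observed human input under one hypothesis.

    human_input_cost: cost this hypothesis assigns to the observed input.
    candidate_costs:  costs this hypothesis assigns to the inputs the person
                      could have given instead (including the observed one),
                      used for normalization.
    """
    logits = -beta * np.asarray(candidate_costs)
    return np.exp(-beta * human_input_cost) / np.sum(np.exp(logits))

def situational_confidence(likelihoods):
    """Confidence that the hypothesis space can explain the input:
    high if at least one hypothesis makes the observed input likely."""
    return float(np.max(likelihoods))

def update_belief(prior, likelihoods, confidence, threshold=0.1):
    """Bayesian update over hypotheses, gated by situational confidence.
    If no hypothesis explains the input well, keep the prior (or flag the
    input as possibly outside the hypothesis space)."""
    if confidence < threshold:
        return prior
    posterior = prior * likelihoods
    return posterior / posterior.sum()

# Toy usage: three hypotheses about what the person cares about, each assigning
# a different cost to the observed input relative to two alternatives.
prior = np.ones(3) / 3
likelihoods = np.array(
    [input_likelihood(c, [c, 2.0, 3.0]) for c in (0.5, 1.5, 2.5)]
)
conf = situational_confidence(likelihoods)
posterior = update_belief(prior, likelihoods, conf)
```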

Citation (APA)

Bobu, A. (2020). Detecting hypothesis space misspecification in robot learning from human input. In ACM/IEEE International Conference on Human-Robot Interaction (pp. 555–557). IEEE Computer Society. https://doi.org/10.1145/3371382.3377436
