Abstract
Learning from human input has enabled autonomous agents to perform increasingly complex tasks that are otherwise difficult to carry out automatically. To this end, recent works have studied how robots can incorporate such input, like demonstrations or corrections, into objective functions describing the desired behaviors. While these methods have shown progress in a variety of settings, from semi-autonomous driving, to household robotics, to automated airplane control, they all suffer from the same crucial drawback: they implicitly assume that the person's intentions can always be captured by the robot's hypothesis space. We call attention to the fact that this assumption is often unrealistic, as no model can completely account for every possible situation ahead of time. When the robot's hypothesis space is misspecified, human input can be unhelpful, or even detrimental, to how the robot performs its tasks. Our work tackles this issue by proposing that the robot should first explicitly reason about how well its hypothesis space can explain human inputs, and then use that situational confidence to inform how it should incorporate them.
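To make the idea of situational confidence concrete, the following is a minimal sketch, not the paper's algorithm: it assumes a discrete hypothesis space of reward parameters and a Boltzmann-style observation model, scores how well the best hypothesis explains a human input, and uses that confidence to temper the Bayesian update toward the prior. All function names, shapes, and the confidence formula are illustrative assumptions.

    import numpy as np

    def boltzmann_likelihoods(features_u, feature_options, thetas, beta=5.0):
        """P(u | theta) for each hypothesis theta, comparing the observed input's
        reward against all alternative inputs the human could have given."""
        rewards = beta * (feature_options @ thetas.T)      # (n_options, n_hyp)
        obs_reward = beta * (features_u @ thetas.T)         # (n_hyp,)
        max_per_hyp = rewards.max(axis=0)                   # for numerical stability
        Z = np.exp(rewards - max_per_hyp).sum(axis=0)       # partition per hypothesis
        return np.exp(obs_reward - max_per_hyp) / Z          # (n_hyp,)

    def confidence_weighted_update(prior, likelihoods, n_options):
        """Blend the Bayesian posterior with the prior according to how much better
        the best hypothesis explains the input than a uniformly random input would."""
        chance = 1.0 / n_options
        confidence = np.clip((likelihoods.max() - chance) / (1.0 - chance), 0.0, 1.0)
        posterior = prior * likelihoods
        posterior /= posterior.sum()
        return confidence * posterior + (1.0 - confidence) * prior, confidence

    # Illustrative usage with random feature vectors.
    rng = np.random.default_rng(0)
    thetas = rng.normal(size=(20, 3))        # candidate reward weights (hypotheses)
    options = rng.normal(size=(50, 3))       # feature counts of possible human inputs
    u = options[7]                           # the input the human actually gave
    prior = np.full(20, 1.0 / 20)
    liks = boltzmann_likelihoods(u, options, thetas)
    posterior, conf = confidence_weighted_update(prior, liks, n_options=50)

When no hypothesis explains the input much better than chance, the computed confidence stays low and the update leaves the prior largely untouched, which is the qualitative behavior the abstract argues for under misspecification.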
Bobu, A. (2020). Detecting hypothesis space misspecification in robot learning from human input. In ACM/IEEE International Conference on Human-Robot Interaction (pp. 555–557). IEEE Computer Society. https://doi.org/10.1145/3371382.3377436