Transparent intent for explainable shared control in assistive robotics


Abstract

Robots equipped with the ability to infer human intent have many applications in assistive robotics. In these applications, robots rely on accurate models of human intent to administer appropriate assistance. However, the effectiveness of this assistance also heavily depends on whether the human can form accurate mental models of robot behaviour. The research problem is therefore to establish a transparent interaction, such that both the robot and human understand each other's underlying “intent”. We situate this problem in our Explainable Shared Control paradigm and present ongoing efforts to achieve transparency in human-robot collaboration.
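The intent inference the abstract refers to is commonly framed as Bayesian goal inference: the robot maintains a belief over candidate user goals and updates it as it observes the human's inputs. The sketch below is an illustrative minimal version of that general idea, not the authors' specific method; the goal names, action values, and the Boltzmann-rational human model are assumptions for the example.

```python
import math

def update_goal_belief(belief, action, goal_action_values, beta=2.0):
    """One Bayesian update of a belief over candidate goals.

    belief: dict mapping goal -> prior probability
    action: the control input the human just issued
    goal_action_values: dict mapping goal -> {action: value}, i.e. how
        good each action is if the human is pursuing that goal
    beta: rationality coefficient (higher = more deterministic human)
    """
    posterior = {}
    for goal, prior in belief.items():
        values = goal_action_values[goal]
        # Boltzmann-rational likelihood of the observed action given this goal
        denom = sum(math.exp(beta * v) for v in values.values())
        likelihood = math.exp(beta * values[action]) / denom
        posterior[goal] = prior * likelihood
    total = sum(posterior.values())
    return {g: p / total for g, p in posterior.items()}

# Hypothetical example: two candidate goals, the human steers "left"
belief = {"door": 0.5, "table": 0.5}
values = {
    "door":  {"left": 1.0, "right": 0.0},
    "table": {"left": 0.0, "right": 1.0},
}
belief = update_goal_belief(belief, "left", values)
# Steering left is evidence for the "door" goal, so its posterior rises
```

In a shared-control loop, the robot would blend its assistance toward the goal with the highest posterior; making this belief visible to the user is one route to the transparency the paper argues for.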

Cite
Zolotas, M., & Demiris, Y. (2020). Transparent intent for explainable shared control in assistive robotics. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2021-January, pp. 5184–5185). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2020/732
