Robots equipped with the ability to infer human intent have many applications in assistive robotics. In these applications, robots rely on accurate models of human intent to administer appropriate assistance. However, the effectiveness of this assistance also depends heavily on whether the human can form an accurate mental model of robot behaviour. The research problem is therefore to establish a transparent interaction, such that both the robot and the human understand each other's underlying “intent”. We situate this problem in our Explainable Shared Control paradigm and present ongoing efforts to achieve transparency in human-robot collaboration.
Citation:
Zolotas, M., & Demiris, Y. (2020). Transparent intent for explainable shared control in assistive robotics. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2021-January, pp. 5184–5185). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2020/732