System transparency in shared autonomy: A mini review

Abstract

What does transparency mean in a shared autonomy framework? Different interpretations of system transparency in human-robot interaction can be found in the literature. In one of the most common interpretations, transparency is the observability and predictability of system behavior: an understanding of what the system is doing, why, and what it will do next. Since the main methods to improve this kind of transparency are based on interface design and training, transparency is usually considered a property of such interfaces, and natural language explanations are a popular way to achieve transparent interfaces. Mechanical transparency, in contrast, is a robot's capacity to follow human movements without human-perceptible resistive forces. Transparency improves system performance, helps reduce human errors, and builds trust in the system. One of the principles of user-centered design is to keep the user aware of the state of the system: a transparent design is a user-centered design. This article reviews the definitions of transparency and the methods to improve it for applications with different interaction requirements and degrees of autonomy, in order to clarify the role of transparency in shared autonomy and to identify research gaps and potential future developments.

Citation (APA)

Alonso, V., & De La Puente, P. (2018, November 30). System transparency in shared autonomy: A mini review. Frontiers in Neurorobotics. Frontiers Media S.A. https://doi.org/10.3389/fnbot.2018.00083
