Multiple contextual task recognition for sharing autonomy to assist mobile robot teleoperation

Abstract

To efficiently facilitate autonomy sharing for assisting mobile robot teleoperation, in this paper we propose a method to recognize four contextual task types executed by the human operator: doorway crossing, object inspection, wall following, and robot docking. This extends our previous approach, in which only the first two task types were considered. We employ a set of simple but highly distinctive task features to efficiently describe each task type. These features feed a Gaussian Mixture Regression (GMR) model combined with a recursive Bayesian filter (RBF), which infers the most probable task among the candidates as the human operator works. We demonstrate the effectiveness of the approach with a variety of tests in a cluttered indoor scenario.
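The recognition scheme described in the abstract — per-task likelihood models whose outputs are fused over time by a recursive Bayesian filter — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the four task names come from the abstract, but the single-Gaussian likelihoods (standing in for the GMR models), the feature values, and all parameters are hypothetical.

```python
import numpy as np

# The four task candidates named in the abstract.
TASKS = ["doorway_crossing", "object_inspection", "wall_following", "robot_docking"]

# Hypothetical per-task models: each task scored by one Gaussian over a
# scalar feature (a toy stand-in for the paper's GMR models).
MEANS = np.array([0.5, 2.0, 1.0, 3.0])
STDS = np.array([0.3, 0.4, 0.3, 0.5])

def likelihood(feature):
    """p(feature | task) under each task's assumed Gaussian model."""
    return np.exp(-0.5 * ((feature - MEANS) / STDS) ** 2) / (STDS * np.sqrt(2 * np.pi))

def bayes_update(prior, feature):
    """One recursive Bayesian filter step: posterior ∝ likelihood × prior."""
    post = likelihood(feature) * prior
    return post / post.sum()

# Start from a uniform belief over the candidates, then fold in a stream
# of observed task features one at a time.
belief = np.full(len(TASKS), 1.0 / len(TASKS))
for f in [0.6, 0.55, 0.5]:  # made-up feature observations
    belief = bayes_update(belief, f)

print(TASKS[int(np.argmax(belief))])  # → doorway_crossing
```

Because each update multiplies the running belief by the new likelihood and renormalizes, isolated noisy features are smoothed out and the belief converges on the task whose model consistently explains the observations — the role the RBF plays on top of the GMR outputs in the paper.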

Citation (APA)

Gao, M., Schamm, T., & Zöllner, J. M. (2015). Multiple contextual task recognition for sharing autonomy to assist mobile robot teleoperation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9245, pp. 3–14). Springer Verlag. https://doi.org/10.1007/978-3-319-22876-1_1
