State-consistency loss for learning spatial perception tasks from partial labels



Abstract

When learning models for real-world robot spatial perception tasks, one might have access only to partial labels. This occurs, for example, in semi-supervised scenarios (in which labels are not available for a subset of the training instances) or in some types of self-supervised robot learning (where the robot autonomously acquires a labeled training set, but obtains labels for only a subset of the output variables in each instance). We introduce a general approach to this class of problems based on an auxiliary loss that enforces the expectation that the perceived environment state should not change abruptly. We then instantiate the approach on two robot perception problems: a simulated ground robot learning long-range obstacle mapping as a 400-binary-label classification task in a self-supervised way in a static environment, and a real nano-quadrotor learning human pose estimation as a 3-variable regression task in a semi-supervised way in a dynamic environment. In both cases, our approach yields significant quantitative performance improvements over baselines: an average increase of 6 AUC percentage points in the former, and a relative improvement of the R² metric ranging from 7% to 33% in the latter.
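
To make the idea concrete, below is a minimal sketch of how a state-consistency auxiliary loss of this kind could be implemented. This is an illustration under stated assumptions, not the paper's exact formulation: the PyTorch setup, the generic model, the NaN convention for marking missing labels, the MSE form of both terms, and the weighting coefficient lambda_sc are all assumptions introduced here.

    import torch
    import torch.nn.functional as F

    def partial_label_loss(pred, target):
        # Supervised loss computed only on the labeled output variables;
        # unlabeled variables are marked with NaN in the target (an
        # illustrative convention, not the paper's).
        mask = ~torch.isnan(target)
        if not mask.any():
            return pred.new_zeros(())
        return F.mse_loss(pred[mask], target[mask])

    def state_consistency_loss(pred_t, pred_t1):
        # Auxiliary term: the environment state perceived at consecutive
        # timesteps t and t+1 should not differ abruptly.
        return F.mse_loss(pred_t, pred_t1)

    def training_loss(model, x_t, x_t1, y_t, lambda_sc=0.1):
        # Total loss: supervised term on whatever labels are available
        # at time t, plus the weighted consistency term between t and t+1.
        pred_t = model(x_t)
        pred_t1 = model(x_t1)
        return (partial_label_loss(pred_t, y_t)
                + lambda_sc * state_consistency_loss(pred_t, pred_t1))

For the obstacle-mapping instantiation, the supervised term could instead be a masked binary cross-entropy over the 400 obstacle labels; the consistency term would keep the same form.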

Citation (APA)

Nava, M., Gambardella, L. M., & Giusti, A. (2021). State-consistency loss for learning spatial perception tasks from partial labels. IEEE Robotics and Automation Letters, 6(2), 1112–1119. https://doi.org/10.1109/LRA.2021.3056378
