Language-guided adaptive perception for efficient grounded communication with robotic manipulators in cluttered environments

Citations of this article: 5
Mendeley readers: 66

Abstract

The utility of collaborative manipulators for shared tasks is highly dependent on the speed and accuracy of communication between the human and the robot. The run-time of recently developed probabilistic inference models for situated symbol grounding of natural language instructions depends on the complexity of the representation of the environment in which they reason. As we move towards more complex bi-directional interactions, tasks, and environments, we need intelligent perception models that can selectively infer the precise pose, semantics, and affordances of objects, because inferring exhaustively detailed world models is inefficient and prohibits real-time interaction. In this paper, we propose a model of language and perception that adapts the configuration of the robot's perception pipeline for tasks where constructing exhaustively detailed models of the environment is inefficient and inconsequential for symbol grounding. We present experimental results from a synthetic corpus of natural language instructions for robot manipulation in example environments. The results demonstrate that adapting perception yields significant gains in run-time for both perception and situated symbol grounding of language instructions, without a loss in grounding accuracy.
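The core idea of the abstract — using the symbols mentioned in an instruction to decide which perception components need to run — can be illustrated with a minimal sketch. This is not the authors' implementation; the detector names, costs, and the word-to-requirement mapping below are all hypothetical placeholders chosen to show the mechanism.

```python
# Illustrative sketch of language-guided adaptive perception (hypothetical
# names and costs, not the paper's actual pipeline): symbols extracted from
# an instruction gate which detectors run, so the world model only carries
# the detail needed for grounding that instruction.

# Hypothetical per-object detector costs (ms) in an exhaustive pipeline.
DETECTORS = {
    "pose": 120,
    "color": 30,
    "shape": 45,
    "affordance": 200,
}

# Hypothetical mapping from instruction words to required perception outputs.
SYMBOL_REQUIREMENTS = {
    "pick": {"pose", "affordance"},
    "red": {"color"},
    "blue": {"color"},
    "ball": {"shape"},
    "block": {"shape"},
    "near": {"pose"},
}

def adapt_pipeline(instruction: str) -> set:
    """Return the minimal set of detectors implied by the instruction."""
    required = set()
    for word in instruction.lower().split():
        required |= SYMBOL_REQUIREMENTS.get(word, set())
    return required

def pipeline_cost(detectors) -> int:
    """Total run-time (ms) of running the given detectors."""
    return sum(DETECTORS[d] for d in detectors)

instruction = "the red ball near the block"
active = adapt_pipeline(instruction)
print(sorted(active))   # only the detectors the instruction needs
print(pipeline_cost(active), "ms adaptive vs", pipeline_cost(DETECTORS), "ms exhaustive")
```

Under these assumed costs, the adaptive configuration skips the expensive affordance detector whenever no manipulation symbol appears in the instruction, which is the kind of run-time saving the paper reports without sacrificing grounding accuracy.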



Citation (APA)

Patki, S., & Howard, T. M. (2018). Language-guided adaptive perception for efficient grounded communication with robotic manipulators in cluttered environments. In SIGDIAL 2018 - 19th Annual Meeting of the Special Interest Group on Discourse and Dialogue - Proceedings of the Conference (pp. 151–160). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w18-5016

Readers over time: 2019–2024 (chart omitted)

Readers' Seniority

PhD / Post grad / Masters / Doc: 19 (70%)
Researcher: 5 (19%)
Lecturer / Post doc: 2 (7%)
Professor / Associate Prof.: 1 (4%)

Readers' Discipline

Computer Science: 22 (71%)
Linguistics: 5 (16%)
Engineering: 3 (10%)
Neuroscience: 1 (3%)
