Intention-Related Natural Language Grounding via Object Affordance Detection and Intention Semantic Extraction

Abstract

Like specific natural language instructions, intention-related natural language queries play an essential role in our daily communication. Inspired by the psychological term "affordance" and its applications in human-robot interaction, we propose an object affordance-based natural language visual grounding architecture to ground intention-related natural language queries. Formally, we first present an attention-based multi-visual feature fusion network to detect object affordances from RGB images. When fusing deep visual features extracted from a pre-trained CNN with deep texture features encoded by a deep texture encoding network, the affordance detection network takes into account the interaction between the multi-visual features and preserves their complementary nature by integrating attention weights learned from sparse representations of those features. We train and validate the attention-based object affordance recognition network on a self-built dataset in which a large number of images originate from MSCOCO and ImageNet. Moreover, we introduce an intention semantic extraction module that extracts intention semantics from intention-related natural language queries. Finally, we ground intention-related natural language queries by integrating the detected object affordances with the extracted intention semantics. Extensive experiments validate the performance of both the object affordance detection network and the intention-related natural language grounding architecture.
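The abstract describes fusing CNN appearance features with deep texture features under learned attention weights before classifying object affordances. The sketch below is only an illustration of that kind of two-stream attention fusion, not the paper's implementation: the feature dimensions, the class count, the module name AttentionFeatureFusion, and the use of a softmax score per stream (in place of the paper's attention weights learned from sparse representations) are all assumptions.

```python
# Minimal PyTorch sketch of attention-weighted fusion of two visual feature
# streams (CNN features + texture features) for affordance classification.
# Dimensions, names, and the softmax-based attention are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionFeatureFusion(nn.Module):
    """Fuse appearance and texture features with per-stream attention weights."""

    def __init__(self, visual_dim=2048, texture_dim=512,
                 fused_dim=1024, num_affordances=10):
        super().__init__()
        # Project both streams into a common embedding space.
        self.visual_proj = nn.Linear(visual_dim, fused_dim)
        self.texture_proj = nn.Linear(texture_dim, fused_dim)
        # One scalar score per stream; softmax over the two scores stands in
        # for the learned attention weights described in the abstract.
        self.attn_score = nn.Linear(fused_dim, 1)
        # Affordance classifier on top of the fused representation.
        self.classifier = nn.Linear(fused_dim, num_affordances)

    def forward(self, visual_feat, texture_feat):
        v = F.relu(self.visual_proj(visual_feat))     # (B, fused_dim)
        t = F.relu(self.texture_proj(texture_feat))   # (B, fused_dim)
        streams = torch.stack([v, t], dim=1)          # (B, 2, fused_dim)
        weights = torch.softmax(self.attn_score(streams), dim=1)  # (B, 2, 1)
        fused = (weights * streams).sum(dim=1)        # weighted sum of streams
        return self.classifier(fused)                 # affordance logits


if __name__ == "__main__":
    # Stand-in features, e.g. a ResNet pooling layer and a texture encoder.
    model = AttentionFeatureFusion()
    visual = torch.randn(4, 2048)
    texture = torch.randn(4, 512)
    print(model(visual, texture).shape)  # torch.Size([4, 10])
```

Weighting whole streams rather than individual channels keeps the complementary nature of the two feature types explicit: each input image can lean on appearance or texture cues as needed, which matches the fusion behavior the abstract attributes to the learned attention weights.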

Citation (APA)
Mi, J., Liang, H., Katsakis, N., Tang, S., Li, Q., Zhang, C., & Zhang, J. (2020). Intention-Related Natural Language Grounding via Object Affordance Detection and Intention Semantic Extraction. Frontiers in Neurorobotics, 14. https://doi.org/10.3389/fnbot.2020.00026
