SpatialSim: Recognizing Spatial Configurations of Objects With Graph Neural Networks

Abstract

An embodied, autonomous agent able to set its own goals must possess geometrical reasoning abilities for judging whether its goals have been achieved: it should be able to identify and discriminate classes of configurations of objects, irrespective of its point of view on the scene. However, this problem has received little attention so far in the deep learning literature. In this paper we make two key contributions. First, we propose SpatialSim (Spatial Similarity), a novel geometrical reasoning diagnostic dataset, and argue that progress on this benchmark would allow for diagnosing more principled approaches to this problem. The benchmark comprises two tasks, “Identification” and “Discrimination,” each instantiated at increasing levels of difficulty. Second, we validate that relational inductive biases—exhibited by fully-connected message-passing Graph Neural Networks (MPGNNs)—are instrumental in solving these tasks, and show their advantages over less relational baselines such as Deep Sets and unstructured models such as Multi-Layer Perceptrons. We additionally showcase the failure of high-capacity CNNs on the hard Discrimination task. Finally, we highlight the current limits of GNNs on both tasks.
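To illustrate the relational inductive bias the abstract refers to, the sketch below implements one round of message passing on a fully connected graph of objects, followed by a sum-pooled graph-level readout. This is a generic, minimal MPGNN layer written for illustration — the dimensions, weight shapes, and single-linear "MLPs" are assumptions, not the architecture used in the paper. The key property it demonstrates is that the resulting graph embedding is invariant to the ordering of the objects, which is what makes such models suited to recognizing configurations rather than object lists.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (not taken from the paper).
D_IN, D_MSG = 4, 8
W_msg = rng.normal(size=(2 * D_IN, D_MSG))     # message function weights
W_upd = rng.normal(size=(D_IN + D_MSG, D_IN))  # node-update weights

def relu(x):
    return np.maximum(x, 0.0)

def mpgnn_layer(H):
    """One round of message passing on a fully connected graph.

    H: (n_objects, D_IN) array of object features (e.g. position, size, color).
    Each node receives a message from every other node; messages are summed,
    which makes the aggregation permutation-invariant.
    """
    n = H.shape[0]
    H_new = np.zeros_like(H)
    for i in range(n):
        msgs = [relu(np.concatenate([H[i], H[j]]) @ W_msg)
                for j in range(n) if j != i]
        agg = np.sum(msgs, axis=0)  # permutation-invariant aggregation
        H_new[i] = relu(np.concatenate([H[i], agg]) @ W_upd)
    return H_new

def graph_embedding(H):
    """Sum-pooled graph-level readout after one message-passing round."""
    return mpgnn_layer(H).sum(axis=0)

# The embedding does not depend on the order in which objects are listed:
objects = rng.normal(size=(5, D_IN))
perm = rng.permutation(5)
assert np.allclose(graph_embedding(objects), graph_embedding(objects[perm]))
```

In a configuration-recognition setting, such a permutation-invariant embedding could then be fed to a classifier that judges whether a set of objects forms a given spatial configuration.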

Cite

CITATION STYLE

APA

Teodorescu, L., Hofmann, K., & Oudeyer, P. Y. (2022). SpatialSim: Recognizing Spatial Configurations of Objects With Graph Neural Networks. Frontiers in Artificial Intelligence, 4. https://doi.org/10.3389/frai.2021.782081
