Testing Deep Neural Networks on the Same-Different Task



Developing abstract reasoning abilities in neural networks is an important step towards achieving human-like performance on many tasks. So far, several works have tackled this problem with ad-hoc architectures, reaching good overall generalization performance. In this work we investigate to what extent state-of-the-art convolutional neural networks for image classification can deal with a challenging abstract problem, the so-called same-different task. The task consists of deciding whether two random shapes inside the same image are identical or not. A recent work demonstrated that simple convolutional neural networks are almost unable to solve this problem. We extend that work, showing that ResNet-inspired architectures are able to learn the task, while VGG fails to converge. In light of this, we hypothesize that residual connections play an important role in the learning process, while the depth of the network seems less relevant. In addition, we carry out targeted tests on the converged architectures to assess to what extent they generalize to never-seen patterns. Further investigation is nevertheless needed to understand the architectural peculiarities and limits of these models as far as abstract reasoning is concerned.
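To make the task concrete, the following is a minimal sketch of how a same-different training sample might be generated. The stimuli used in the paper follow their own generation protocol; this toy generator (patch size, grid placement, and the `make_sample` helper are all assumptions for illustration) only conveys the idea of pasting two random shapes into one image and labeling the pair as same or different.

```python
import numpy as np

def make_sample(rng, size=64, patch=8):
    """Generate one toy same-different sample (not the paper's protocol).

    Two small random binary patches are pasted at random, non-overlapping
    grid positions in a blank image. Label is 1 when both patches are
    identical, 0 otherwise.
    """
    img = np.zeros((size, size), dtype=np.float32)
    shape_a = (rng.random((patch, patch)) > 0.5).astype(np.float32)
    same = rng.random() < 0.5
    if same:
        shape_b = shape_a.copy()
    else:
        # with 64 random bits, an accidental match is negligibly unlikely
        shape_b = (rng.random((patch, patch)) > 0.5).astype(np.float32)
    # pick two cells on a coarse grid so the patches never overlap
    cells = size // patch
    idx = rng.choice(cells * cells, size=2, replace=False)
    for shape, i in zip((shape_a, shape_b), idx):
        r, c = divmod(int(i), cells)
        img[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch] = shape
    return img, int(same)

rng = np.random.default_rng(0)
img, label = make_sample(rng)
```

A classifier (e.g., a ResNet-style network, as the paper finds) would then be trained on batches of such `(img, label)` pairs as a binary classification problem.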




Messina, N., Amato, G., Carrara, F., Falchi, F., & Gennaro, C. (2019). Testing Deep Neural Networks on the Same-Different Task. In Proceedings - International Workshop on Content-Based Multimedia Indexing (Vol. 2019-September). IEEE Computer Society. https://doi.org/10.1109/CBMI.2019.8877412
