Learning numerosity representations with transformers: Number generation tasks and out-of-distribution generalization


Abstract

One of the most rapidly advancing areas of deep learning research aims at creating models that learn to disentangle the latent factors of variation underlying a data distribution. However, modeling joint probability mass functions is usually prohibitive, which motivates the use of conditional models that assume some information is given as input. In the domain of numerical cognition, deep learning architectures have successfully demonstrated that approximate numerosity representations can emerge in multi-layer networks that build latent representations of a set of images with a varying number of items. However, existing models have focused on tasks that require conditionally estimating numerosity information from a given image. Here, we focus on a set of much more challenging tasks, which require conditionally generating synthetic images containing a given number of items. We show that attention-based architectures operating at the pixel level can learn to produce well-formed images approximately containing a specific number of items, even when the target numerosity was not present in the training distribution.
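The abstract describes a conditional, attention-based generator operating at the pixel level. As an illustrative sketch only (not the authors' implementation), the snippet below shows one way such a model could be set up in PyTorch: the image is flattened into a sequence of binary pixels, a learned embedding of the target numerosity is prepended as a conditioning token, and a causally masked transformer predicts each next pixel. The class names, hyperparameters, and the 32×32 binary image format are all assumptions.

```python
# Illustrative sketch, not the authors' code: a decoder-style transformer that
# generates a flattened binary image autoregressively, conditioned on an
# embedding of the target numerosity. All names and sizes are hypothetical.
import torch
import torch.nn as nn


class ConditionalPixelTransformer(nn.Module):
    def __init__(self, image_size=32, max_numerosity=32,
                 d_model=128, n_heads=4, n_layers=4):
        super().__init__()
        self.seq_len = image_size * image_size            # flattened pixel sequence
        self.pixel_emb = nn.Embedding(2, d_model)         # binary pixel values {0, 1}
        self.pos_emb = nn.Embedding(self.seq_len, d_model)
        self.num_emb = nn.Embedding(max_numerosity + 1, d_model)  # conditioning token
        block = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        self.blocks = nn.TransformerEncoder(block, n_layers)
        self.head = nn.Linear(d_model, 2)                 # next-pixel logits (off/on)

    def forward(self, pixels, numerosity):
        # pixels: (B, T) already-generated pixels; numerosity: (B,) target count
        _, T = pixels.shape
        pos = torch.arange(T, device=pixels.device)
        x = self.pixel_emb(pixels) + self.pos_emb(pos)
        cond = self.num_emb(numerosity).unsqueeze(1)      # prepend conditioning token
        x = torch.cat([cond, x], dim=1)                   # (B, T + 1, d_model)
        mask = torch.triu(                                # causal attention mask
            torch.full((T + 1, T + 1), float("-inf"), device=pixels.device), diagonal=1)
        h = self.blocks(x, mask=mask)
        return self.head(h)                               # logits[:, i] predicts pixel i


@torch.no_grad()
def generate(model, target_numerosity, image_size=32, device="cpu"):
    """Sample one binary image with (approximately) the requested numerosity."""
    pixels = torch.zeros(1, 0, dtype=torch.long, device=device)
    cond = torch.tensor([target_numerosity], device=device)
    for _ in range(image_size * image_size):
        logits = model(pixels, cond)                      # (1, t + 1, 2)
        nxt = torch.multinomial(logits[:, -1].softmax(-1), 1)
        pixels = torch.cat([pixels, nxt], dim=1)
    return pixels.view(image_size, image_size)
```

Under these assumptions, training would feed the full pixel sequence and minimize cross-entropy between `logits[:, :T]` and the pixels themselves, while sampling (as in `generate`) starts from the conditioning token alone, including target numerosities held out of the training distribution.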

Cite

APA

Boccato, T., Testolin, A., & Zorzi, M. (2021). Learning numerosity representations with transformers: Number generation tasks and out-of-distribution generalization. Entropy, 23(7), 857. https://doi.org/10.3390/e23070857
