Generative Inferences Based on Learned Relations

Citations: 12 | Readers (Mendeley): 38

This article is free to access.

Abstract

A key property of relational representations is their generativity: From partial descriptions of relations between entities, additional inferences can be drawn about other entities. A major theoretical challenge is to demonstrate how the capacity to make generative inferences could arise as a result of learning relations from non-relational inputs. In the present paper, we show that a bottom-up model of relation learning, initially developed to discriminate between positive and negative examples of comparative relations (e.g., deciding whether a sheep is larger than a rabbit), can be extended to make generative inferences. The model is able to make quasi-deductive transitive inferences (e.g., “If A is larger than B and B is larger than C, then A is larger than C”) and to qualitatively account for human responses to generative questions such as “What is an animal that is smaller than a dog?” These results provide evidence that relational models based on bottom-up learning mechanisms are capable of supporting generative inferences.
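As a loose illustration of the behavior the abstract describes (not the authors' actual model or learning mechanism), transitive and generative inferences over learned comparative relations can be sketched as judgments over learned one-place magnitude scores. The animal names and score values below are hypothetical placeholders, not data from the paper:

```python
# Hypothetical learned magnitude scores (illustrative values only).
learned_size = {"sheep": 0.8, "dog": 0.5, "rabbit": 0.2}

def larger_than(a, b, scores=learned_size):
    """Judge the comparative relation 'a is larger than b'."""
    return scores[a] > scores[b]

def generative_query(relation, anchor, scores=learned_size):
    """Answer a generative question such as
    'What is an animal that is <relation> than <anchor>?'"""
    return [x for x in scores if x != anchor and relation(x, anchor)]

# Quasi-deductive transitivity: if sheep > dog and dog > rabbit,
# then sheep > rabbit follows from the shared magnitude dimension.
assert larger_than("sheep", "dog") and larger_than("dog", "rabbit")
assert larger_than("sheep", "rabbit")

# Generative inference: name an animal smaller than a dog.
smaller_than = lambda a, b: larger_than(b, a)
print(generative_query(smaller_than, "dog"))  # → ['rabbit']
```

In this toy setting transitivity falls out for free because both premises are grounded in the same underlying magnitude ordering; the paper's contribution is showing that comparable generative behavior can emerge from bottom-up relation learning over non-relational inputs.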

Citation (APA)

Chen, D., Lu, H., & Holyoak, K. J. (2017). Generative Inferences Based on Learned Relations. Cognitive Science, 41, 1062–1092. https://doi.org/10.1111/cogs.12455
