Fluid Transformers and Creative Analogies: Exploring Large Language Models' Capacity for Augmenting Cross-Domain Analogical Creativity


Abstract

Cross-domain analogical reasoning is a core creative ability that can be challenging for humans. Recent work has shown some proofs-of-concept of Large Language Models' (LLMs) ability to generate cross-domain analogies. However, the reliability and potential usefulness of this capacity for augmenting human creative work has received little systematic exploration. In this paper, we systematically explore LLMs' capacity to augment cross-domain analogical reasoning. Across three studies, we found: 1) LLM-generated cross-domain analogies were frequently judged as helpful in the context of a problem reformulation task (median helpfulness rating of 4 out of 5), and frequently (∼80% of cases) led to observable changes in problem formulations, and 2) there was an upper bound of ∼25% of outputs being rated as potentially harmful, with a majority due to potentially upsetting content rather than biased or toxic content. These results demonstrate both the potential utility and the risks of LLMs for augmenting cross-domain analogical creativity.

Citation (APA)
Ding, Z., Srinivasan, A., MacNeil, S., & Chan, J. (2023). Fluid Transformers and Creative Analogies: Exploring Large Language Models’ Capacity for Augmenting Cross-Domain Analogical Creativity. In ACM International Conference Proceeding Series (pp. 489–505). Association for Computing Machinery. https://doi.org/10.1145/3591196.3593516
