Analyzing the Innovative Potential of Texts Generated by Large Language Models: An Empirical Evaluation

Abstract

As large language models (LLMs) revolutionize natural language processing tasks, it remains uncertain whether the text they generate can be perceived as innovative by human readers. This question holds significant implications for innovation management, where generating novel ideas from extensive text corpora is crucial. In this study, we conduct an empirical evaluation of 2170 generated idea texts, containing product and service ideas aligned with current trends for specific companies, focusing on three key metrics: innovativeness, context, and text quality. Our findings show that, while not universally the case, a substantial number of LLM-generated ideas exhibit a degree of innovativeness. Notably, only 97 texts within the entire corpus were identified as highly innovative. Moving forward, an automated evaluation and filtering system for assessing innovativeness could greatly support innovation management by facilitating the pre-selection of generated ideas.

Citation (APA)

Krauss, O., Jungwirth, M., Elflein, M., Sandler, S., Altenhofer, C., & Stoeckl, A. (2023). Analyzing the Innovative Potential of Texts Generated by Large Language Models: An Empirical Evaluation. In Communications in Computer and Information Science (Vol. 1872 CCIS, pp. 11–22). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-39689-2_2
