Ethical implications of text generation in the age of artificial intelligence

Citations: 58 · Mendeley readers: 204

This article is free to access.

Abstract

We are at a turning point in the debate on the ethics of Artificial Intelligence (AI) because we are witnessing the rise of general-purpose AI text agents, such as GPT-3, that can generate large-scale, highly refined content that appears to have been written by a human. Yet a discussion of the ethical issues raised by the blurring of the roles of humans and machines in the production of content in the business arena is lacking. In this conceptual paper, drawing on agenda setting theory and stakeholder theory, we challenge the current debate on the ethics of AI and aim to stimulate research on three new challenges posed by AI text agents: automated mass manipulation and disinformation (i.e., the fake agenda problem), massive low-quality content production (i.e., the lowest denominator problem), and the creation of a growing buffer in the communication between stakeholders (i.e., the mediation problem).

Citation (APA)

Illia, L., Colleoni, E., & Zyglidopoulos, S. (2023). Ethical implications of text generation in the age of artificial intelligence. Business Ethics, Environment and Responsibility, 32(1), 201–210. https://doi.org/10.1111/beer.12479
