Drawing on the resources of psychoanalysis and critical media studies, in this article we develop an analysis of large language models (LLMs) as ‘automated subjects’. We argue that the intentional, fictional projection of subjectivity onto LLMs can yield an alternate frame through which artificial intelligence (AI) behaviour, including its productions of bias and harm, can be analysed. First, we introduce language models, discuss their significance and risks, and outline our case for interpreting model design and outputs with support from psychoanalytic concepts. We trace a brief history of language models, culminating in the 2022 releases of systems that realise ‘state-of-the-art’ natural language processing performance. We engage with one such system, OpenAI's InstructGPT, as a case study, detailing the layers of its construction and conducting exploratory and semi-structured interviews with chatbots. These interviews probe the model's moral imperatives to be ‘helpful’, ‘truthful’ and ‘harmless’ by design. The model acts, we argue, as the condensation of often competing social desires, articulated through the internet and harvested into training data, which must then be regulated and repressed. This foundational structure can, however, be redirected via prompting, so that the model comes to identify with, and transfer its commitments to, the immediate human subject before it. In turn, these automated productions of language can lead the human subject to project agency onto the model, occasionally effecting further forms of countertransference. We conclude that critical media methods and psychoanalytic theory together offer a productive frame for grasping the powerful new capacities of AI-driven language systems.
Magee, L., Arora, V., & Munn, L. (2023). Structured like a language model: Analysing AI as an automated subject. Big Data & Society, 10(2). https://doi.org/10.1177/20539517231210273