Autocompleting inequality


Bibliographic Details
Main Author: Mike Zajko
Format: Article
Language: English
Published: DIGSUM 2025-05-01
Series: Journal of Digital Social Research
Online Access: https://publicera.kb.se/jdsr/article/view/54879
Description
Summary: The latest wave of AI hype has been driven by ‘generative AI’ systems exemplified by ChatGPT, which was created by OpenAI’s ‘fine-tuning’ of a large language model (LLM). This process uses human labor to provide feedback on generative outputs in order to bring them into greater ‘alignment’ with ‘safety’. This article analyzes the fine-tuning of generative AI as a process of social ordering, beginning with the encoding of cultural dispositions into LLMs, their containment and redirection into vectors of ‘safety’, and the subsequent challenge of these ‘guard rails’ by users. Fine-tuning becomes a means by which some social hierarchies are reproduced, reshaped, and flattened. By analyzing documentation provided by generative AI developers, I show how fine-tuning makes use of human judgement to reshape the algorithmic reproduction of inequality, while also arguing that the most important values driving AI alignment are commercial imperatives and alignment with political economy.
ISSN: 2003-1998