The role of LLMs in theory building

Bibliographic Details
Main Author: Aníbal M. Astobiza
Format: Article
Language: English
Published: Elsevier, 2025-01-01
Series: Social Sciences and Humanities Open
Subjects:
Online Access: http://www.sciencedirect.com/science/article/pii/S2590291125003456
Description
Summary: Large language models (LLMs), such as GPT-3.5 and subsequent versions (e.g., GPT-4 or GPT-4o), have shown impressive abilities in generating human-like text and performing a variety of natural language processing tasks. However, a fundamental question in artificial intelligence is whether these models can truly represent meaning and assist scientists in building scientific theories. This paper addresses that question through a conceptual analysis of existing large language models and their capabilities for representing and reasoning about meaning in the service of theory building. My conclusions suggest that while these models have made significant progress in representing and manipulating language, they remain limited in their ability to represent abstract and complex concepts. Consequently, their application in building scientific theories should be guided by specific research questions and informed hypotheses that can be tested and developed into robust theories.
ISSN: 2590-2911