In-Context Learning in Large Language Models (LLMs): Mechanisms, Capabilities, and Implications for Advanced Knowledge Representation and Reasoning

Bibliographic Details
Main Authors: Azza Mohamed, Mohamed El Rashid, Khaled Shaalan
Format: Article
Language: English
Published: IEEE, 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/11018434/
Description
Summary: The rapid growth of Large Language Models (LLMs) and their in-context learning (ICL) capabilities has significantly transformed paradigms in artificial intelligence (AI) and natural language processing. Notable models, such as OpenAI’s GPT series, have demonstrated unprecedented advances in language comprehension and adaptability, dynamically responding to new tasks presented via contextual prompts. This study provides a detailed survey of recent advances in theoretical research on LLMs and ICL. The search was conducted across several scholarly databases, including Google Scholar, arXiv, IEEE Xplore, the ACM Digital Library, and SpringerLink, covering publications from January 2019 to March 2024. We investigate how LLMs encode and use knowledge via ICL, the reasoning skills that emerge from this process, and the considerable impact of prompt design on LLM reasoning performance, particularly in symbolic reasoning tasks. Furthermore, we examine the theoretical frameworks that explain or challenge LLM behaviors in ICL contexts and address the significance of these findings for the development of complex knowledge representation and reasoning systems. Using a systematic methodology consistent with established review criteria, this survey synthesizes significant observations, highlights existing gaps and obstacles, and discusses implications for future research and practice. Our goal is to connect theoretical ideas with practical advances in artificial intelligence, ultimately contributing to the ongoing discussion about the capabilities and applications of LLMs in knowledge representation and reasoning.
ISSN: 2169-3536