In-Context Learning in Large Language Models (LLMs): Mechanisms, Capabilities, and Implications for Advanced Knowledge Representation and Reasoning
The rapid growth of Large Language Models (LLMs) and their in-context learning (ICL) capabilities has significantly transformed paradigms in artificial intelligence (AI) and natural language processing. Notable models, such as OpenAI’s GPT series, have demonstrated unprecedented advancements in verbal comprehension and adaptability, dynamically responding to new tasks offered via contextual prompts.
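The ICL pattern the abstract describes — a model adapting to a new task from worked examples placed in its prompt, with no weight updates — can be illustrated with a minimal sketch. The task, demonstrations, and function below are hypothetical illustrations, not taken from the article:

```python
# Minimal sketch of few-shot in-context learning (ICL) prompting:
# worked (input, output) demonstrations are concatenated into the
# context, and the model is expected to infer the task from them.

def build_icl_prompt(demonstrations, query):
    """Format demonstration pairs plus a new query into one prompt string."""
    blocks = [f"Input: {x}\nOutput: {y}" for x, y in demonstrations]
    # The trailing "Output:" cues the model to complete the final pair.
    blocks.append(f"Input: {query}\nOutput:")
    return "\n\n".join(blocks)

# A symbolic-reasoning-style toy task: reverse a letter sequence.
demos = [("a b c", "c b a"), ("x y z", "z y x")]
print(build_icl_prompt(demos, "p q r"))
```

The resulting string would be sent to any text-completion model; the survey's point about prompt design is that the choice and ordering of such demonstrations measurably affects reasoning performance.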
Saved in:
| Main Authors: | Azza Mohamed, Mohamed El Rashid, Khaled Shaalan |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Subjects: | Artificial intelligence; in-context learning; large language models (LLMs); knowledge representation reasoning; advanced AI models; natural language processing (NLP) |
| Online Access: | https://ieeexplore.ieee.org/document/11018434/ |
| _version_ | 1850132640843169792 |
|---|---|
| author | Azza Mohamed; Mohamed El Rashid; Khaled Shaalan |
| author_facet | Azza Mohamed; Mohamed El Rashid; Khaled Shaalan |
| author_sort | Azza Mohamed |
| collection | DOAJ |
| description | The rapid growth of Large Language Models (LLMs) and their in-context learning (ICL) capabilities has significantly transformed paradigms in artificial intelligence (AI) and natural language processing. Notable models, such as OpenAI’s GPT series, have demonstrated unprecedented advancements in verbal comprehension and adaptability, dynamically responding to new tasks offered via contextual prompts. This study provides a detailed survey of recent advances in theoretical research on LLMs and ICL. The search was conducted across several scholarly databases, including Google Scholar, arXiv, IEEE Xplore, ACM Digital Library, and SpringerLink, covering publications from January 2019 to March 2024. We investigate how LLMs encode and use knowledge via ICL, the evolving reasoning skills that result from this process, and the considerable impact of prompt design on LLM reasoning performance, particularly in symbolic reasoning tasks. Furthermore, we examine the theoretical frameworks that explain or challenge LLM behaviors in ICL contexts and address the significance of these findings for the development of complex knowledge representation and reasoning systems. Using a systematic methodology consistent with accepted research criteria, this review synthesizes significant observations, highlights existing gaps and obstacles, and discusses implications for future research and practice. Our goal is to connect theoretical ideas with actual advances in Artificial Intelligence, ultimately contributing to the continuing discussion about the capabilities and applications of LLMs in knowledge representation and reasoning. |
| format | Article |
| id | doaj-art-2891e3eb43fc4a2eae19ad91a53dcdc3 |
| institution | OA Journals |
| issn | 2169-3536 |
| language | English |
| publishDate | 2025-01-01 |
| publisher | IEEE |
| record_format | Article |
| series | IEEE Access |
| spelling | IEEE Access, ISSN 2169-3536, vol. 13, pp. 95574–95593, 2025-01-01. DOI: 10.1109/ACCESS.2025.3575303. IEEE Xplore article 11018434. Authors: Azza Mohamed (https://orcid.org/0000-0002-1244-4448), Faculty of Engineering and Computing, Liwa University, Al Ain, United Arab Emirates; Mohamed El Rashid (https://orcid.org/0009-0002-7106-1938), Imam Malik College, Dubai, United Arab Emirates; Khaled Shaalan (https://orcid.org/0000-0003-0823-8390), Faculty of Engineering and IT, The British University in Dubai, Dubai, United Arab Emirates. Abstract and keywords as in the description and topic fields. https://ieeexplore.ieee.org/document/11018434/ |
| title | In-Context Learning in Large Language Models (LLMs): Mechanisms, Capabilities, and Implications for Advanced Knowledge Representation and Reasoning |
| topic | Artificial intelligence; in-context learning; large language models (LLMs); knowledge representation reasoning; advanced AI models; natural language processing (NLP) |
| url | https://ieeexplore.ieee.org/document/11018434/ |
| work_keys_str_mv | AT azzamohamed incontextlearninginlargelanguagemodelsllmsmechanismscapabilitiesandimplicationsforadvancedknowledgerepresentationandreasoning AT mohamedelrashid incontextlearninginlargelanguagemodelsllmsmechanismscapabilitiesandimplicationsforadvancedknowledgerepresentationandreasoning AT khaledshaalan incontextlearninginlargelanguagemodelsllmsmechanismscapabilitiesandimplicationsforadvancedknowledgerepresentationandreasoning |