Large language models for closed-library multi-document query, test generation, and evaluation
Introduction: Learning complex, detailed, and evolving knowledge is a challenge in multiple technical professions. Relevant source knowledge is contained within many large documents and information sources with frequent updates to these documents. Knowledge tests need to be generated on new material a...
| Main Authors: | Claire Randolph, Adam Michaleas, Darrell O. Ricke |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Frontiers Media S.A., 2025-08-01 |
| Series: | Frontiers in Artificial Intelligence |
| Subjects: | large language models; LLM; retrieval-augmented generation; RAG; LangChain |
| Online Access: | https://www.frontiersin.org/articles/10.3389/frai.2025.1592013/full |
| _version_ | 1849236496472604672 |
|---|---|
| author | Claire Randolph, Adam Michaleas, Darrell O. Ricke |
| author_facet | Claire Randolph, Adam Michaleas, Darrell O. Ricke |
| author_sort | Claire Randolph |
| collection | DOAJ |
| description | Introduction: Learning complex, detailed, and evolving knowledge is a challenge in multiple technical professions. Relevant source knowledge is contained within many large documents and information sources with frequent updates to these documents. Knowledge tests need to be generated on new material and existing tests revised, tracking knowledge base updates. Large Language Models (LLMs) provide a framework for artificial intelligence-assisted knowledge acquisition and continued learning. Retrieval-Augmented Generation (RAG) provides a framework to leverage available, trained LLMs combined with technical area-specific knowledge bases. Methods: Herein, two methods are introduced (DaaDy: document as a dictionary; SQAD: structured question answer dictionary), which together enable effective implementation of LLM-RAG question answering on large documents. Additionally, the AI for knowledge intensive tasks (AIKIT) solution is presented for working with numerous documents for training and continuing education. AIKIT is provided as a containerized open source solution that deploys on standalone, high-performance, and cloud systems. AIKIT includes LLM and RAG components, vector stores, a relational database, and a Ruby on Rails web interface. Results: Coverage of source documents by LLM-RAG generated questions decreases as document length increases. Segmenting source documents improves coverage of generated questions. The AIKIT solution enabled easy use of multiple LLM models with multimodal RAG source documents; AIKIT retains LLM-RAG responses for queries against one or multiple LLM models. Discussion: AIKIT provides an easy-to-use set of tools to enable users to work with complex information using LLM-RAG capabilities. AIKIT enables easy use of multiple LLM models with retention of LLM-RAG responses. |
| format | Article |
| id | doaj-art-083d155f8f474989aceb2a660096ba2b |
| institution | Kabale University |
| issn | 2624-8212 |
| language | English |
| publishDate | 2025-08-01 |
| publisher | Frontiers Media S.A. |
| record_format | Article |
| series | Frontiers in Artificial Intelligence |
| spelling | doaj-art-083d155f8f474989aceb2a660096ba2b2025-08-20T04:02:13ZengFrontiers Media S.A.Frontiers in Artificial Intelligence2624-82122025-08-01810.3389/frai.2025.15920131592013Large language models for closed-library multi-document query, test generation, and evaluationClaire Randolph0Adam Michaleas1Darrell O. Ricke2Department of the Air Force, Artificial Intelligence Accelerator, Cambridge, MA, United StatesAI Technology, MIT Lincoln Laboratory, Lexington, MA, United StatesAI Technology, MIT Lincoln Laboratory, Lexington, MA, United StatesIntroductionLearning complex, detailed, and evolving knowledge is a challenge in multiple technical professions. Relevant source knowledge is contained within many large documents and information sources with frequent updates to these documents. Knowledge tests need to be generated on new material and existing tests revised, tracking knowledge base updates. Large Language Models (LLMs) provide a framework for artificial intelligence-assisted knowledge acquisition and continued learning. Retrieval-Augmented Generation (RAG) provides a framework to leverage available, trained LLMs combined with technical area-specific knowledge bases.MethodsHerein, two methods are introduced (DaaDy: document as a dictionary and SQAD: structured question answer dictionary), which together enable effective implementation of LLM-RAG question-answering on large documents. Additionally, the AI for knowledge intensive tasks (AIKIT) solution is presented for working with numerous documents for training and continuing education. AIKIT is provided as a containerized open source solution that deploys on standalone, high performance, and cloud systems. AIKIT includes LLM, RAG, vector stores, relational database, and a Ruby on Rails web interface.ResultsCoverage of source documents by LLM-RAG generated questions decreases as document length increases. Segmenting source documents improves coverage of generated questions. The AIKIT solution enabled easy use of multiple LLM models with multimodal RAG source documents; AIKIT retains LLM-RAG responses for queries against one or multiple LLM models.DiscussionAIKIT provides an easy-to-use set of tools to enable users to work with complex information using LLM-RAG capabilities. AIKIT enables easy use of multiple LLM models with retention of LLM-RAG responses.https://www.frontiersin.org/articles/10.3389/frai.2025.1592013/fulllarge language modelsLLMretrieval-augmented generationRAGLangChain |
| spellingShingle | Claire Randolph Adam Michaleas Darrell O. Ricke Large language models for closed-library multi-document query, test generation, and evaluation Frontiers in Artificial Intelligence large language models LLM retrieval-augmented generation RAG LangChain |
| title | Large language models for closed-library multi-document query, test generation, and evaluation |
| title_full | Large language models for closed-library multi-document query, test generation, and evaluation |
| title_fullStr | Large language models for closed-library multi-document query, test generation, and evaluation |
| title_full_unstemmed | Large language models for closed-library multi-document query, test generation, and evaluation |
| title_short | Large language models for closed-library multi-document query, test generation, and evaluation |
| title_sort | large language models for closed library multi document query test generation and evaluation |
| topic | large language models LLM retrieval-augmented generation RAG LangChain |
| url | https://www.frontiersin.org/articles/10.3389/frai.2025.1592013/full |
| work_keys_str_mv | AT clairerandolph largelanguagemodelsforclosedlibrarymultidocumentquerytestgenerationandevaluation AT adammichaleas largelanguagemodelsforclosedlibrarymultidocumentquerytestgenerationandevaluation AT darrelloricke largelanguagemodelsforclosedlibrarymultidocumentquerytestgenerationandevaluation |
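The abstract's key methods finding is that segmenting large source documents ("document as a dictionary", DaaDy) improves the coverage of LLM-RAG generated questions. The sketch below illustrates that segmentation idea only; the function name, segment sizes, and overlap parameter are illustrative assumptions, not the authors' published implementation:

```python
# Hypothetical sketch of the "document as a dictionary" idea: split a large
# source document into fixed-size, overlapping character segments and key each
# segment by its position, so a RAG pipeline can generate questions per
# segment instead of over the whole document at once.

def document_as_dictionary(text: str, segment_size: int = 500,
                           overlap: int = 50) -> dict[int, str]:
    """Return {segment_index: segment_text} for a large document."""
    if segment_size <= overlap:
        raise ValueError("segment_size must exceed overlap")
    step = segment_size - overlap  # stride between segment start positions
    segments: dict[int, str] = {}
    for i, start in enumerate(range(0, max(len(text), 1), step)):
        chunk = text[start:start + segment_size]
        if chunk:  # skip the empty tail produced by an empty document
            segments[i] = chunk
    return segments

# Example: a stand-in for a large source document.
doc = ("word " * 300).strip()
segments = document_as_dictionary(doc, segment_size=200, overlap=20)
print(len(segments))  # → 9
```

Each dictionary entry would then be passed independently to the question-generation prompt, which is one plausible way the reported coverage improvement for segmented documents could be realized.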