A Comparative Performance Analysis of Locally Deployed Large Language Models Through a Retrieval-Augmented Generation Educational Assistant Application for Textual Data Extraction

Bibliographic Details
Main Authors: Amitabh Mishra, Nagaraju Brahmanapally
Format: Article
Language: English
Published: MDPI AG, 2025-06-01
Series: AI
Subjects:
Online Access: https://www.mdpi.com/2673-2688/6/6/119
Description
Summary:<b>Background:</b> Rapid advancements in large language models (LLMs) have significantly enhanced Retrieval-Augmented Generation (RAG) techniques, leading to more accurate and context-aware information retrieval systems. <b>Methods:</b> This article presents the creation of a RAG-based chatbot tailored for university course catalogs, aimed at answering queries about course details and other essential academic information, and investigates its performance by testing it on several locally deployed large language models. By leveraging multiple LLM architectures, we evaluate the performance of the models under test in terms of context length, embedding size, computational efficiency, and relevance of responses. <b>Results:</b> The experimental analysis, which builds on recent comparative studies, reveals that while larger models achieve higher relevance scores, they incur longer response times than smaller, more efficient models. <b>Conclusions:</b> The findings underscore the importance of balancing accuracy and efficiency for real-time educational applications. Overall, this work contributes to the field by offering insights into optimal RAG configurations and practical guidelines for deploying AI-powered educational assistants.
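The retrieve-then-generate pattern the abstract describes can be sketched in miniature. The snippet below is a toy illustration only: the catalog passages, the bag-of-words "embedding", and the prompt template are all hypothetical stand-ins for the paper's actual course catalog, learned embedding models, and locally deployed LLMs.

```python
from collections import Counter
import math

# Hypothetical course-catalog passages standing in for the indexed documents.
CATALOG = [
    "CS 101 Introduction to Programming: 3 credits, no prerequisites.",
    "CS 320 Machine Learning: 3 credits, prerequisite CS 210 Data Structures.",
    "MATH 240 Linear Algebra: 4 credits, prerequisite MATH 140 Calculus I.",
]

def embed(text):
    """Bag-of-words term counts; a real RAG system would use a learned embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, k=1):
    # Rank catalog passages by similarity to the query; return the top k.
    q = embed(query)
    return sorted(CATALOG, key=lambda p: cosine(q, embed(p)), reverse=True)[:k]

def build_prompt(query):
    # Ground the generator by prepending the retrieved context to the question;
    # the resulting prompt would be sent to a locally deployed LLM.
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What is the prerequisite for CS 320 Machine Learning?"))
```

In this sketch, response relevance hinges on the retrieval step while response time is dominated by the generator, which mirrors the accuracy-versus-latency trade-off the abstract reports across model sizes.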
ISSN:2673-2688