A scalable framework for evaluating multiple language models through cross-domain generation and hallucination detection
Abstract: Large language models (LLMs) have advanced significantly in recent years, greatly enhancing the capabilities of retrieval-augmented generation (RAG) systems. However, challenges such as semantic similarity, bias/sentiment, and hallucinations persist, especially in domain-specific applicatio...
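The abstract names semantic similarity as one of the signals used when checking RAG output for hallucinations. The paper's actual pipeline is not reproduced here; the snippet below is only a minimal, self-contained sketch of that general idea, flagging a generated answer as a potential hallucination when its similarity to the retrieved context falls below a threshold. The bag-of-words cosine similarity, the `flag_hallucination` helper, and the 0.3 threshold are illustrative assumptions, not the authors' method, which would typically rely on sentence embeddings rather than lexical overlap.

```python
from collections import Counter
from math import sqrt


def cosine_similarity(a: str, b: str) -> float:
    """Bag-of-words cosine similarity; a simple stand-in for embedding similarity."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va.keys() & vb.keys())
    norm = sqrt(sum(c * c for c in va.values())) * sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0


def flag_hallucination(answer: str, context: str, threshold: float = 0.3) -> bool:
    """Flag an answer that shows little overlap with the retrieved context.

    Both the helper name and the 0.3 threshold are illustrative assumptions.
    """
    return cosine_similarity(answer, context) < threshold


if __name__ == "__main__":
    context = "The framework evaluates multiple LLMs across domains using RAG."
    answer = "The moon is made of green cheese."
    print(flag_hallucination(answer, context))  # True: answer has low similarity to context
```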
| Main Authors: | Sorup Chakraborty, Rajesh Chowdhury, Sourov Roy Shuvo, Rajdeep Chatterjee, Satyabrata Roy |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-08-01 |
| Series: | Scientific Reports |
| Subjects: | |
| Online Access: | https://doi.org/10.1038/s41598-025-15203-5 |
Similar Items
- Hallucination Mitigation for Retrieval-Augmented Large Language Models: A Review
  by: Wan Zhang, et al.
  Published: (2025-03-01)
- Hybrid Multi-Agent GraphRAG for E-Government: Towards a Trustworthy AI Assistant
  by: George Papageorgiou, et al.
  Published: (2025-06-01)
- Retrieval-augmented generation for educational application: A systematic survey
  by: Zongxi Li, et al.
  Published: (2025-06-01)
- Systematic Analysis of Retrieval-Augmented Generation-Based LLMs for Medical Chatbot Applications
  by: Arunabh Bora, et al.
  Published: (2024-10-01)
- Reducing hallucinations of large language models via hierarchical semantic piece
  by: Yanyi Liu, et al.
  Published: (2025-04-01)