Enhancing LLM Reasoning Capabilities Through Brokered Multi-Expert Reflection
Large Language Models (LLMs) have found increasing application in tasks requiring multi-step reasoning, yet challenges such as hallucinations and inconsistencies in the generated responses persist. This study presents an innovative methodology to enhance the reasoning capabilities of LLMs by brokeri...
| Main Authors: | Tejasvee Sheokand, Garveet Jain, Arshdeep Bahga, Vijay K. Madisetti |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Online Access: | https://ieeexplore.ieee.org/document/10966887/ |
Similar Items
- Entropy-Guided KV Caching for Efficient LLM Inference
  by: Heekyum Kim, et al.
  Published: (2025-07-01)
- LLM Agentic Workflow for Automated Vulnerability Detection and Remediation in Infrastructure-as-Code
  by: Dheer Toprani, et al.
  Published: (2025-01-01)
- CoReaAgents: A Collaboration and Reasoning Framework Based on LLM-Powered Agents for Complex Reasoning Tasks
  by: Zhonghe Han, et al.
  Published: (2025-05-01)
- BALI—A Benchmark for Accelerated Language Model Inference
  by: Lena Jurkschat, et al.
  Published: (2025-01-01)
- A model of ensuring LLM cybersecurity
  by: Oleksii Neretin, et al.
  Published: (2025-05-01)