ResilioMate: A Resilient Multi-Agent Task Executing Framework for Enhancing Small Language Models


Bibliographic Details
Main Authors: Yubing Xiong, Mingrui Huang, Xuechen Liang, Meiling Tao
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/10988534/
Description
Summary: Recent advances in large language models (LLMs) have been constrained by their computational requirements and vulnerability to adversarial attacks, while small language models (SLMs) struggle with performance consistency on complex tasks. This research introduces ResilioMate, a resilient multi-agent framework that enhances SLMs through distributed cognitive load allocation, dual-scale memory systems, and collaborative bias-prevention strategies. The method employs dynamic task decomposition across specialized agents (e.g., Assistant, Checker) to minimize computational cost, and combines short-term trajectory tracking with long-term self-reflective optimization for adaptive execution. At its core, the LeptoConnect model series (1.8B/7B parameters) is trained with hybrid attention distillation and dynamic curriculum learning to enable cross-domain competence while retaining SLM efficiency. ResilioMate delivers three key improvements: 1) the 1.8B LeptoConnect model attains 81.6% of GPT-4's performance in knowledge graph construction through parameter-efficient fine-tuning with structured weight matrices; 2) LeptoConnect-7B scores 41.3 on database operations, compared with GPT-4's 32.0, through collaborative cognitive load allocation; and 3) a bias-interception network suppresses adversarial propagation while achieving a code-correction ROUGE-L score of 42.86. The framework's dual-scale memory architecture reduces computational redundancy by 26.4% through real-time task tracking and multi-agent knowledge refinement. These results demonstrate ResilioMate's effectiveness in narrowing the performance gap between SLMs and LLMs, offering a scalable solution for deploying efficient language agents in real-time and edge computing environments.
ISSN: 2169-3536
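
The abstract describes two mechanisms in general terms: task decomposition across an Assistant and a Checker agent, and a dual-scale memory that pairs short-term trajectory tracking with long-term self-reflective optimization. The following minimal Python sketch illustrates how such a loop could be wired together; it is not the authors' implementation, and all class, function, and parameter names here are illustrative assumptions.

# Illustrative sketch only: Assistant/Checker task decomposition with a
# dual-scale memory. Names and interfaces are assumptions, not from the paper.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class DualScaleMemory:
    short_term: List[str] = field(default_factory=list)   # per-task trajectory
    long_term: List[str] = field(default_factory=list)    # cross-task reflections

    def track(self, step: str) -> None:
        self.short_term.append(step)

    def reflect(self) -> None:
        # Promote a summary of the finished trajectory to long-term memory,
        # then clear the short-term buffer for the next task.
        if self.short_term:
            self.long_term.append(" -> ".join(self.short_term))
            self.short_term.clear()

def run_task(task: str,
             assistant: Callable[[str, List[str]], str],
             checker: Callable[[str, str], bool],
             memory: DualScaleMemory,
             max_rounds: int = 3) -> str:
    """Assistant drafts each subtask; Checker accepts or rejects the draft."""
    result = ""
    for subtask in task.split(";"):            # naive decomposition for illustration
        for _ in range(max_rounds):
            draft = assistant(subtask, memory.long_term)
            memory.track(f"{subtask}: {draft}")
            if checker(subtask, draft):        # Checker intercepts faulty outputs
                result += draft + "\n"
                break
    memory.reflect()
    return result

if __name__ == "__main__":
    # Toy stand-ins for SLM-backed agents.
    assistant = lambda sub, hints: f"answer({sub.strip()})"
    checker = lambda sub, draft: draft.startswith("answer")
    mem = DualScaleMemory()
    print(run_task("extract entities; link relations", assistant, checker, mem))
    print("long-term memory:", mem.long_term)

In this sketch the Checker plays the role the abstract assigns to the bias-interception idea (rejecting faulty agent outputs before they propagate), while reflect() stands in for the long-term self-reflective optimization; how ResilioMate actually implements either is specified only in the full article.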