PRISM-Med: Parameter-Efficient Robust Interdomain Specialty Model for Medical Language Tasks

Bibliographic Details
Main Authors: Jieui Kang, Hyungon Ryu, Jaehyeong Sim
Format: Article
Language: English
Published: IEEE, 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10820505/
Description
Summary: Language Models (LMs) have shown remarkable potential in healthcare applications, yet their widespread adoption faces challenges in achieving consistent performance across diverse medical specialties while maintaining parameter efficiency. Current approaches to fine-tuning language models for medical tasks often require extensive computational resources and struggle with managing specialized medical knowledge across different domains. To address these challenges, we present PRISM-Med (Parameter-efficient Robust Interdomain Specialty Model), a novel framework that enhances domain-specific performance through supervised domain classification and specialized adaptation. Our framework introduces three key innovations: (1) a domain detection model that accurately classifies medical text into specific medical domains using supervised learning, (2) a domain-specific Low-Rank Adaptation (LoRA) strategy that enables efficient parameter utilization while preserving specialized knowledge, and (3) a neural domain detector that dynamically selects the most relevant domain-specific models during inference. Through comprehensive empirical evaluation across multiple medical benchmarks (MedProb, MedNER, MedQuAD), we demonstrate that PRISM-Med achieves consistent performance improvements, with gains of up to 10.1% in medical QA tasks and 2.7% in medical knowledge evaluation compared to traditional fine-tuning baselines. Notably, our framework achieves these improvements while using only 0.1% to 0.4% of the parameters required for traditional fine-tuning approaches. PRISM-Med represents a significant advancement in developing efficient and robust medical language models, providing a practical solution for specialized medical applications where both performance and computational efficiency are crucial.
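
A minimal sketch of the routing idea the abstract describes: a supervised domain classifier assigns incoming medical text to a domain, and the matching low-rank (LoRA) adapter is applied on top of a frozen base weight at inference time. The domain list, class names, and rank below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

DOMAINS = ["cardiology", "oncology", "radiology"]  # hypothetical domain set

class LoRALinear(nn.Module):
    """Frozen base linear layer with one low-rank (A, B) adapter per domain."""
    def __init__(self, d_in, d_out, rank=8):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        self.base.weight.requires_grad_(False)  # base weights stay frozen
        self.base.bias.requires_grad_(False)
        # Per-domain update W' = W + B A, with rank << min(d_in, d_out)
        self.A = nn.ParameterDict(
            {d: nn.Parameter(torch.randn(rank, d_in) * 0.01) for d in DOMAINS})
        self.B = nn.ParameterDict(
            {d: nn.Parameter(torch.zeros(d_out, rank)) for d in DOMAINS})

    def forward(self, x, domain):
        # Base projection plus the selected domain's low-rank correction.
        return self.base(x) + x @ self.A[domain].T @ self.B[domain].T

class DomainRouter(nn.Module):
    """Stand-in for the supervised domain detector: features -> domain label."""
    def __init__(self, d_in):
        super().__init__()
        self.clf = nn.Linear(d_in, len(DOMAINS))

    def forward(self, x):
        # Mean-pool token features and pick the most probable domain.
        return DOMAINS[self.clf(x.mean(dim=0)).argmax().item()]

# Toy usage: route encoded text through the adapter its predicted domain selects.
router, layer = DomainRouter(64), LoRALinear(64, 64)
tokens = torch.randn(10, 64)  # stand-in for encoded medical text
domain = router(tokens)
print(domain, layer(tokens, domain).shape)
```

On the parameter-efficiency claim: a rank-r LoRA update for a d_in x d_out weight trains r(d_in + d_out) parameters instead of d_in * d_out, roughly a fraction 2r/d for square layers; for example, r = 8 on a 4096 x 4096 projection trains about 0.39% of that layer's weights, consistent with the 0.1% to 0.4% range the abstract reports.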
ISSN:2169-3536