MixLVMM: A Mixture of Lightweight Vision Mamba Model for Enhancing Skin Lesion Segmentation Across High Tone Variability

Bibliographic Details
Main Authors: Mohamed Lamine Allaoui, Mohand Said Allili
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/11078245/
Description
Summary: Accurate skin lesion segmentation remains a critical challenge in automated dermatological diagnosis due to heterogeneous lesion presentations, ambiguous boundaries, imaging artifacts, and significant variability in skin and lesion tones across diverse populations. Current segmentation methods inadequately address these multifaceted complexities, particularly failing to handle extreme tone variations that can lead to diagnostic bias. To address these limitations, we present the Mixture of Lightweight Vision Mamba Model (MixLVMM), a novel expert-based framework that enhances segmentation performance across high tone variability through specialized processing. Our approach employs a Siamese network with triplet loss as a gate mechanism to categorize lesions based on tonal characteristics, routing each image to specialized Vision Mamba Model (VMM) experts optimized for specific lesion categories. Each expert utilizes a U-shaped architecture incorporating Focused Vision Mamba blocks and Adaptive Salient Region Attention modules to capture lesion-specific features while maintaining computational efficiency. Comprehensive evaluation on ISIC and PH2 datasets demonstrates that MixLVMM achieves superior segmentation performance with an average Dice coefficient of 93%, surpassing state-of-the-art methods while maintaining efficiency with only 2.5M parameters. These results establish MixLVMM as a robust solution for addressing tone-related segmentation challenges in clinical dermatology, offering both high accuracy and practical deployment feasibility for real-world applications. Additional materials and code will be available at https://github.com/MOHAMEDLamine77/MixLVMM
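The gating step described in the abstract can be illustrated with a minimal sketch: a Siamese network trained with triplet loss produces an embedding per image, and each image is routed to the expert whose tone-category prototype lies nearest in embedding space. Everything below (embedding dimension, prototype values, function names) is illustrative and not the authors' implementation.

```python
import numpy as np

def route_to_expert(embedding, prototypes):
    """Return the index of the tone-category prototype nearest to the
    image embedding (Euclidean distance), i.e. which VMM expert should
    segment this image. `prototypes` has shape (n_experts, dim)."""
    dists = np.linalg.norm(prototypes - embedding, axis=1)
    return int(np.argmin(dists))

# Hypothetical 4-dim prototype embeddings for three tone categories.
prototypes = np.array([
    [0.9, 0.1, 0.0, 0.2],  # light-tone category -> expert 0
    [0.5, 0.5, 0.3, 0.4],  # mid-tone category   -> expert 1
    [0.1, 0.9, 0.8, 0.7],  # dark-tone category  -> expert 2
])

# Embedding of an incoming image (as the Siamese gate would produce it).
img_embedding = np.array([0.2, 0.8, 0.7, 0.6])
expert_id = route_to_expert(img_embedding, prototypes)  # nearest: expert 2
```

In the full model, the selected `expert_id` would dispatch the image to one of the specialized U-shaped Vision Mamba experts; the triplet loss is what pulls same-category embeddings together so this nearest-prototype rule is meaningful.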
ISSN: 2169-3536