TransRNetFuse: a highly accurate and precise boundary FCN-transformer feature integration for medical image segmentation

Bibliographic Details
Main Authors: Baotian Li, Jing Zhou, Fangfang Gou, Jia Wu
Format: Article
Language: English
Published: Springer 2025-03-01
Series: Complex & Intelligent Systems
Online Access: https://doi.org/10.1007/s40747-025-01847-3
Description
Summary: Imaging examinations are integral to the diagnosis and treatment of cancer. Nevertheless, the intricate nature of medical images frequently requires physicians to follow time-consuming and potentially fallible diagnostic procedures. In response to these challenges, deep learning-based image segmentation has emerged as a potent instrument for aiding physicians by extracting pivotal information from extensive sets of medical images. Nonetheless, most existing models prioritize overall accuracy while overlooking sensitivity to local salient features and the precision of segmentation boundaries, which limits their practical utility in clinical settings. This study introduces a novel pathological image segmentation method, termed TransRNetFuse, which incorporates stepwise feature aggregation and a residual fully convolutional network architecture. The method addresses the extraction of local key features and the accurate delineation of boundaries in medical image segmentation. The proposed model achieves enhanced overall performance by merging a fully convolutional network branch with a Transformer branch and by employing residual blocks together with dense U-Net skip connections. It prevents attentional dispersion by emphasizing local features, and it further applies an automatic augmentation strategy to identify the optimal data augmentation scheme, which is particularly advantageous for small-sample datasets. Furthermore, the paper introduces an edge enhancement loss function to increase the model's sensitivity to tumor boundaries. A dataset of 2164 pathological images, provided by Hunan Medical University General Hospital, was used for model training. The experimental results indicate that the proposed method outperforms existing techniques, such as MedT, in both accuracy and edge precision, demonstrating its significant potential for application in the medical field. Code: https://github.com/GFF1228/-TransRNetFuse.git
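The record does not give the exact formulation of the edge enhancement loss. Boundary-sensitive losses of this kind are often built by weighting the pixel-wise loss near ground-truth mask edges; the minimal PyTorch sketch below follows that assumption. The class name EdgeWeightedSegLoss, the Sobel-based boundary map, and the edge_weight parameter are illustrative choices, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeWeightedSegLoss(nn.Module):
    """Dice + boundary-weighted BCE (hypothetical formulation).

    The boundary map is derived from the ground-truth mask with fixed
    Sobel filters; pixels near tumor edges receive a larger BCE weight.
    """

    def __init__(self, edge_weight=4.0, smooth=1.0):
        super().__init__()
        self.edge_weight = edge_weight
        self.smooth = smooth
        # Fixed 3x3 Sobel kernels for x/y gradients of the mask.
        sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        sobel_y = sobel_x.t()
        self.register_buffer("kx", sobel_x.reshape(1, 1, 3, 3))
        self.register_buffer("ky", sobel_y.reshape(1, 1, 3, 3))

    def forward(self, logits, target):
        # logits, target: (N, 1, H, W); target is a float binary mask in {0, 1}.
        prob = torch.sigmoid(logits)

        # Dice term for overall region overlap.
        inter = (prob * target).sum(dim=(1, 2, 3))
        denom = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
        dice = 1.0 - (2.0 * inter + self.smooth) / (denom + self.smooth)

        # Boundary map: gradient magnitude of the ground-truth mask.
        gx = F.conv2d(target, self.kx, padding=1)
        gy = F.conv2d(target, self.ky, padding=1)
        edge = (gx.abs() + gy.abs()).clamp(max=1.0)

        # BCE with extra weight on boundary pixels.
        weight = 1.0 + self.edge_weight * edge
        bce = F.binary_cross_entropy_with_logits(logits, target, weight=weight)

        return dice.mean() + bce
```

A sketch like this keeps region overlap (Dice) as the main objective while the weighted BCE term penalizes errors close to tumor boundaries more heavily, which is the stated goal of the paper's edge enhancement loss.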
ISSN: 2199-4536; 2198-6053