TransRNetFuse: a highly accurate and precise boundary FCN-transformer feature integration for medical image segmentation
Abstract: Imaging examinations are integral to the diagnosis and treatment of cancer. Nevertheless, the intricate nature of medical images frequently necessitates that physicians follow time-consuming and potentially fallible diagnostic procedures. In response to these challenges, deep learning-based image segmentation has emerged as a potent instrument for aiding physicians in navigating diagnostic complexities by extracting pivotal information from extensive sets of medical images. Nonetheless, the majority of existing models prioritize overall accuracy, often overlooking sensitivity to local salient features and the precision of segmentation boundaries, which limits their practical utility in clinical settings. This study introduces a novel pathological image segmentation method, termed TransRNetFuse, which incorporates stepwise feature aggregation and a residual fully convolutional network architecture to address the extraction of local key features and the accurate delineation of boundaries. The proposed model achieves enhanced overall performance by merging a fully convolutional network branch with a Transformer branch and utilizing residual blocks along with dense U-Net skip connections. It prevents attentional dispersion by emphasizing local features, and further employs an automatic augmentation strategy to identify the optimal data augmentation scheme, which is particularly advantageous for small-sample datasets. Furthermore, an edge-enhancement loss function increases the model's sensitivity to tumor boundaries. A dataset comprising 2164 pathological images, provided by Hunan Medical University General Hospital, was used for model training. Experimental results indicate that the proposed method outperforms existing techniques, such as MedT, in both accuracy and edge precision, demonstrating its significant potential for clinical application. Code: https://github.com/GFF1228/-TransRNetFuse.git
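The edge-enhancement loss described in the abstract can be sketched as a boundary-weighted pixel loss. The snippet below is a minimal illustration under assumptions, not the paper's actual implementation: it assumes a PyTorch setting, locates boundary pixels with a Sobel response on the binary ground-truth mask, and up-weights their contribution to a binary cross-entropy loss. The edge operator, the weight `w`, and both function names are hypothetical choices for illustration.

```python
import torch
import torch.nn.functional as F

def edge_weight_map(mask: torch.Tensor, w: float = 4.0) -> torch.Tensor:
    """Per-pixel weight map that up-weights boundary pixels of a binary mask.

    mask: (N, 1, H, W) ground-truth masks with values in {0, 1}.
    Boundary detection via a Sobel response is an assumption; the paper's
    exact edge operator is not specified here.
    """
    sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                           device=mask.device).view(1, 1, 3, 3)
    sobel_y = sobel_x.transpose(2, 3)
    gx = F.conv2d(mask, sobel_x, padding=1)
    gy = F.conv2d(mask, sobel_y, padding=1)
    edges = ((gx.abs() + gy.abs()) > 0).float()  # 1 near boundaries, else 0
    return 1.0 + w * edges  # interior/background pixels weigh 1, edges 1 + w

def edge_enhanced_bce(logits: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Pixel-wise BCE, re-weighted toward tumor boundaries."""
    weights = edge_weight_map(mask)
    loss = F.binary_cross_entropy_with_logits(logits, mask, reduction="none")
    return (weights * loss).mean()
```

In practice such a term would typically be combined with a region loss (e.g. Dice) so that boundary emphasis does not dominate training of the interior.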
| Main Authors: | Baotian Li, Jing Zhou, Fangfang Gou, Jia Wu |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Springer, 2025-03-01 |
| Series: | Complex & Intelligent Systems |
| Subjects: | Artificial intelligence; Pathological image segmentation; FCN-transformer; Edge-enhancement |
| Online Access: | https://doi.org/10.1007/s40747-025-01847-3 |
| author | Baotian Li; Jing Zhou; Fangfang Gou; Jia Wu |
|---|---|
| collection | DOAJ |
| description | Abstract: Imaging examinations are integral to the diagnosis and treatment of cancer. Nevertheless, the intricate nature of medical images frequently necessitates that physicians follow time-consuming and potentially fallible diagnostic procedures. In response to these challenges, deep learning-based image segmentation technology has emerged as a potent instrument for aiding physicians in navigating diagnostic complexities by extracting pivotal information from extensive sets of medical images. Nonetheless, the majority of existing models prioritize overall high accuracy, often overlooking the sensitivity to local salient features and the precision of segmentation boundaries. This oversight limits the full realization of the practical utility of deep learning models in clinical settings. This study introduces a novel pathological image segmentation method, termed TransRNetFuse, which incorporates stepwise feature aggregation and a residual fully convolutional network architecture. The objective of this method is to address the issues associated with the extraction of local key features and the accurate delineation of boundaries in medical image segmentation. The proposed model achieves enhanced overall performance by merging a fully convolutional network branch with a Transformer branch and utilizing residual blocks along with dense U-Net skip connections. It prevents attentional dispersion by emphasizing local features, and further employs an automatic augmentation strategy to identify the optimal data augmentation scheme, which is particularly advantageous for small-sample datasets. Furthermore, this paper introduces an edge-enhancement loss function to enhance the model's sensitivity to tumor boundaries. A dataset comprising 2164 pathological images, provided by Hunan Medical University General Hospital, was utilized for model training. The experimental results indicate that the proposed method outperforms existing techniques, such as MedT, in terms of both accuracy and edge precision, thereby demonstrating its significant potential for application in the medical field. Code: https://github.com/GFF1228/-TransRNetFuse.git |
| format | Article |
| id | doaj-art-03b0855c02804ddeb5b6f2793c68dc0a |
| institution | DOAJ |
| issn | 2199-4536; 2198-6053 |
| language | English |
| publishDate | 2025-03-01 |
| publisher | Springer |
| record_format | Article |
| series | Complex & Intelligent Systems |
| affiliations | Baotian Li: School of Information Engineering, Shandong Youth University of Political Science; Jing Zhou: Hunan University of Medicine General Hospital; Fangfang Gou: State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University; Jia Wu: State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University |
| title | TransRNetFuse: a highly accurate and precise boundary FCN-transformer feature integration for medical image segmentation |
| topic | Artificial intelligence; Pathological image segmentation; FCN-transformer; Edge-enhancement |
| url | https://doi.org/10.1007/s40747-025-01847-3 |