A 3D semantic segmentation network for accurate neuronal soma segmentation

Bibliographic Details
Main Authors: Li Ma, Qi Zhong, Yezi Wang, Xiaoquan Yang, Qian Du
Format: Article
Language: English
Published: World Scientific Publishing, 2025-01-01
Series: Journal of Innovative Optical Health Sciences
Online Access: https://www.worldscientific.com/doi/10.1142/S1793545824500184
Description
Summary: Neuronal soma segmentation plays a crucial role in neuroscience applications. However, fine structures such as boundaries, small-volume neuronal somata, and fibers are commonly present in cell images and pose a challenge for accurate segmentation. In this paper, we propose a 3D semantic segmentation network for neuronal soma segmentation to address this issue. Building on an encoder-decoder structure, we introduce a Multi-Scale feature extraction and Adaptive Weighting fusion module (MSAW) after each encoding block. The MSAW module not only emphasizes fine structures via an upsampling strategy, but also provides pixel-wise weights that measure the importance of the multi-scale features. Additionally, dynamic convolution is employed in place of standard convolution so that the network better adapts to input data with different distributions. The proposed MSAW-based semantic segmentation network (MSAW-Net) was evaluated on three neuronal soma images from mouse brain and one neuronal soma image from macaque brain, demonstrating the effectiveness of the proposed method. It achieved an F1 score of 91.8% on the Fezf2-2A-CreER dataset, 97.1% on the LSL-H2B-GFP dataset, 82.8% on the Thy1-EGFP-Mline dataset, and 86.9% on the macaque dataset, improving on the 3D U-Net model by 3.1%, 3.3%, 3.9%, and 2.3%, respectively.
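To make the fusion mechanism concrete, below is a minimal PyTorch-style sketch of an MSAW-like module. Only the overall idea, multi-scale feature extraction with an upsampling branch and adaptive pixel-wise weighted fusion, comes from the abstract; the number of scales, the per-branch convolution design, and the use of a 1x1x1 convolution with a softmax to produce the per-voxel weights are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of an MSAW-style module; hyperparameters and the
# weighting mechanism are assumptions, not the paper's reference code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MSAW(nn.Module):
    """Multi-Scale feature extraction and Adaptive Weighting fusion (sketch)."""

    def __init__(self, channels: int, scales=(1, 2)):
        super().__init__()
        self.scales = scales
        # One 3D conv branch per scale (hypothetical branch design).
        self.branches = nn.ModuleList(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1)
            for _ in scales
        )
        # Predict one weight map per scale; softmax normalizes the weights
        # so that, at every voxel, they sum to 1 across scales (an assumed
        # realization of the abstract's "pixel-wise weights").
        self.weight_conv = nn.Conv3d(len(scales) * channels, len(scales),
                                     kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        d, h, w = x.shape[2:]
        feats = []
        for scale, conv in zip(self.scales, self.branches):
            if scale == 1:
                f = conv(x)
            else:
                # Upsample, convolve at the finer resolution to emphasize
                # fine structures, then resize back to the input grid.
                up = F.interpolate(x, scale_factor=scale, mode="trilinear",
                                   align_corners=False)
                f = F.interpolate(conv(up), size=(d, h, w), mode="trilinear",
                                  align_corners=False)
            feats.append(f)
        # Per-voxel importance weights over the scale branches.
        weights = torch.softmax(self.weight_conv(torch.cat(feats, dim=1)), dim=1)
        fused = sum(weights[:, i:i + 1] * f for i, f in enumerate(feats))
        return fused
```

In the architecture described above, one such module would sit after each encoding block of the encoder-decoder network; the dynamic convolution the abstract mentions is a separate component and is not modeled in this sketch.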
ISSN: 1793-5458, 1793-7205