A Modified Parallel NET (MPNET)-Based Deep Learning Technique for the Segmentation and Quantification of Visceral and Superficial Adipose Tissues of CT Scans

Bibliographic Details
Main Authors: Josteve Adekanbi, Debashish Das, Nouh Elmitwally, Aliyuda Ali, Vince I. Madai, Bahadar Bhatia, John Morlese, Muhammed Afsal, Tanay Shah
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10870231/
Description
Summary: This study introduces the modified parallel net (MPNET), a novel deep learning model designed for accurate segmentation and quantification of visceral and superficial adipose tissues. The model was used to quantify the visceral and superficial adipose tissues found at the L3 vertebral level in CT scans, with the goal of predicting the likelihood of a patient developing diabetes or cardiovascular disease from existing CT scan data. MPNET was compared with state-of-the-art models such as UNET, R2UNET, UNET++, and nnUNET. This approach advances the accuracy and efficiency of image segmentation, demonstrating a faster learning curve and lower losses at early epochs than traditional models. We developed and validated the model using a limited dataset of 14 single-slice DICOM files, one per patient, extracted from the UK National Health Service (NHS). MPNET's outputs not only matched but often exceeded those of traditional models on metrics such as the Dice coefficient and IoU, while providing more nuanced anatomical delineation and greater clinical realism and applicability in the segmentation results. As a pilot study, this research paves the way for a forthcoming validation study on a larger and more ethnically diverse dataset.
ISSN: 2169-3536
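
Note: the summary reports segmentation quality in terms of the Dice coefficient and IoU but does not show how these metrics are computed. The following is a minimal Python/NumPy sketch of both metrics on binary segmentation masks; it is not code from the paper, and the function names, the epsilon smoothing term, and the toy masks are illustrative assumptions only.

# Minimal sketch (not the paper's implementation): Dice coefficient and
# IoU for binary segmentation masks, the two metrics named in the summary.
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|P intersect T| / (|P| + |T|) for boolean masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """IoU (Jaccard index) = |P intersect T| / |P union T| for boolean masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)

# Toy example: compare a hypothetical predicted adipose-tissue mask
# against a ground-truth mask (both invented for illustration).
pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
print(f"Dice: {dice_coefficient(pred, truth):.3f}, IoU: {iou(pred, truth):.3f}")
# Prints Dice: 0.667, IoU: 0.500 -- Dice is always >= IoU on the same
# pair of masks, which is why papers often report both.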