Pediatric BurnNet: Robust multi-class segmentation and severity recognition under real-world imaging conditions

Objective: To establish and validate a deep learning model that simultaneously segments pediatric burn wounds and grades burn depth under complex, real-world imaging conditions. Methods: We retrospectively collected 4785 smartphone or camera photographs from hospitalized children over 5 years and annotated 14,355 burn regions as superficial second-degree, deep second-degree, or third-degree. Images were resized to 256 × 256 pixels and augmented by flipping and random rotation. A DeepLabv3 network with a ResNet101 backbone was enhanced with channel and spatial attention modules, dropout-reinforced Atrous Spatial Pyramid Pooling, and a weighted cross-entropy loss to counter class imbalance. Ten-fold cross-validation (60 epochs, batch size 8) was performed using the Adam optimizer (learning rate 1 × 10⁻⁴). Results: The proposed Deep Fusion Network (attention-enhanced DeepLabv3-ResNet101, Dfusion) model achieved a mean segmentation Dice coefficient of 0.8766 ± 0.012 and an intersection-over-union of 0.8052 ± 0.015. Classification results demonstrated an accuracy of 97.65%, precision of 88.26%, recall of 86.76%, and an F1-score of 85.33%. Receiver operating characteristic curve analysis yielded area under the curve values of 0.82 for superficial second-degree, 0.76 for deep second-degree, and 0.78 for third-degree burns. Compared with baseline DeepLabv3, FCN-ResNet101, U-Net-ResNet101, and MobileNet models, Dfusion improved Dice by 15.2%–19.7% and intersection-over-union by 14.9%–23.5% (all p < 0.01). Inference speed was 0.38 ± 0.03 s per image on an NVIDIA GTX 1060 GPU, highlighting the modest computational demands suitable for mobile deployment. Conclusion: Dfusion provides accurate, end-to-end segmentation and depth grading of pediatric burn wounds captured in uncontrolled environments. Its robust performance and modest computational demand support deployment on mobile devices, offering rapid, objective assistance for clinicians in resource-limited settings and enabling more precise triage and treatment planning for pediatric burn care.

Bibliographic Details
Main Authors: Xiang Li, Zhen Liu, Lei Liu
Format: Article
Language:English
Published: SAGE Publishing 2025-07-01
Series:SAGE Open Medicine
Online Access:https://doi.org/10.1177/20503121251360090
author Xiang Li
Zhen Liu
Lei Liu
collection DOAJ
description Objective: To establish and validate a deep learning model that simultaneously segments pediatric burn wounds and grades burn depth under complex, real-world imaging conditions. Methods: We retrospectively collected 4785 smartphone or camera photographs from hospitalized children over 5 years and annotated 14,355 burn regions as superficial second-degree, deep second-degree, or third-degree. Images were resized to 256 × 256 pixels and augmented by flipping and random rotation. A DeepLabv3 network with a ResNet101 backbone was enhanced with channel and spatial attention modules, dropout-reinforced Atrous Spatial Pyramid Pooling, and a weighted cross-entropy loss to counter class imbalance. Ten-fold cross-validation (60 epochs, batch size 8) was performed using the Adam optimizer (learning rate 1 × 10⁻⁴). Results: The proposed Deep Fusion Network (attention-enhanced DeepLabv3-ResNet101, Dfusion) model achieved a mean segmentation Dice coefficient of 0.8766 ± 0.012 and an intersection-over-union of 0.8052 ± 0.015. Classification results demonstrated an accuracy of 97.65%, precision of 88.26%, recall of 86.76%, and an F1-score of 85.33%. Receiver operating characteristic curve analysis yielded area under the curve values of 0.82 for superficial second-degree, 0.76 for deep second-degree, and 0.78 for third-degree burns. Compared with baseline DeepLabv3, FCN-ResNet101, U-Net-ResNet101, and MobileNet models, Dfusion improved Dice by 15.2%–19.7% and intersection-over-union by 14.9%–23.5% (all p < 0.01). Inference speed was 0.38 ± 0.03 s per image on an NVIDIA GTX 1060 GPU, highlighting the modest computational demands suitable for mobile deployment. Conclusion: Dfusion provides accurate, end-to-end segmentation and depth grading of pediatric burn wounds captured in uncontrolled environments. Its robust performance and modest computational demand support deployment on mobile devices, offering rapid, objective assistance for clinicians in resource-limited settings and enabling more precise triage and treatment planning for pediatric burn care.
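For readers unfamiliar with the quantities named in the abstract, the sketch below illustrates the Dice coefficient and intersection-over-union used to score segmentation, and a weighted cross-entropy of the kind described for countering class imbalance. This is an illustrative NumPy sketch, not the authors' implementation; the array shapes and class weights are assumptions.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for a pair of binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return float(2.0 * intersection / denom) if denom else 1.0

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection-over-union (Jaccard index) for a pair of binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float(intersection / union) if union else 1.0

def weighted_cross_entropy(probs: np.ndarray, labels: np.ndarray,
                           class_weights: np.ndarray) -> float:
    """Cross-entropy with per-class weights (larger weight = rarer class).

    probs:         (N, C) softmax probabilities per pixel
    labels:        (N,)   integer ground-truth class indices
    class_weights: (C,)   hypothetical per-class weights
    """
    eps = 1e-12
    picked = probs[np.arange(len(labels)), labels]  # probability of true class
    w = class_weights[labels]                       # weight of each pixel's class
    return float(-(w * np.log(picked + eps)).sum() / w.sum())
```

For any single mask pair the two overlap scores are related by Dice = 2·IoU / (1 + IoU); with uniform class weights the weighted cross-entropy reduces to the ordinary mean negative log-likelihood.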
format Article
id doaj-art-e3a5436b396244a7af26029973571fb7
institution Kabale University
issn 2050-3121
language English
publishDate 2025-07-01
publisher SAGE Publishing
record_format Article
series SAGE Open Medicine
title Pediatric BurnNet: Robust multi-class segmentation and severity recognition under real-world imaging conditions
url https://doi.org/10.1177/20503121251360090
work_keys_str_mv AT xiangli pediatricburnnetrobustmulticlasssegmentationandseverityrecognitionunderrealworldimagingconditions
AT zhenliu pediatricburnnetrobustmulticlasssegmentationandseverityrecognitionunderrealworldimagingconditions
AT leiliu pediatricburnnetrobustmulticlasssegmentationandseverityrecognitionunderrealworldimagingconditions