Auto-Segmentation via deep-learning approaches for the assessment of flap volume after reconstructive surgery or radiotherapy in head and neck cancer
Abstract Reconstructive flap surgery aims to restore the substance and function losses associated with tumor resection. Automatic flap segmentation could allow quantification of flap volume and correlations with functional outcomes after surgery or post-operative RT (poRT). Flaps being ectopic tissu...
| Main Authors: | Juliette Thariat, Zacharia Mesbah, Youssef Chahir, Arnaud Beddok, Alice Blache, Jean Bourhis, Abir Fatallah, Mathieu Hatt, Romain Modzelewski |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-07-01 |
| Series: | Scientific Reports |
| Subjects: | Head and neck neoplasms, Surgery, Flap, Volume, Neural networks (Computer) |
| Online Access: | https://doi.org/10.1038/s41598-025-08073-4 |
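The abstract describes preprocessing CT scans with Hounsfield Unit (HU) windowing to mimic radiological assessment. As a rough illustration of that step, here is a minimal sketch; the function name and the soft-tissue window settings (level 50 HU, width 400 HU) are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of HU windowing as a CT preprocessing step.
# The level/width values are generic soft-tissue settings chosen
# for illustration, not the study's actual configuration.
import numpy as np

def window_ct(hu: np.ndarray, level: float = 50.0, width: float = 400.0) -> np.ndarray:
    """Clip a CT volume to an HU window and rescale intensities to [0, 1]."""
    lo, hi = level - width / 2.0, level + width / 2.0
    return (np.clip(hu, lo, hi) - lo) / (hi - lo)
```

Windowing discards intensity variation outside the range of interest (e.g. air at -1000 HU, dense bone above +250 HU here), so the network sees the same contrast a radiologist would use when reading soft tissue.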
| _version_ | 1849238524360916992 |
|---|---|
| author | Juliette Thariat Zacharia Mesbah Youssef Chahir Arnaud Beddok Alice Blache Jean Bourhis Abir Fatallah Mathieu Hatt Romain Modzelewski |
| author_facet | Juliette Thariat Zacharia Mesbah Youssef Chahir Arnaud Beddok Alice Blache Jean Bourhis Abir Fatallah Mathieu Hatt Romain Modzelewski |
| author_sort | Juliette Thariat |
| collection | DOAJ |
| description | Abstract Reconstructive flap surgery aims to restore the substance and function losses associated with tumor resection. Automatic flap segmentation could allow quantification of flap volume and correlation with functional outcomes after surgery or post-operative radiotherapy (poRT). Because flaps are ectopic tissues composed of various components (fat, skin, fascia, muscle, bone) with varying volume, shape, and texture, and because the postoperative bed shows anatomical modifications, inflammation, and edema, the segmentation task is challenging. We built an artificial-intelligence-enabled automatic soft-tissue flap segmentation method for CT scans of Head and Neck Cancer (HNC) patients. Ground-truth flap segmentation masks were delineated by two experts on postoperative CT scans of 148 HNC patients undergoing poRT. All CTs and flaps (free or pedicled, soft tissue only or bone) were kept, including those with artifacts, to ensure generalizability. A deep-learning nnUNetv2 framework was built using Hounsfield Unit (HU) windowing to mimic radiological assessment. A transformer-based 2D “Segment Anything Model” fine-tuned on medical CTs (MedSAM) was also built. Models were compared using the Dice Similarity Coefficient (DSC) and 95th-percentile Hausdorff Distance (HD95) metrics. Flaps were in the oral cavity (N = 102), oropharynx (N = 26), or larynx/hypopharynx (N = 20). There were free flaps (N = 137) and pedicled flaps (N = 11), comprising soft tissue only (N = 92), reconstructed bone (N = 42), or bone resected without reconstruction (N = 40). The nnUNet-windowing model outperformed the nnUNetv2 and MedSAM models, achieving a mean DSC of 0.69 and a mean HD95 of 25.6 mm under 5-fold cross-validation. Segmentation performed better in the absence of artifacts and worse in rare situations such as pedicled flaps, laryngeal primaries, and bone resected without reconstruction (p < 0.01). Automatic flap segmentation demonstrates clinically usable performance, allowing quantification of spontaneous and radiation-induced flap volume shrinkage. Free flaps achieved excellent performance; rare situations will be addressed by fine-tuning the network. |
| format | Article |
| id | doaj-art-2ffec663b9cb4e368ab50d6ad0232335 |
| institution | Kabale University |
| issn | 2045-2322 |
| language | English |
| publishDate | 2025-07-01 |
| publisher | Nature Portfolio |
| record_format | Article |
| series | Scientific Reports |
| spelling | doaj-art-2ffec663b9cb4e368ab50d6ad02323352025-08-20T04:01:35ZengNature PortfolioScientific Reports2045-23222025-07-011511910.1038/s41598-025-08073-4Auto-Segmentation via deep-learning approaches for the assessment of flap volume after reconstructive surgery or radiotherapy in head and neck cancerJuliette Thariat0Zacharia Mesbah1Youssef Chahir2Arnaud Beddok3Alice Blache4Jean Bourhis5Abir Fatallah6Mathieu Hatt7Romain Modzelewski8Department of Radiotherapy, Centre François-BaclesseLITIS - UR4108 - Quantif, University of RouenImage Team GREYC-CNRS UMR, University of CaenInstitut Curie, PSL Research University, University Paris Saclay, Inserm LITO, U1288University HospitalGORTEC. 4bis rue Emile ZolaLaTIM, INSERM, UMR 1101, Univ BrestLaTIM, INSERM, UMR 1101, Univ BrestLITIS - UR4108 - Quantif, University of RouenAbstract Reconstructive flap surgery aims to restore the substance and function losses associated with tumor resection. Automatic flap segmentation could allow quantification of flap volume and correlation with functional outcomes after surgery or post-operative radiotherapy (poRT). Because flaps are ectopic tissues composed of various components (fat, skin, fascia, muscle, bone) with varying volume, shape, and texture, and because the postoperative bed shows anatomical modifications, inflammation, and edema, the segmentation task is challenging. We built an artificial-intelligence-enabled automatic soft-tissue flap segmentation method for CT scans of Head and Neck Cancer (HNC) patients. Ground-truth flap segmentation masks were delineated by two experts on postoperative CT scans of 148 HNC patients undergoing poRT. All CTs and flaps (free or pedicled, soft tissue only or bone) were kept, including those with artifacts, to ensure generalizability. A deep-learning nnUNetv2 framework was built using Hounsfield Unit (HU) windowing to mimic radiological assessment. A transformer-based 2D “Segment Anything Model” fine-tuned on medical CTs (MedSAM) was also built. Models were compared using the Dice Similarity Coefficient (DSC) and 95th-percentile Hausdorff Distance (HD95) metrics. Flaps were in the oral cavity (N = 102), oropharynx (N = 26), or larynx/hypopharynx (N = 20). There were free flaps (N = 137) and pedicled flaps (N = 11), comprising soft tissue only (N = 92), reconstructed bone (N = 42), or bone resected without reconstruction (N = 40). The nnUNet-windowing model outperformed the nnUNetv2 and MedSAM models, achieving a mean DSC of 0.69 and a mean HD95 of 25.6 mm under 5-fold cross-validation. Segmentation performed better in the absence of artifacts and worse in rare situations such as pedicled flaps, laryngeal primaries, and bone resected without reconstruction (p < 0.01). Automatic flap segmentation demonstrates clinically usable performance, allowing quantification of spontaneous and radiation-induced flap volume shrinkage. Free flaps achieved excellent performance; rare situations will be addressed by fine-tuning the network.https://doi.org/10.1038/s41598-025-08073-4Head and neck neoplasmsSurgeryFlapVolumeNeural networks (Computer) |
| spellingShingle | Juliette Thariat Zacharia Mesbah Youssef Chahir Arnaud Beddok Alice Blache Jean Bourhis Abir Fatallah Mathieu Hatt Romain Modzelewski Auto-Segmentation via deep-learning approaches for the assessment of flap volume after reconstructive surgery or radiotherapy in head and neck cancer Scientific Reports Head and neck neoplasms Surgery Flap Volume Neural networks (Computer) |
| title | Auto-Segmentation via deep-learning approaches for the assessment of flap volume after reconstructive surgery or radiotherapy in head and neck cancer |
| title_full | Auto-Segmentation via deep-learning approaches for the assessment of flap volume after reconstructive surgery or radiotherapy in head and neck cancer |
| title_fullStr | Auto-Segmentation via deep-learning approaches for the assessment of flap volume after reconstructive surgery or radiotherapy in head and neck cancer |
| title_full_unstemmed | Auto-Segmentation via deep-learning approaches for the assessment of flap volume after reconstructive surgery or radiotherapy in head and neck cancer |
| title_short | Auto-Segmentation via deep-learning approaches for the assessment of flap volume after reconstructive surgery or radiotherapy in head and neck cancer |
| title_sort | auto segmentation via deep learning approaches for the assessment of flap volume after reconstructive surgery or radiotherapy in head and neck cancer |
| topic | Head and neck neoplasms Surgery Flap Volume Neural networks (Computer) |
| url | https://doi.org/10.1038/s41598-025-08073-4 |
| work_keys_str_mv | AT juliettethariat autosegmentationviadeeplearningapproachesfortheassessmentofflapvolumeafterreconstructivesurgeryorradiotherapyinheadandneckcancer AT zachariamesbah autosegmentationviadeeplearningapproachesfortheassessmentofflapvolumeafterreconstructivesurgeryorradiotherapyinheadandneckcancer AT youssefchahir autosegmentationviadeeplearningapproachesfortheassessmentofflapvolumeafterreconstructivesurgeryorradiotherapyinheadandneckcancer AT arnaudbeddok autosegmentationviadeeplearningapproachesfortheassessmentofflapvolumeafterreconstructivesurgeryorradiotherapyinheadandneckcancer AT aliceblache autosegmentationviadeeplearningapproachesfortheassessmentofflapvolumeafterreconstructivesurgeryorradiotherapyinheadandneckcancer AT jeanbourhis autosegmentationviadeeplearningapproachesfortheassessmentofflapvolumeafterreconstructivesurgeryorradiotherapyinheadandneckcancer AT abirfatallah autosegmentationviadeeplearningapproachesfortheassessmentofflapvolumeafterreconstructivesurgeryorradiotherapyinheadandneckcancer AT mathieuhatt autosegmentationviadeeplearningapproachesfortheassessmentofflapvolumeafterreconstructivesurgeryorradiotherapyinheadandneckcancer AT romainmodzelewski autosegmentationviadeeplearningapproachesfortheassessmentofflapvolumeafterreconstructivesurgeryorradiotherapyinheadandneckcancer |
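The study compares segmentation models with the Dice Similarity Coefficient (DSC) and the 95th-percentile Hausdorff distance (HD95). The sketch below shows one common way to compute both metrics for binary 3D masks using NumPy/SciPy surface extraction and distance transforms; the function names are illustrative, and this is an assumption about the metric definitions, not the paper's actual evaluation code.

```python
# Hedged sketch: DSC and HD95 between two binary segmentation masks.
# Uses surface voxels (mask minus its erosion) and Euclidean distance
# transforms; `spacing` carries voxel size in mm for anisotropic CTs.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|); 1.0 by convention when both are empty."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * (pred & gt).sum() / denom if denom else 1.0

def _surface(mask: np.ndarray) -> np.ndarray:
    # Surface voxels: foreground voxels removed by one erosion step.
    return mask & ~binary_erosion(mask)

def hd95(pred: np.ndarray, gt: np.ndarray,
         spacing=(1.0, 1.0, 1.0)) -> float:
    """Symmetric 95th percentile of surface-to-surface distances (mm)."""
    sp, sg = _surface(pred.astype(bool)), _surface(gt.astype(bool))
    # Distance from each surface voxel to the other mask's nearest surface voxel.
    d_to_gt = distance_transform_edt(~sg, sampling=spacing)[sp]
    d_to_pred = distance_transform_edt(~sp, sampling=spacing)[sg]
    return float(np.percentile(np.hstack([d_to_gt, d_to_pred]), 95))
```

Taking the 95th percentile rather than the maximum makes the boundary-distance metric robust to a few outlier voxels, which matters on postoperative CTs where artifacts can produce isolated false-positive islands.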