A comprehensive evaluation of deep vision transformers for road extraction from very-high-resolution satellite data

Bibliographic Details
Main Authors: Jan Bolcek, Mohamed Barakat A. Gibril, Rami Al-Ruzouq, Abdallah Shanableh, Ratiranjan Jena, Nezar Hammouri, Mourtadha Sarhan Sachit, Omid Ghorbanzadeh
Format: Article
Language: English
Published: Elsevier 2025-06-01
Series: Science of Remote Sensing
Subjects: Remote sensing; Road extraction; Satellite data; Semantic segmentation; Vision Transformers
Online Access:http://www.sciencedirect.com/science/article/pii/S2666017224000749
collection DOAJ
description Transformer-based semantic segmentation architectures excel at extracting road networks from very-high-resolution (VHR) satellite images because of their ability to capture global contextual information. Nonetheless, there is a gap in research regarding their comparative effectiveness, efficiency, and performance in extracting road networks from multicity VHR data. This study evaluates 11 transformer-based models on three publicly available datasets (the DeepGlobe Road Extraction Dataset, the SpaceNet-3 Road Network Detection Dataset, and the Massachusetts Road Dataset) to assess their performance, efficiency, and complexity in mapping road networks from multicity, multidate, and multisensor VHR optical satellite images. The evaluated models include Unified Perceptual Parsing for Scene Understanding (UperNet) based on the Swin transformer (UperNet-SwinT), UperNet based on the Multi-path Vision Transformer (UperNet-MpViT), the Twins transformer, Segmenter, SegFormer, K-Net based on SwinT, Mask2Former based on SwinT (Mask2Former-SwinT), TopFormer, UniFormer, and PoolFormer. The models recorded mean F-scores (mF-scores) ranging from 82.22% to 90.70% on the DeepGlobe dataset, 58.98% to 86.95% on the Massachusetts dataset, and 69.02% to 86.14% on the SpaceNet-3 dataset. Mask2Former-SwinT, UperNet-MpViT, and SegFormer were the top performers among the evaluated models, with Mask2Former-SwinT demonstrating a strong balance of high performance across the different satellite image datasets and moderate computational efficiency. This investigation aids in selecting the most suitable model for extracting road networks from remote sensing data.
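The record does not include the paper's evaluation code. As a minimal sketch of the reported metric, the following assumes the mF-score is the standard per-class F-score (the harmonic mean of precision and recall) averaged over the background and road classes of a binary segmentation mask; the function names and class encoding are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def f_score(pred, truth, cls):
        """Per-class F-score: harmonic mean of precision and recall for class `cls`."""
        tp = np.sum((pred == cls) & (truth == cls))  # true positives
        fp = np.sum((pred == cls) & (truth != cls))  # false positives
        fn = np.sum((pred != cls) & (truth == cls))  # false negatives
        precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
        recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
        denom = precision + recall
        return 2.0 * precision * recall / denom if denom > 0 else 0.0

    def mean_f_score(pred, truth, classes=(0, 1)):
        """mF-score: per-class F-scores averaged over background (0) and road (1),
        by assumption."""
        return float(np.mean([f_score(pred, truth, c) for c in classes]))

    # Toy 4x4 masks (hypothetical data): 1 = road pixel, 0 = background.
    truth = np.array([[0, 1, 1, 0],
                      [0, 1, 1, 0],
                      [0, 1, 1, 0],
                      [0, 1, 1, 0]])
    pred  = np.array([[0, 1, 1, 0],
                      [0, 1, 0, 0],
                      [0, 1, 1, 1],
                      [0, 1, 1, 0]])
    print(f"mF-score: {mean_f_score(pred, truth):.4f}")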
format Article
id doaj-art-f3455d0ddd214250bba1fae55dfeccdf
institution Kabale University
issn 2666-0172
language English
publishDate 2025-06-01
publisher Elsevier
record_format Article
series Science of Remote Sensing
spelling Science of Remote Sensing, Vol. 11, Article 100190, published 2025-06-01 by Elsevier (ISSN 2666-0172); record doaj-art-f3455d0ddd214250bba1fae55dfeccdf, indexed 2025-01-03T04:08:58Z; language English.
Author affiliations:
Jan Bolcek: GIS and Remote Sensing Center, Research Institute of Sciences and Engineering, University of Sharjah, Sharjah 27272, United Arab Emirates; Department of Radio Electronics, Faculty of Electrical Engineering and Communication, Brno University of Technology, Brno-Kralovo pole, 61600, Czech Republic
Mohamed Barakat A. Gibril: GIS and Remote Sensing Center, Research Institute of Sciences and Engineering, University of Sharjah, Sharjah 27272, United Arab Emirates
Rami Al-Ruzouq: GIS and Remote Sensing Center, Research Institute of Sciences and Engineering, University of Sharjah, Sharjah 27272, United Arab Emirates
Abdallah Shanableh: GIS and Remote Sensing Center, Research Institute of Sciences and Engineering, University of Sharjah, Sharjah 27272, United Arab Emirates; Scientific Research Center, Australian University, Kuwait
Ratiranjan Jena: GIS and Remote Sensing Center, Research Institute of Sciences and Engineering, University of Sharjah, Sharjah 27272, United Arab Emirates
Nezar Hammouri: GIS and Remote Sensing Center, Research Institute of Sciences and Engineering, University of Sharjah, Sharjah 27272, United Arab Emirates
Mourtadha Sarhan Sachit: Department of Civil Engineering, College of Engineering, University of Thi-Qar, 64001, Nasiriyah, Thi-Qar, Iraq
Omid Ghorbanzadeh (corresponding author): Institute of Geomatics, University of Natural Resources and Life Sciences, Peter-Jordan Strasse 82, 1190 Vienna, Austria
title A comprehensive evaluation of deep vision transformers for road extraction from very-high-resolution satellite data
topic Remote sensing
Road extraction
Satellite data
Semantic segmentation
Vision Transformers
url http://www.sciencedirect.com/science/article/pii/S2666017224000749