Uncertainty Quantification in Data Fusion Classifier for Ship-Wake Detection

Bibliographic Details
Main Authors: Maice Costa, Daniel Sobien, Ria Garg, Winnie Cheung, Justin Krometis, Justin A. Kauffman
Format: Article
Language:English
Published: MDPI AG 2024-12-01
Series:Remote Sensing
Subjects:
Online Access:https://www.mdpi.com/2072-4292/16/24/4669
author Maice Costa
Daniel Sobien
Ria Garg
Winnie Cheung
Justin Krometis
Justin A. Kauffman
collection DOAJ
description Using deep learning model predictions requires understanding not only the model's confidence but also its uncertainty, so we know when to trust the prediction or to require support from a human. In this study, we used Monte Carlo dropout (MCDO) to characterize the uncertainty of deep learning image classification algorithms, including feature fusion models, on simulated synthetic aperture radar (SAR) images of persistent ship wakes. Compared with a baseline, we used the distribution of predictions from dropout with simple mean value ensembling and the Kolmogorov-Smirnov (KS) test to classify in-domain and out-of-domain (OOD) test samples, created by rotating images to angles not present in the training data. Our objective was to improve classification robustness and to identify OOD images at test time. Mean value ensembling did not improve performance over the baseline: the Matthews correlation coefficient (MCC) differed by −1.05% from the baseline model, averaged across all SAR bands. The KS test, by contrast, improved the MCC by +12.5% and identified the majority of OOD samples. Leveraging the full distribution of predictions improved classification robustness and allowed labeling test images as OOD. The feature fusion models, however, did not improve performance over the single-SAR-band models, demonstrating that it is best to rely on the highest-quality data source available (in our case, C-band).
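The MC-dropout procedure the abstract describes (keep dropout active at inference and treat repeated stochastic forward passes as samples from a predictive distribution, then mean-ensemble them) can be sketched with a toy NumPy stand-in for a classifier; the linear layer, dropout rate, and 200-pass count here are illustrative placeholders, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained two-class wake classifier with dropout:
# a fixed linear layer whose input units are randomly zeroed on every
# forward pass, so repeated calls yield a *distribution* of predictions.
W = rng.normal(size=(16, 2))

def mc_forward(x, p_drop=0.5):
    mask = rng.random(16) > p_drop            # dropout stays ON at test time
    logits = (x * mask) @ W / (1.0 - p_drop)  # inverted-dropout rescaling
    e = np.exp(logits - logits.max())
    return e / e.sum()                        # softmax probabilities

x = rng.normal(size=16)                       # one "image" feature vector
samples = np.array([mc_forward(x) for _ in range(200)])  # shape (200, 2)

mean_pred = samples.mean(axis=0)              # simple mean-value ensemble
predicted_class = int(mean_pred.argmax())
uncertainty = float(samples[:, 1].std())      # spread of the "wake" score
```

The spread of `samples` is the uncertainty signal: a tight distribution suggests a trustworthy prediction, a wide one suggests deferring to a human.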
format Article
id doaj-art-af0aaf98c70f4f6b841d2e6c44909042
institution DOAJ
issn 2072-4292
language English
publishDate 2024-12-01
publisher MDPI AG
record_format Article
series Remote Sensing
doi 10.3390/rs16244669
affiliation Maice Costa: National Security Institute, Virginia Tech, Arlington, VA 22203, USA
Daniel Sobien: National Security Institute, Virginia Tech, Arlington, VA 22203, USA
Ria Garg: Department of Aerospace and Ocean Engineering, Virginia Tech, Blacksburg, VA 24061, USA
Winnie Cheung: Department of Aerospace and Ocean Engineering, Virginia Tech, Blacksburg, VA 24061, USA
Justin Krometis: National Security Institute, Virginia Tech, Blacksburg, VA 24060, USA
Justin A. Kauffman: National Security Institute, Virginia Tech, Arlington, VA 22203, USA
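The KS-based OOD check described in the abstract (compare the distribution of a test image's dropout predictions against a reference distribution from in-domain data) can be sketched with SciPy's two-sample `ks_2samp`; the beta-distributed score samples below are synthetic placeholders standing in for real dropout-pass scores:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

# Hypothetical dropout-prediction score distributions (200 stochastic
# passes each): in-domain scores cluster tightly near the reference,
# while a rotated (OOD) image's scores are shifted and more dispersed.
reference = rng.beta(8, 2, size=200)   # pooled in-domain validation scores
in_domain = rng.beta(8, 2, size=200)   # a test image drawn from the same regime
ood       = rng.beta(2, 2, size=200)   # a test image from an unseen rotation

def is_ood(scores, ref, alpha=0.01):
    """Flag a test image as out-of-domain when the two-sample KS test
    rejects that its score distribution matches the reference."""
    stat, p = ks_2samp(scores, ref)
    return bool(p < alpha)

flag_in = is_ood(in_domain, reference)
flag_out = is_ood(ood, reference)
```

Unlike mean ensembling, this uses the full shape of the prediction distribution, which is what the abstract credits for the +12.5% MCC improvement.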
title Uncertainty Quantification in Data Fusion Classifier for Ship-Wake Detection
topic data fusion
uncertainty quantification
simulated data
deep neural network
synthetic aperture radar
url https://www.mdpi.com/2072-4292/16/24/4669