Towards Trustworthy AI: Analyzing Model Uncertainty through Monte Carlo Dropout and Noise Injection

Autonomous vehicles (AVs) require intelligent computer vision (CV) to perform critical navigational perception tasks. To achieve this, sensors such as cameras, LiDAR, and radar provide data to artificial intelligence (AI) systems. Continuous monitoring of these intelligent CV systems is required to achieve trustworthy AI in a zero-trust (ZT) environment. This paper introduces a novel two-stage framework that provides a mechanism for such monitoring in a ZT environment. The framework combines Monte Carlo (MC) dropout with One-Class Classification techniques to move toward trustworthy AI systems for AVs. Through extensive experimentation with varying noise levels and numbers of MC samples, we demonstrate that the framework achieves promising anomaly-detection results. In particular, it exposes the trade-off between detection accuracy and computational overhead: with an MC sample size of 5, the framework reaches a high frame rate of 46.4 FPS while accuracy drops to 61.5%. This study provides valuable insights for real-world AV applications.
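The MC dropout stage described in the abstract can be sketched as follows. This is a minimal, generic PyTorch illustration, assuming a small hypothetical classifier (SmallClassifier) and an MC sample size of 5; it is not the authors' actual model, and the paper's One-Class Classification stage is not shown.

import torch
import torch.nn as nn

class SmallClassifier(nn.Module):
    """Toy classifier with a dropout layer, standing in for a perception model."""
    def __init__(self, in_dim=128, n_classes=10, p_drop=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64),
            nn.ReLU(),
            nn.Dropout(p_drop),   # kept stochastic at inference for MC dropout
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def mc_dropout_predict(model, x, mc_samples=5):
    """Run several stochastic forward passes with dropout enabled and return
    the mean softmax prediction and its per-class variance (uncertainty)."""
    model.train()  # keeps dropout active; gradients are still disabled below
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(mc_samples)]
        )  # shape: (mc_samples, batch, n_classes)
    return probs.mean(dim=0), probs.var(dim=0)

model = SmallClassifier()
x = torch.randn(4, 128)  # dummy batch of sensor-derived features
mean_pred, uncertainty = mc_dropout_predict(model, x, mc_samples=5)
print(mean_pred.argmax(dim=-1), uncertainty.sum(dim=-1))

The per-sample uncertainty could then be fed to a one-class anomaly detector trained on clean data. Larger MC sample sizes smooth the uncertainty estimate but cost throughput, which is the accuracy-versus-FPS trade-off the abstract reports.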

Bibliographic Details
Main Authors: Chern Chao Tai, Wesam Al Amiri, Abhijeet Solanki, Douglas Alan Talbert, Nan Guo, Syed Rafay Hasan
Author Affiliation: Tennessee Technological University
Format: Article
Language: English
Published: LibraryPress@UF, 2025-05-01
Series: Proceedings of the International Florida Artificial Intelligence Research Society Conference, vol. 38, no. 1
ISSN: 2334-0754, 2334-0762
DOI: 10.32473/flairs.38.1.138945
Online Access: https://journals.flvc.org/FLAIRS/article/view/138945