Towards Trustworthy AI: Analyzing Model Uncertainty through Monte Carlo Dropout and Noise Injection

Bibliographic Details
Main Authors: Chern Chao Tai, Wesam Al Amiri, Abhijeet Solanki, Douglas Alan Talbert, Nan Guo, Syed Rafay Hasan
Format: Article
Language: English
Published: LibraryPress@UF, 2025-05-01
Series: Proceedings of the International Florida Artificial Intelligence Research Society Conference
Online Access: https://journals.flvc.org/FLAIRS/article/view/138945
Description
Summary: Autonomous vehicles (AVs) require intelligent computer vision (CV) to perform critical navigational perception tasks. To achieve this, sensors such as cameras, LiDAR, and radar provide data to artificial intelligence (AI) systems. Continuous monitoring of these intelligent CV systems is required to achieve a trustworthy AI system in a zero-trust (ZT) environment. This paper introduces a novel two-stage framework that provides a mechanism for achieving this monitoring in a ZT environment. We combine Monte Carlo (MC) dropout with one-class classification techniques to propose a framework for trustworthy AI systems in AVs. Through extensive experimentation with varying noise levels and numbers of MC samples, we demonstrate that our framework achieves promising results in anomaly detection. In particular, our framework exposes the trade-off between detection accuracy and computational overhead: with an MC sample size of 5, we achieve a high throughput of 46.4 FPS, but accuracy drops to as low as 61.5%. This study provides valuable insights for real-world AV applications.
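As a rough illustration of the mechanism the summary describes, the sketch below runs MC dropout (5 stochastic forward passes, matching the MC size reported above) to obtain a predictive-variance uncertainty signal, injects Gaussian noise into the inputs, and fits a one-class classifier on the "normal" uncertainty profiles to flag anomalous inputs. The SmallCNN architecture, the 0.1 noise level, and the choice of scikit-learn's OneClassSVM are illustrative assumptions; the record does not specify the authors' actual detection backbone, noise model, or one-class method.

```python
# Hedged sketch of the two-stage idea: MC-dropout uncertainty (stage 1)
# feeding a one-class classifier (stage 2). Architecture and parameters
# here are assumptions for illustration, not the paper's exact setup.
import torch
import torch.nn as nn
from sklearn.svm import OneClassSVM

class SmallCNN(nn.Module):
    """Toy classifier with dropout so MC sampling has stochasticity."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(p=0.25),            # stays active during MC passes
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def mc_dropout_predict(model: nn.Module, x: torch.Tensor, mc_samples: int = 5):
    """Run `mc_samples` stochastic forward passes with dropout enabled.

    Returns the mean softmax prediction and the predictive variance,
    which serves as the per-input uncertainty signal (stage 1).
    """
    model.train()  # keep dropout on; for real models, freeze BatchNorm separately
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(mc_samples)]
        )                                    # (mc_samples, batch, classes)
    return probs.mean(dim=0), probs.var(dim=0)

# --- Stage 1: uncertainty features from clean and noise-injected inputs ---
model = SmallCNN()
clean = torch.randn(32, 3, 32, 32)
noisy = clean + 0.1 * torch.randn_like(clean)  # Gaussian noise injection (level assumed)

_, var_clean = mc_dropout_predict(model, clean, mc_samples=5)
_, var_noisy = mc_dropout_predict(model, noisy, mc_samples=5)

# --- Stage 2: one-class classifier fit on "normal" uncertainty profiles ---
occ = OneClassSVM(nu=0.1).fit(var_clean.numpy())
flags = occ.predict(var_noisy.numpy())         # -1 marks anomalous inputs
print("flagged as anomalous:", int((flags == -1).sum()), "of", len(flags))
```

Raising `mc_samples` sharpens the variance estimate at the cost of extra forward passes per frame, which is the accuracy-versus-FPS trade-off the abstract reports.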
ISSN: 2334-0754
2334-0762