Measuring the Impact of Scene Level Objects: A Novel Method for Quantitative Explanations
Although precision, recall, and other common metrics can provide a useful window into the performance of an object detection model, they lack a deeper view of the model’s decision process. Regardless of the quality of the training data and process, the features that an object detection model learns cannot be guaranteed. A model may learn a relationship between certain background context, i.e., scene level objects, and the presence of the labeled classes. Furthermore, standard performance metrics would not identify this phenomenon. This paper presents a black box explainability method for additional verification of object detection models by finding the impact of scene level objects on the identification of the classes within the image. By comparing the mean Average Precision (mAP) of a model on test data with and without certain scene level objects, the contributions of these objects to the model’s performance become clearer. This work presents two experiments to test the method. The experiment results provide quantitative explanations of the object detection model’s decision process, enabling a deeper understanding of the model’s performance.
Saved in:
| Main Authors: | Lynn Vonderhaar, Timothy Elvira, Omar Ochoa |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | LibraryPress@UF, 2025-05-01 |
| Series: | Proceedings of the International Florida Artificial Intelligence Research Society Conference |
| Subjects: | Explainability; Machine Learning; Black Box Model; Scene Level Objects; Context |
| Online Access: | https://journals.flvc.org/FLAIRS/article/view/138922 |
| _version_ | 1849321668115169280 |
|---|---|
| author | Lynn Vonderhaar; Timothy Elvira; Omar Ochoa |
| author_sort | Lynn Vonderhaar |
| collection | DOAJ |
| description | Although precision, recall, and other common metrics can provide a useful window into the performance of an object detection model, they lack a deeper view of the model’s decision process. Regardless of the quality of the training data and process, the features that an object detection model learns cannot be guaranteed. A model may learn a relationship between certain background context, i.e., scene level objects, and the presence of the labeled classes. Furthermore, standard performance metrics would not identify this phenomenon. This paper presents a black box explainability method for additional verification of object detection models by finding the impact of scene level objects on the identification of the classes within the image. By comparing the mean Average Precision (mAP) of a model on test data with and without certain scene level objects, the contributions of these objects to the model’s performance become clearer. This work presents two experiments to test the method. The experiment results provide quantitative explanations of the object detection model’s decision process, enabling a deeper understanding of the model’s performance. |
|
| format | Article |
| id | doaj-art-1b0a82c669c244bca78c70f5f058cfef |
| institution | Kabale University |
| issn | 2334-0754; 2334-0762 |
| language | English |
| publishDate | 2025-05-01 |
| publisher | LibraryPress@UF |
| record_format | Article |
| series | Proceedings of the International Florida Artificial Intelligence Research Society Conference |
| spelling | Lynn Vonderhaar, Timothy Elvira, Omar Ochoa (Embry-Riddle Aeronautical University). "Measuring the Impact of Scene Level Objects: A Novel Method for Quantitative Explanations." Proceedings of the International Florida Artificial Intelligence Research Society Conference 38(1), 2025-05-01. LibraryPress@UF. ISSN 2334-0754, 2334-0762. DOI: 10.32473/flairs.38.1.138922. Record doaj-art-1b0a82c669c244bca78c70f5f058cfef (indexed 2025-08-20T03:49:41Z). https://journals.flvc.org/FLAIRS/article/view/138922 |
| title | Measuring the Impact of Scene Level Objects: A Novel Method for Quantitative Explanations |
| topic | Explainability; Machine Learning; Black Box Model; Scene Level Objects; Context |
| url | https://journals.flvc.org/FLAIRS/article/view/138922 |
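The evaluation idea described in the abstract — scoring a detector's mAP on test data with and without a given scene-level object, and taking the difference as that object's impact — can be sketched roughly as follows. This is a minimal illustration, not the authors' code: the data is toy, all names are hypothetical, and AP is computed as the plain area under the precision-recall curve rather than any particular benchmark's interpolation.

```python
def average_precision(detections, num_gt):
    """AP for one class from (confidence, is_true_positive) pairs
    and the number of ground-truth boxes for that class."""
    detections = sorted(detections, key=lambda d: d[0], reverse=True)
    tp = fp = 0
    ap, prev_recall = 0.0, 0.0
    for _, is_tp in detections:
        if is_tp:
            tp += 1
        else:
            fp += 1
        precision = tp / (tp + fp)
        recall = tp / num_gt
        ap += precision * (recall - prev_recall)  # area under the P-R curve
        prev_recall = recall
    return ap

def mean_ap(per_class):
    """mAP over a dict of {class_name: (detections, num_gt)}."""
    return sum(average_precision(d, n) for d, n in per_class.values()) / len(per_class)

# Toy results: the same detector scored on images that contain a certain
# scene-level object, and on the same images with that object removed.
with_object = {"car": ([(0.9, True), (0.8, True), (0.6, False)], 2)}
without_object = {"car": ([(0.9, True), (0.5, False), (0.4, False)], 2)}

# A positive impact means the model leaned on the scene-level object.
impact = mean_ap(with_object) - mean_ap(without_object)  # 0.5 for this toy data
```

In this toy case the detector loses one of its two true positives when the scene-level object is removed, so mAP drops from 1.0 to 0.5, quantifying the object's contribution to performance.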