Bringing Machine Learning Classifiers Into Critical Cyber-Physical Systems: A Matter of Design
Machine Learning (ML) models are increasingly used by domain experts to tackle classification tasks, aiming for high predictive accuracy. However, classifiers are inherently prone to misclassifications, especially when they encounter unfamiliar, previously unseen or out-of-distribution input data.
| Main Authors: | Burcu Sayin, Tommaso Zoppi, Nicolo Marchini, Fahad Ahmed Khokhar, Andrea Passerini |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Subjects: | Critical infrastructures; cost-sensitive learning; cyber-physical systems; machine learning; prediction rejection; safety |
| Online Access: | https://ieeexplore.ieee.org/document/10994486/ |
| _version_ | 1850132605993746432 |
|---|---|
| author | Burcu Sayin, Tommaso Zoppi, Nicolo Marchini, Fahad Ahmed Khokhar, Andrea Passerini |
| author_sort | Burcu Sayin |
| collection | DOAJ |
| description | Machine Learning (ML) models are increasingly used by domain experts to tackle classification tasks, aiming for high predictive accuracy. However, classifiers are inherently prone to misclassifications, especially when they encounter unfamiliar, previously unseen or out-of-distribution input data. This creates significant challenges for their deployment in critical Cyber-Physical Systems (CPSs)—such as autonomous vehicles, industrial control systems, and medical devices—where misclassifications can lead to severe consequences for people, infrastructure, and the environment. This paper argues that ML classifiers intended for critical applications should not be designed or evaluated in isolation. Instead, Critical System Classifiers (CSCs) primarily aim at reducing misclassifications by rejecting uncertain predictions and triggering mitigation strategies integrated into the encompassing CPS. We present a high-level CSC architecture that supports black-box classifier integration, preprocessing for unknown detection, post-hoc calibration, and cost-sensitive thresholding. We emphasize the need for cost-aware evaluation metrics that explicitly account for rejected predictions, enabling a more realistic assessment of classifier performance in critical systems. We validate our approach through experiments on tabular datasets related to failure prediction, intrusion detection, and error detection—common use cases for classifiers in CPSs. Key findings include: 1) cost-sensitive evaluation often leads to the selection of different classifiers than standard metrics suggest; 2) tree-based models outperform statistical ones in classification tasks; 3) calibration and rejection mechanisms provide a robust notion of confidence; and 4) combining multiple uncertainty-based rejection strategies achieves a favorable trade-off between high accuracy, low rejection rates, and cost. All experiments and implementations are publicly available via our CINNABAR GitHub repository. Overall, this study offers a system-level perspective and practical software architecture for safely deploying ML classifiers in critical CPS domains, paving the way toward more trustworthy and certifiable AI in real-world infrastructures. |
| format | Article |
| id | doaj-art-bac8e6a079074e85893ee3dfc2d7d04c |
| institution | OA Journals |
| issn | 2169-3536 |
| language | English |
| publishDate | 2025-01-01 |
| publisher | IEEE |
| record_format | Article |
| series | IEEE Access |
| doi | 10.1109/ACCESS.2025.3568501 |
| container | IEEE Access, vol. 13, pp. 94858-94877, 2025 (IEEE Xplore document 10994486) |
| author_details | Burcu Sayin (ORCID 0000-0001-6804-127X), Department of Information Engineering and Computer Science, University of Trento, Povo, Italy; Tommaso Zoppi (ORCID 0000-0001-9820-6047), Department of Mathematics and Computer Science, University of Florence, Florence, Italy; Nicolo Marchini (ORCID 0009-0001-9632-1794), Department of Information Engineering and Computer Science, University of Trento, Povo, Italy; Fahad Ahmed Khokhar (ORCID 0009-0008-7890-4639), Department of Mathematics and Computer Science, University of Florence, Florence, Italy; Andrea Passerini, Department of Information Engineering and Computer Science, University of Trento, Povo, Italy |
| title | Bringing Machine Learning Classifiers Into Critical Cyber-Physical Systems: A Matter of Design |
| topic | Critical infrastructures; cost-sensitive learning; cyber-physical systems; machine learning; prediction rejection; safety |
| url | https://ieeexplore.ieee.org/document/10994486/ |
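
The description above outlines a rejection-based design: a black-box classifier is calibrated post hoc, and predictions whose confidence falls below a cost-chosen threshold are rejected so the encompassing CPS can trigger a mitigation strategy instead of acting on a doubtful output. The sketch below illustrates that idea in plain scikit-learn terms; it is not the paper's CINNABAR implementation, and the class name `RejectingClassifier`, the cost values, and the threshold search are illustrative assumptions.

```python
import numpy as np
from sklearn.calibration import CalibratedClassifierCV


class RejectingClassifier:
    """Wrap any scikit-learn style classifier, calibrate it post hoc, and
    reject predictions whose calibrated confidence falls below a cost-chosen threshold."""

    def __init__(self, base_estimator, cost_error=10.0, cost_reject=1.0):
        self.base_estimator = base_estimator
        self.cost_error = cost_error    # assumed cost of acting on a wrong prediction
        self.cost_reject = cost_reject  # assumed cost of falling back to a mitigation
        self.threshold_ = 0.5

    def fit(self, X_train, y_train, X_val, y_val):
        y_val = np.asarray(y_val)
        # Post-hoc calibration of the black-box classifier (here: Platt/sigmoid scaling).
        self.model_ = CalibratedClassifierCV(self.base_estimator, method="sigmoid", cv=3)
        self.model_.fit(X_train, y_train)
        # Cost-sensitive thresholding: pick the confidence threshold that minimizes
        # expected cost on held-out data, where a rejection is cheaper than an error.
        proba = self.model_.predict_proba(X_val)
        conf = proba.max(axis=1)
        pred = self.model_.classes_[proba.argmax(axis=1)]
        best_cost, best_t = float("inf"), 0.5
        for t in np.linspace(0.5, 0.99, 50):
            accepted = conf >= t
            errors = np.sum(pred[accepted] != y_val[accepted])
            rejected = np.sum(~accepted)
            cost = (errors * self.cost_error + rejected * self.cost_reject) / len(y_val)
            if cost < best_cost:
                best_cost, best_t = cost, t
        self.threshold_ = best_t
        return self

    def predict_or_reject(self, X):
        """Return predicted labels, with None where the prediction is rejected
        (a None should trigger the CPS mitigation strategy)."""
        proba = self.model_.predict_proba(X)
        conf = proba.max(axis=1)
        pred = self.model_.classes_[proba.argmax(axis=1)]
        return [p if c >= self.threshold_ else None for p, c in zip(pred, conf)]
```

Any scikit-learn compatible classifier (for instance a random forest or gradient-boosted trees, in line with the tree-based models the abstract highlights) can be passed as `base_estimator`.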
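The abstract also calls for evaluation metrics that explicitly account for rejected predictions. Below is a minimal sketch of such a cost-aware score; the cost values are placeholders rather than the ones used in the paper, and the function name is illustrative.

```python
def expected_cost(y_true, y_pred_or_reject,
                  cost_error=10.0, cost_reject=1.0, cost_correct=0.0):
    """Average per-sample cost; None in y_pred_or_reject marks a rejected prediction."""
    total = 0.0
    for truth, pred in zip(y_true, y_pred_or_reject):
        if pred is None:
            total += cost_reject   # the encompassing CPS falls back to a mitigation strategy
        elif pred != truth:
            total += cost_error    # an unrejected misclassification reaches the system
        else:
            total += cost_correct  # correct, accepted prediction
    return total / len(y_true)


# Example: a classifier that rejects its doubtful cases can beat a "more accurate"
# one that never rejects, which is the kind of ranking flip the abstract reports.
print(expected_cost([0, 1, 1, 0], [0, None, 1, 0]))  # 0.25: one rejection, no errors
print(expected_cost([0, 1, 1, 0], [0, 0, 1, 0]))     # 2.5: one costly misclassification
```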