Understanding Software Defect Prediction Through eXplainable Neural Additive Models


Bibliographic Details
Main Authors: Ruiqi He, Yong Li, Chi Sun
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10988540/
Description
Summary: Software defect prediction, which leverages machine learning techniques to proactively identify potential defects in software systems, plays a crucial role in enhancing software quality and reliability. However, a major challenge in this field is the opacity of the prediction process and the lack of interpretability of the results, which significantly limits practical adoption. To address this issue, this paper introduces eXplainable Neural Additive Models (XNAMs). The proposed model constructs single-feature inputs for software defect data, enabling transparent visualization of the impact of individual features on prediction outcomes. It also employs feature gradient analysis, examining the average absolute values of feature gradients during forward propagation, to quantify and compare each feature's contribution to the decision-making process. Furthermore, feature interaction analysis is conducted to uncover nonlinear interactions between different features. Experimental evaluations on six software projects demonstrate that XNAMs outperform existing models in prediction performance while offering clear explanations of feature contributions, ensuring high transparency and practical applicability.
ISSN: 2169-3536
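The summary describes two ideas that can be sketched compactly: a neural additive model, where each feature passes through its own small subnetwork and the outputs are summed, and gradient-based importance, taken as the average absolute gradient of the prediction with respect to each feature. The sketch below is an illustrative toy (the `TinyNAM` class, its random untrained weights, and the single-hidden-layer shape are assumptions for demonstration), not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyNAM:
    """Minimal neural additive model: one small subnetwork per feature,
    with outputs summed. Weights are random and untrained; this only
    illustrates the additive structure and gradient-based importance."""

    def __init__(self, n_features, hidden=8):
        # Each feature j gets its own 1 -> hidden -> 1 subnetwork.
        self.W1 = rng.normal(size=(n_features, hidden))
        self.b1 = rng.normal(size=(n_features, hidden))
        self.W2 = rng.normal(size=(n_features, hidden))

    def feature_outputs(self, X):
        # h[n, j, k] = tanh(x[n, j] * W1[j, k] + b1[j, k])
        h = np.tanh(X[:, :, None] * self.W1[None] + self.b1[None])
        # Per-feature scalar contributions, shape (n_samples, n_features).
        return (h * self.W2[None]).sum(-1)

    def predict(self, X):
        # Additive combination: the prediction is the sum of
        # single-feature contributions, which is what makes each
        # feature's effect directly visualizable.
        return self.feature_outputs(X).sum(-1)

    def feature_gradients(self, X):
        # Analytic d(prediction)/d(x_j):
        # sum_k W2[j, k] * (1 - tanh^2(.)) * W1[j, k]
        h = np.tanh(X[:, :, None] * self.W1[None] + self.b1[None])
        return ((1 - h**2) * self.W1[None] * self.W2[None]).sum(-1)

X = rng.normal(size=(100, 5))
nam = TinyNAM(n_features=5)
# Importance in the spirit of the paper's feature gradient analysis:
# mean absolute gradient of the output w.r.t. each feature.
importance = np.abs(nam.feature_gradients(X)).mean(axis=0)
```

Because the model is additive over features, each subnetwork's output can be plotted against its input to show that feature's effect in isolation, which is the transparency property the abstract highlights.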