Improving classifier decision boundaries and interpretability using nearest neighbors
| Main Authors: | , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Springer, 2025-07-01 |
| Series: | Discover Artificial Intelligence |
| Subjects: | |
| Online Access: | https://doi.org/10.1007/s44163-025-00369-8 |
| Summary: | Abstract Neural networks often fail to learn optimal decision boundaries. In this study, we show that these boundaries are typically situated in regions with low training data density, making them highly sensitive to a small number of samples, which, in turn, increases the risk of overfitting. To address this issue, we propose a simple algorithm that performs a weighted average of a sample’s prediction and those of its nearest neighbors (computed in latent space), leading to minor but favorable outcomes across a variety of important performance measures for neural networks. Through diverse evaluations using both self-trained and state-of-the-art pre-trained convolutional neural networks, we show that our framework enhances (i) resistance to label noise, (ii) robustness against adversarial attacks, (iii) classification accuracy, and offers novel approaches for (iv) interpretability. Notably, the interpretability aspect is particularly relevant to the Explainable AI community, as our approach is broadly applicable to any network architecture. Although the improvements may not be large in all four areas, the proposed solution is conceptually simple and requires no modifications to the network architecture, training procedure, or dataset. Unlike prior methods, our approach achieves improvements without introducing trade-offs or necessitating architectural adaptations, while providing actionable insights and theoretical analysis to support its efficacy. |
|---|---|
| ISSN: | 2731-0809 |
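The summary describes the paper's core mechanism: for each sample, the classifier's prediction is blended with the predictions of its nearest neighbors, where neighbors are found in the network's latent space. Below is a minimal sketch of that idea, assuming Euclidean distance in latent space, uniform neighbor weighting, and a single mixing coefficient `alpha` (the paper's exact weighting scheme and hyperparameters are not given in this record):

```python
import numpy as np

def knn_smoothed_predictions(z_query, p_query, z_train, p_train, k=5, alpha=0.5):
    """Blend each sample's own prediction with the mean prediction of its
    k nearest neighbors in latent space.

    z_query: (n, d) latent features of the query samples
    p_query: (n, c) class probabilities predicted for the query samples
    z_train: (m, d) latent features of the reference (training) samples
    p_train: (m, c) class probabilities predicted for the reference samples
    alpha:   weight on the sample's own prediction; (1 - alpha) on neighbors
    """
    # Pairwise squared Euclidean distances between query and reference latents
    d2 = ((z_query[:, None, :] - z_train[None, :, :]) ** 2).sum(axis=-1)
    # Indices of the k nearest reference neighbors for each query sample
    nn = np.argsort(d2, axis=1)[:, :k]
    # Mean neighbor prediction, then weighted average with the own prediction
    p_nn = p_train[nn].mean(axis=1)
    return alpha * p_query + (1 - alpha) * p_nn
```

Because the smoothing operates purely on latent features and output probabilities, it requires no change to the network architecture, training procedure, or dataset, which matches the claim in the summary.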