Improving classifier decision boundaries and interpretability using nearest neighbors
Abstract: Neural networks often fail to learn optimal decision boundaries. In this study, we show that these boundaries are typically situated in regions with low training data density, making them highly sensitive to a small number of samples, which, in turn, increases the risk of overfitting. To ad...
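The abstract's central claim, that learned decision boundaries tend to sit in regions of low training-data density, can be illustrated with a small sketch. This is not the paper's method; the dataset, model, neighborhood size, and probability threshold below are illustrative assumptions. The idea is to train a simple classifier, sample points where its predicted probability is near 0.5 (i.e. near the boundary), and compare their mean k-nearest-neighbor distance to the training data against the average over the whole input grid.

```python
# Hypothetical sketch (not the paper's method): probe whether a classifier's
# decision boundary lies in low-density regions by comparing k-NN distances
# of near-boundary points against the grid-wide average.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import NearestNeighbors

# Toy two-class dataset and a small neural network classifier (assumed setup).
X, y = make_moons(n_samples=500, noise=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000,
                    random_state=0).fit(X, y)

# Sample a dense grid over the input space and keep points whose predicted
# class probability is close to 0.5, i.e. points near the decision boundary.
xx, yy = np.meshgrid(np.linspace(X[:, 0].min(), X[:, 0].max(), 200),
                     np.linspace(X[:, 1].min(), X[:, 1].max(), 200))
grid = np.c_[xx.ravel(), yy.ravel()]
proba = clf.predict_proba(grid)[:, 1]
near_boundary = grid[np.abs(proba - 0.5) < 0.02]

# Mean distance to the k nearest training samples as a simple
# (inverse) density estimate: larger distance = lower local density.
knn = NearestNeighbors(n_neighbors=10).fit(X)
d_boundary = knn.kneighbors(near_boundary)[0].mean()
d_overall = knn.kneighbors(grid)[0].mean()
print(f"mean k-NN distance near the boundary: {d_boundary:.3f}")
print(f"mean k-NN distance over the grid:     {d_overall:.3f}")
```

If the boundary indeed passes through sparse regions, the near-boundary distance should exceed the grid-wide average; how the paper then uses nearest neighbors to reshape the boundary is described in the full text linked below.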
| Main Authors: | |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Springer, 2025-07-01 |
| Series: | Discover Artificial Intelligence |
| Subjects: | |
| Online Access: | https://doi.org/10.1007/s44163-025-00369-8 |