Improving long‐tail classification via decoupling and regularisation
| Main Authors: | Shuzheng Gao, Chaozheng Wang, Cuiyun Gao, Wenjian Luo, Peiyi Han, Qing Liao, Guandong Xu |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Wiley, 2025-02-01 |
| Series: | CAAI Transactions on Intelligence Technology |
| Subjects: | computer vision; image classification; long‐tailed data; machine learning |
| Online Access: | https://doi.org/10.1049/cit2.12374 |
| Field | Value |
|---|---|
| author | Shuzheng Gao Chaozheng Wang Cuiyun Gao Wenjian Luo Peiyi Han Qing Liao Guandong Xu |
| collection | DOAJ |
| description | Abstract Real‐world data always exhibit an imbalanced and long‐tailed distribution, which leads to poor performance for neural network‐based classification. Existing methods mainly tackle this problem by reweighting the loss function or rebalancing the classifier. However, one crucial aspect overlooked by previous research studies is the imbalanced feature space problem caused by the imbalanced angle distribution. In this paper, the authors shed light on the significance of the angle distribution in achieving a balanced feature space, which is essential for improving model performance under long‐tailed distributions. Nevertheless, it is challenging to effectively balance both the classifier norms and angle distribution due to problems such as the low feature norm. To tackle these challenges, the authors first thoroughly analyse the classifier and feature space by decoupling the classification logits into three key components: classifier norm (i.e. the magnitude of the classifier vector), feature norm (i.e. the magnitude of the feature vector), and cosine similarity between the classifier vector and feature vector. In this way, the authors analyse the change of each component in the training process and reveal three critical problems that should be solved, that is, the imbalanced angle distribution, the lack of feature discrimination, and the low feature norm. Drawing from this analysis, the authors propose a novel loss function that incorporates hyperspherical uniformity, additive angular margin, and feature norm regularisation. Each component of the loss function addresses a specific problem and synergistically contributes to achieving a balanced classifier and feature space. The authors conduct extensive experiments on three popular benchmark datasets including CIFAR‐10/100‐LT, ImageNet‐LT, and iNaturalist 2018. 
The experimental results demonstrate that the authors’ loss function outperforms several previous state‐of‐the‐art methods in addressing the challenges posed by imbalanced and long‐tailed datasets, that is, by improving upon the best‐performing baselines on CIFAR‐100‐LT by 1.34, 1.41, 1.41 and 1.33, respectively. |
| format | Article |
| id | doaj-art-d1dc2aac5e8b44b38bb1b0062ece9972 |
| institution | DOAJ |
| issn | 2468-2322 |
| language | English |
| publishDate | 2025-02-01 |
| publisher | Wiley |
| record_format | Article |
| series | CAAI Transactions on Intelligence Technology |
| affiliations | Shuzheng Gao, Chaozheng Wang, Cuiyun Gao, Wenjian Luo, Peiyi Han, Qing Liao: School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen, China; Guandong Xu: Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, New South Wales, Australia |
| citation | CAAI Transactions on Intelligence Technology, vol. 10, no. 1 (2025-02-01), pp. 62–71. https://doi.org/10.1049/cit2.12374 |
| title | Improving long‐tail classification via decoupling and regularisation |
| topic | computer vision; image classification; long‐tailed data; machine learning |
| url | https://doi.org/10.1049/cit2.12374 |
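The decoupling described in the abstract can be made concrete with a short, illustrative sketch (not the authors' code): a classification logit ⟨w, f⟩ factors exactly into the three components the paper analyses, the classifier norm ‖w‖, the feature norm ‖f‖, and the cosine of the angle between the classifier and feature vectors.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def decompose_logit(w, f):
    """Split the logit <w, f> into (classifier norm, feature norm, cosine).

    The product of the three components recovers the original logit,
    which is what lets each one be analysed and regularised separately.
    """
    w_norm = math.sqrt(dot(w, w))   # magnitude of the classifier vector
    f_norm = math.sqrt(dot(f, f))   # magnitude of the feature vector
    cos_sim = dot(w, f) / (w_norm * f_norm)  # angle term
    return w_norm, f_norm, cos_sim

# Sanity check on a toy classifier/feature pair.
w = [3.0, 4.0]   # classifier (weight) vector for one class
f = [1.0, 2.0]   # feature vector for one sample
w_norm, f_norm, cos_sim = decompose_logit(w, f)
assert math.isclose(w_norm * f_norm * cos_sim, dot(w, f))
```

Under a long-tailed distribution the paper's analysis tracks how each of these three terms drifts during training, rather than looking only at the combined logit.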
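The three-part loss named in the abstract can likewise be sketched. Everything below is a hedged illustration under assumed functional forms, not the authors' exact formulation: the additive angular margin is written ArcFace-style as cos(θ_y + m), hyperspherical uniformity as a pairwise energy over class weight vectors, and feature-norm regularisation as a hinge pushing ‖f‖ toward an assumed target; `total_loss`, `lam_u`, `lam_n`, and `target` are hypothetical names and knobs.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

def cosine(u, v):
    return dot(u, v) / (norm(u) * norm(v))

def margin_softmax_loss(weights, f, y, margin=0.5, scale=16.0):
    """Cross-entropy with an additive angular margin on the target class."""
    cos = [cosine(w, f) for w in weights]
    theta_y = math.acos(max(-1.0, min(1.0, cos[y])))
    logits = [scale * c for c in cos]
    logits[y] = scale * math.cos(theta_y + margin)  # shrink the target logit
    m = max(logits)                                  # stable log-sum-exp
    log_sum = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_sum - logits[y]

def uniformity_penalty(weights):
    """Push class vectors apart on the hypersphere via pairwise energy."""
    k, total = len(weights), 0.0
    for i in range(k):
        for j in range(i + 1, k):
            total += math.exp(2.0 * cosine(weights[i], weights[j]))
    return total / (k * (k - 1) / 2)

def feature_norm_penalty(f, target=10.0):
    """Discourage collapsed (low-norm) features with a hinge penalty."""
    return max(0.0, target - norm(f)) ** 2

def total_loss(weights, f, y, lam_u=0.1, lam_n=0.01):
    # One term per problem the analysis identifies: imbalanced angles,
    # weak feature discrimination, and low feature norm.
    return (margin_softmax_loss(weights, f, y)
            + lam_u * uniformity_penalty(weights)
            + lam_n * feature_norm_penalty(f))
```

Because the margin shrinks only the target-class logit, the sample must be separated by an extra angular gap before its loss drops, which is what gives tail classes a more discriminative feature region.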