Enhancing Multi-Label Chest X-Ray Classification Using an Improved Ranking Loss

Bibliographic Details
Main Authors: Muhammad Shehzad Hanif, Muhammad Bilal, Abdullah H. Alsaggaf, Ubaid M. Al-Saggaf
Format: Article
Language: English
Published: MDPI AG 2025-05-01
Series: Bioengineering
Online Access: https://www.mdpi.com/2306-5354/12/6/593
Description
Summary: This article addresses the non-trivial problem of classifying thoracic diseases in chest X-ray (CXR) images. A single CXR image may exhibit multiple diseases, making this a multi-label classification problem. Additionally, the inherent class imbalance makes the task even more challenging, as some diseases occur more frequently than others. Our methodology is based on transfer learning, aiming to fine-tune a pretrained DenseNet121 model using CXR images from the NIH Chest X-ray14 dataset. Training from scratch would require a large-scale dataset containing millions of images, which is not available in the public domain for this multi-label classification task. To address the class imbalance problem, we propose a rank-based loss derived from the Zero-bounded Log-sum-exp and Pairwise Rank-based (ZLPR) loss, which we refer to as focal ZLPR (FZLPR). In designing FZLPR, we draw inspiration from the focal loss, where the objective is to emphasize hard-to-classify examples (instances of rare diseases) during training compared to well-classified ones. We achieve this by incorporating a "temperature" parameter into the original ZLPR loss function to scale the label scores predicted by the model during training. Experimental results on the NIH Chest X-ray14 dataset demonstrate that FZLPR loss outperforms other loss functions, including binary cross entropy (BCE) and focal loss. Moreover, by using test-time augmentations, our model trained using FZLPR loss achieves an average AUC of 80.96%, which is competitive with existing approaches.
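To make the summary concrete, the ZLPR loss ranks every positive label's score above every negative label's score via two log-sum-exp terms bounded at zero, and the FZLPR idea is to rescale the predicted scores with a temperature before applying it. The sketch below is an illustrative NumPy implementation under those assumptions; the temperature value and the exact scaling scheme used in the paper are assumptions here, not the authors' reference code.

```python
import numpy as np

def zlpr_loss(scores, labels):
    """ZLPR loss for one sample.

    scores: (C,) real-valued label scores from the model.
    labels: (C,) binary ground-truth vector in {0, 1}.
    Returns log(1 + sum_neg exp(s)) + log(1 + sum_pos exp(-s)),
    which is zero-bounded and pushes positive scores above negative ones.
    """
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Empty positive or negative sets contribute 0 via log1p(0).
    return np.log1p(np.exp(neg).sum()) + np.log1p(np.exp(-pos).sum())

def fzlpr_loss(scores, labels, temperature=2.0):
    """Sketch of focal ZLPR (FZLPR): temperature-scale the scores
    before the ZLPR loss. The default temperature is a placeholder,
    not a value taken from the paper."""
    return zlpr_loss(scores / temperature, labels)
```

With temperature = 1 this reduces to plain ZLPR; larger temperatures flatten the score distribution, which changes how strongly well-separated (easy) label pairs dominate the loss relative to hard ones.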
ISSN:2306-5354