Analyzing Fairness of Computer Vision and Natural Language Processing Models

Bibliographic Details
Main Authors: Ahmed Rashed, Abdelkrim Kallich, Mohamed Eltayeb
Format: Article
Language: English
Published: MDPI AG, 2025-02-01
Series: Information
Subjects: machine learning fairness; bias analysis
Online Access: https://www.mdpi.com/2078-2489/16/3/182
Collection: DOAJ
Description:
Machine learning (ML) algorithms play a critical role in decision-making across domains such as healthcare, finance, education, and law enforcement. However, concerns about fairness and bias in these systems raise significant ethical and social challenges. To address them, this research uses two prominent fairness libraries, Fairlearn by Microsoft and AIF360 by IBM. These libraries offer comprehensive frameworks for fairness analysis, providing tools to evaluate fairness metrics, visualize results, and implement bias mitigation algorithms. The study focuses on assessing and mitigating bias in unstructured datasets using Computer Vision (CV) and Natural Language Processing (NLP) models. The primary objective is a comparative analysis of the performance of mitigation algorithms from the two libraries. The algorithms are applied both individually, at a single stage of the ML lifecycle (pre-processing, in-processing, or post-processing), and sequentially across multiple stages. The results reveal that some sequential applications improve mitigation performance, effectively reducing bias while maintaining the model's predictive performance. Publicly available datasets from Kaggle were chosen for this research, providing a practical context for evaluating fairness in real-world machine learning workflows.
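The kind of group-fairness metric the description attributes to Fairlearn and AIF360 can be illustrated with a small, library-free sketch. The function below computes the demographic parity difference (the gap in positive-prediction rates between demographic groups), one of the standard metrics both libraries expose; the function name and toy data here are illustrative only, not either library's actual API.

```python
def demographic_parity_difference(y_pred, sensitive):
    """Largest gap in positive-prediction (selection) rate between groups.

    A value of 0 means every group receives positive predictions at the
    same rate; larger values indicate more disparity.
    """
    counts = {}  # group -> (total, positives)
    for yp, group in zip(y_pred, sensitive):
        total, pos = counts.get(group, (0, 0))
        counts[group] = (total + 1, pos + (yp == 1))
    rates = [pos / total for total, pos in counts.values()]
    return max(rates) - min(rates)

# Toy example: group "a" is predicted positive 3/4 of the time, group "b" 1/4.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5
```

In Fairlearn and AIF360 this statistic is computed by the libraries' own metric utilities rather than hand-rolled as above; the sketch only shows what the number measures.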
ISSN: 2078-2489
DOI: 10.3390/info16030182
Author Affiliations:
Ahmed Rashed, Abdelkrim Kallich: Department of Physics, Shippensburg University of Pennsylvania, Franklin Science Center, 1871 Old Main Drive, Shippensburg, PA 17257, USA
Mohamed Eltayeb: Department of Data Science, College of Computer and Information Systems, Islamic University of Madinah, Al Jamiah Campus, Madinah 42351, Saudi Arabia