Interactive Mitigation of Biases in Machine Learning Models for Undergraduate Student Admissions

Bibliographic Details
Main Authors: Kelly Van Busum, Shiaofen Fang
Format: Article
Language: English
Published: MDPI AG 2025-07-01
Series: AI
Subjects:
Online Access: https://www.mdpi.com/2673-2688/6/7/152
Description
Summary: Bias and fairness issues in artificial intelligence (AI) algorithms are major concerns, as people do not want to use software they cannot trust. Because these issues are intrinsically subjective and context-dependent, creating trustworthy software requires human input and feedback. (1) Introduction: This work introduces an interactive method for mitigating the bias introduced by machine learning models, allowing the user to adjust bias and fairness metrics iteratively to make a model fairer in the context of undergraduate student admissions. (2) Related Work: The social implications of bias in AI systems used in education are nuanced and can affect university reputation and student retention rates, motivating a need for the development of fair AI systems. (3) Methods and Dataset: Admissions data spanning six years from a large urban research university were used to create AI models that predict admissions decisions. These models were analyzed to detect biases they may carry with respect to three variables chosen to represent sensitive populations: gender, race, and first-generation college students. We then describe a method for bias mitigation that combines machine learning and user interaction. (4) Results and Discussion: We use three scenarios to demonstrate that this interactive bias mitigation approach can successfully decrease biases towards sensitive populations. (5) Conclusion: Our approach allows the user to examine a model and then iteratively and incrementally adjust bias and fairness metrics to change the training dataset and generate a modified AI model that is fairer, according to the user's own determination of fairness.
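
The abstract describes the mitigation loop only at a high level: train an admissions model, measure its bias with respect to a sensitive variable, and let the user adjust fairness metrics that reshape the training data until a retrained model is acceptably fair. The Python sketch below is a minimal illustration of that kind of loop under stated assumptions; the column names (admit, first_generation), the statistical parity metric, the oversampling adjustment, and the stopping threshold are hypothetical choices for illustration, not the authors' implementation.

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def statistical_parity_difference(y_pred, sensitive):
    """P(predicted admit | protected group) minus P(predicted admit | reference group)."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    return y_pred[sensitive == 1].mean() - y_pred[sensitive == 0].mean()

def mitigate(train, features, label="admit", sensitive="first_generation",
             threshold=0.05, max_rounds=10):
    """Retrain until the parity gap falls below a user-chosen threshold (hypothetical sketch)."""
    data = train.copy()
    for _ in range(max_rounds):
        model = LogisticRegression(max_iter=1000).fit(data[features], data[label])
        preds = model.predict(data[features])
        gap = statistical_parity_difference(preds, data[sensitive])
        # In the interactive setting described in the abstract, the user would
        # inspect the metric here and decide how far to adjust; this sketch
        # simply stops at a fixed tolerance.
        if abs(gap) < threshold:
            break
        # Hypothetical adjustment step: oversample admitted records from the
        # protected group to nudge the training distribution toward parity.
        boost = data[(data[sensitive] == 1) & (data[label] == 1)]
        if boost.empty:
            break
        data = pd.concat([data, boost.sample(frac=0.1, replace=True)],
                         ignore_index=True)
    return model, gap

A user-facing version of this idea would expose the fairness metric and the size of each adjustment as interactive controls rather than fixed arguments, which is the role the paper assigns to the human in the loop.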
ISSN: 2673-2688