Fairness Adequacy Test for Machine Learning Systems
Authors
Akinola, Kehinde Oluwasayo
Publisher
East Carolina University
Abstract
As Machine Learning (ML) systems assume a larger role in decisions that affect
people’s lives, such as who receives a loan, gains access to healthcare, or is granted early release from
prison, ensuring these systems are fair is more important than ever. However, current
fairness checks often miss the subtle and complex ways in which bias can appear in
algorithms.
This dissertation tackles that gap by proposing a practical framework for testing
how well machine learning models meet fairness standards, especially across protected
groups. Instead of relying solely on standard performance metrics, the approach
combines statistical tools with stress-testing techniques to uncover hidden or
overlooked biases.
There are four main contributions. First, we introduce a fairness adequacy test
using metrics like Equal Opportunity Difference (EOD) and Equalised Odds Metrics
(EOM) to examine disparities in error rates across groups. Second, we apply mutation
testing by altering sensitive features such as race or gender to see how model outputs
change, helping assess fairness under different conditions. Third, we use permutation
methods to simulate edge cases and test how models respond to unusual or extreme
inputs. Finally, we validate this approach with real-world case studies in areas like
healthcare, finance, and criminal justice, where fairness is especially critical.
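To make these components concrete, the sketch below illustrates how such checks
might look in practice. It is a minimal Python illustration under stated
assumptions only: binary labels, binary predictions, and a binary sensitive
attribute coded 0/1. The synthetic data, the ThresholdModel stand-in, and all
function names are hypothetical conveniences for exposition, not the
dissertation’s actual implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic data: column 0 plays the role of a binary sensitive attribute.
    n = 2000
    X = rng.normal(size=(n, 4))
    X[:, 0] = rng.integers(0, 2, size=n)
    y_true = (X[:, 1] + 0.5 * X[:, 0]
              + rng.normal(scale=0.5, size=n) > 0).astype(int)

    # Stand-in classifier; anything exposing .predict(X) -> {0,1} would do.
    class ThresholdModel:
        def predict(self, X):
            return (X[:, 1] + 0.3 * X[:, 0] > 0).astype(int)

    model = ThresholdModel()
    y_pred = model.predict(X)
    group = X[:, 0].astype(int)

    def equal_opportunity_difference(y_true, y_pred, group):
        # EOD: absolute gap in true-positive rates between the two groups.
        tpr = [y_pred[(group == g) & (y_true == 1)].mean() for g in (0, 1)]
        return abs(tpr[0] - tpr[1])

    def equalized_odds_difference(y_true, y_pred, group):
        # Equalized-odds-style gap: larger of the TPR gap and the FPR gap.
        gaps = []
        for label in (1, 0):  # label=1 gives the TPR gap, label=0 the FPR gap
            rate = [y_pred[(group == g) & (y_true == label)].mean()
                    for g in (0, 1)]
            gaps.append(abs(rate[0] - rate[1]))
        return max(gaps)

    def sensitive_mutation_rate(model, X, sensitive_col=0):
        # Mutation test: flip the sensitive feature and count how many
        # predictions change as a result.
        X_mut = X.copy()
        X_mut[:, sensitive_col] = 1 - X_mut[:, sensitive_col]
        return np.mean(model.predict(X) != model.predict(X_mut))

    def permutation_churn(model, X, col, rng):
        # Permutation stress test: shuffle one feature column and measure
        # how often the model's decision flips under the perturbed inputs.
        X_perm = X.copy()
        X_perm[:, col] = rng.permutation(X_perm[:, col])
        return np.mean(model.predict(X) != model.predict(X_perm))

    print("EOD:", equal_opportunity_difference(y_true, y_pred, group))
    print("equalized-odds gap:", equalized_odds_difference(y_true, y_pred, group))
    print("mutation flip rate:", sensitive_mutation_rate(model, X))
    print("permutation churn:", permutation_churn(model, X, col=1, rng=rng))

An adequacy test in this spirit would then require each reported gap to fall
below a chosen tolerance before the model is judged to meet the fairness
standard.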
By offering a clear and testable way to evaluate fairness, this work aims to support
the development of more trustworthy, accountable, and equitable AI systems.
