Fairness in Binary Classification
To detect and mitigate societal bias in binary classification, you can use the fairnessMetrics, fairnessWeights, disparateImpactRemover, and fairnessThresholder functions in Statistics and Machine Learning Toolbox™. First, use fairnessMetrics to evaluate the fairness of a data set or classification model using bias and group metrics. Then, use fairnessWeights to reweight observations, disparateImpactRemover to remove the disparate impact of a sensitive attribute, or fairnessThresholder to optimize the classification threshold.
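For example, here is a minimal sketch of the evaluation step. The table tbl, the response variable label, and the sensitive attribute group are placeholder names for your own data, not part of the toolbox.

```matlab
% Evaluate data-level fairness metrics for the response "label",
% using "group" as the sensitive attribute. tbl and the variable
% names are placeholders for your own data.
evaluator = fairnessMetrics(tbl,"label",SensitiveAttributeNames="group");

% Display the computed bias and group metrics.
report(evaluator)
```

If you have predicted labels from a trained model, fairnessMetrics can evaluate model-level metrics as well; see the Predictions name-value argument on its reference page.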
The fairnessWeights and disparateImpactRemover functions provide preprocessing techniques that allow you to adjust your predictor data before training (or retraining) a classifier.
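The following sketch shows both preprocessing routes, continuing with the placeholder table tbl, response label, and sensitive attribute group from the previous example, and using fitctree purely as an example classifier.

```matlab
% Option 1: compute fairness weights that reduce the association
% between the sensitive attribute and the response, then supply
% the weights when training a classifier.
weights = fairnessWeights(tbl,"group","label");
weightedMdl = fitctree(tbl,"label",Weights=weights);

% Option 2: repair the continuous predictors to remove the disparate
% impact of the sensitive attribute, then train on the repaired data.
[remover,repairedTbl] = disparateImpactRemover(tbl,"group");
repairedMdl = fitctree(repairedTbl,"label");
```

A model trained on repaired data expects repaired inputs, so pass any new data through the returned remover object (using its transform method) before prediction.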
The fairnessThresholder function provides a postprocessing technique that adjusts labels near prediction boundaries for a trained classifier.
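A corresponding sketch of the postprocessing route, assuming a trained classifier mdl, a held-out validation table validationTbl, and a table of new observations newTbl (all placeholder names; check the fairnessThresholder reference page for the exact argument order):

```matlab
% Search for a classification threshold that improves fairness,
% using held-out validation data. Observations whose scores fall
% near the decision boundary can receive adjusted labels.
thresholder = fairnessThresholder(mdl,validationTbl,"group","label");

% Predict fairness-adjusted labels for new observations.
adjustedLabels = predict(thresholder,newTbl);
```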
To assess the final model behavior, you can use the fairnessMetrics function as well as various interpretability functions. For more information, see Interpret Machine Learning Models.
Functions
- fairnessMetrics: Evaluate the fairness of a data set or classification model using bias and group metrics
- fairnessWeights: Reweight observations for bias mitigation in binary classification
- disparateImpactRemover: Remove the disparate impact of a sensitive attribute
- fairnessThresholder: Optimize the classification threshold of a trained classifier
Topics
- Introduction to Fairness in Binary Classification
Detect and mitigate societal bias in machine learning by using the fairnessMetrics, fairnessWeights, disparateImpactRemover, and fairnessThresholder functions.
Related Information
- Explore Fairness Metrics for Credit Scoring Model (Risk Management Toolbox)
- Bias Mitigation in Credit Scoring by Reweighting (Risk Management Toolbox)
- Bias Mitigation in Credit Scoring by Disparate Impact Removal (Risk Management Toolbox)