Reducing Bias in Classification using Fairness Stacking Meta-Learning

Authors

  • Omar Shakir, Directorate of Education, Nineveh, Mosul, Iraq
  • Maan Y. Anad Alsaleem, Directorate of Education, Nineveh
  • Dindar M. Ahmed, Duhok Polytechnic University

DOI:

https://doi.org/10.25195/ijci.v51i2.629

Keywords:

Fairness, Bias Mitigation, Stacking Ensemble, Equal Opportunity Difference

Abstract

The predictive validity of machine learning models depends on their training data. When that data encodes historical, social, or demographic inequalities, algorithms trained on it reproduce unfair outcomes. This paper proposes a fairness-constrained stacking meta-learning approach for reducing bias in classification by aggregating a set of classifiers through a constrained ensemble learning scheme. A set of base classifiers, including Decision Tree, Naive Bayes, Support Vector Machine (SVM), and LightGBM, is trained and evaluated on the Adult Census Income dataset using both predictive and fairness metrics. The final meta-model is constructed as an aggregation of only the fair-performing models; models failing to meet the fairness threshold are excluded. Ensemble weights are then optimized to maximize the F1-score while maintaining the fairness constraints. Experimental results demonstrate that the proposed method achieves strong predictive performance (Accuracy = 0.91, F1-score = 0.82) while substantially reducing disparity between demographic groups (equal opportunity difference, EOD = 0.03 for sex and 0.04 for race). These findings indicate that fairness-aware stacking ensembles can mitigate algorithmic bias through an aggregation framework that balances accuracy and fairness.
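The two-stage scheme the abstract describes (filter base models by a fairness threshold, then search ensemble weights that maximize F1 under the same constraint) can be sketched as below. This is a minimal illustration, not the authors' implementation: the threshold value (0.05), the coarse weight grid, and all function names are assumptions for the example. EOD here is the absolute gap in true-positive rate between the two groups.

```python
import numpy as np
from itertools import product

def eod(y_true, y_pred, group):
    """Equal Opportunity Difference: |TPR(group=1) - TPR(group=0)|."""
    tprs = []
    for g in (0, 1):
        mask = (group == g) & (y_true == 1)
        tprs.append(y_pred[mask].mean())
    return abs(tprs[1] - tprs[0])

def f1(y_true, y_pred):
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def fair_stack(probas, y_true, group, eod_max=0.05):
    # Stage 1: keep only base models whose thresholded predictions
    # satisfy the EOD fairness constraint.
    fair = [p for p in probas
            if eod(y_true, (p >= 0.5).astype(int), group) <= eod_max]
    # Stage 2: coarse grid search over convex weights, maximizing F1
    # while the blended prediction still meets the same constraint.
    best_w, best_f1 = None, -1.0
    grid = np.linspace(0.0, 1.0, 11)
    for w in product(grid, repeat=len(fair)):
        if not np.isclose(sum(w), 1.0):
            continue  # only convex combinations
        blend = sum(wi * p for wi, p in zip(w, fair))
        pred = (blend >= 0.5).astype(int)
        if eod(y_true, pred, group) <= eod_max and f1(y_true, pred) > best_f1:
            best_w, best_f1 = w, f1(y_true, pred)
    return best_w, best_f1

# Toy validation-set demo with synthetic base-model probabilities:
# two fair models and one that simply predicts the protected attribute.
y = np.tile([0, 1, 1, 0], 100)        # 400 binary labels
group = np.tile([0, 0, 1, 1], 100)    # binary protected attribute
p_fair1 = np.where(y == 1, 0.8, 0.2)  # accurate, group-independent
p_fair2 = np.where(y == 1, 0.7, 0.3)  # accurate, group-independent
p_biased = group.astype(float)        # predicts the group: EOD = 1, excluded
weights, best_f1 = fair_stack([p_fair1, p_fair2, p_biased], y, group)
```

The biased model is dropped in stage 1, so the search only ever blends the two fair models; the paper's version would of course use real held-out probabilities from the trained base classifiers rather than synthetic ones.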


Author Biography

Dindar M Ahmed, Duhok Polytechnic University

Department of Information Technology,
Technical College of Duhok


Published

2025-11-12