TY - JOUR
T1 - Improving fairness in machine learning-enabled affirmative actions: a case study in outreach activities in healthcare
T2 - Journal of the Operational Research Society
AU - Barrera Ferro, David
AU - Brailsford, Sally
AU - Chapman, Adriane
N1 - Barrera Ferro, D., Brailsford, S., & Chapman, A. (2024). Improving fairness in machine learning-enabled affirmative actions: a case study in outreach activities in healthcare. Journal of the Operational Research Society, 1–12. https://doi.org/10.1080/01605682.2024.2354364
PY - 2024/5/24
AB - Over the last decade, due to the growing availability of data and computational resources, machine learning (ML) approaches have started to play a key role in the implementation of affirmative-action policies and programs. The underlying assumption is that resource allocation can be informed by the prediction of individual risks, improving the prioritization of potential beneficiaries and increasing the performance of the system. Therefore, it is important to ensure that biases in the data or the algorithms do not lead to treating some individuals unfavourably. In particular, the notion of group-based fairness seeks to ensure that individuals will not be discriminated against on the basis of their group’s protected characteristics. This work proposes an optimization model to improve fairness in ML-enabled affirmative actions, following a post-processing approach. Our case study is an outreach program to increase cervical cancer screening among hard-to-reach women in Bogotá, Colombia. Bias may occur because the protected group (women in the most severe poverty) is under-represented in the data. Computational experiments show that it is possible to address ML bias while maintaining high levels of accuracy.
KW - Preventive healthcare programs
KW - Fairness
KW - Machine learning
UR - http://dx.doi.org/10.1080/01605682.2024.2354364
DO - 10.1080/01605682.2024.2354364
M3 - Article
SN - 0160-5682
SP - 1
EP - 12
JO - Journal of the Operational Research Society
JF - Journal of the Operational Research Society
ER -