Improving fairness in machine learning-enabled affirmative actions: a case study in outreach activities in healthcare

David Barrera Ferro, Sally Brailsford, Adriane Chapman

Output: Contribution to journal › Article › peer-reviewed

Abstract

Over the last decade, due to the growing availability of data and computational resources, machine learning (ML) approaches have started to play a key role in the implementation of affirmative-action policies and programs. The underlying assumption is that resource allocation can be informed by the prediction of individual risks, improving the prioritization of potential beneficiaries and increasing the performance of the system. It is therefore important to ensure that biases in the data or the algorithms do not lead to some individuals being treated unfavourably. In particular, the notion of group-based fairness seeks to ensure that individuals will not be discriminated against on the basis of their group's protected characteristics. This work proposes an optimization model to improve fairness in ML-enabled affirmative actions, following a post-processing approach. Our case study is an outreach program to increase cervical cancer screening among hard-to-reach women in Bogotá, Colombia. Bias may occur because the protected group (women in the most severe poverty) is under-represented in the data. Computational experiments show that it is possible to address ML bias while maintaining high levels of accuracy.
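To illustrate the post-processing idea the abstract describes, the sketch below equalizes selection rates across groups by choosing a per-group decision threshold (a demographic-parity-style criterion). This is a minimal, hypothetical example, not the optimization model proposed in the paper; the function name and parameters are assumptions for illustration only.

```python
import numpy as np

def group_thresholds(scores, groups, base_threshold=0.5):
    """Illustrative post-processing sketch (not the paper's model).

    Given risk scores and group labels, pick a per-group threshold so that
    each group's selection rate matches the overall selection rate obtained
    with `base_threshold`, mitigating under-selection of an
    under-represented protected group.
    """
    # Overall fraction selected under the single shared threshold.
    overall_rate = np.mean(scores >= base_threshold)
    thresholds = {}
    for g in np.unique(groups):
        # Sort this group's scores from highest to lowest.
        s = np.sort(scores[groups == g])[::-1]
        # Select the top-k within the group so its rate matches overall_rate.
        k = max(1, int(round(overall_rate * len(s))))
        thresholds[g] = s[k - 1]
    return thresholds
```

With continuous scores and no ties, each group's selection rate under its adjusted threshold is approximately `overall_rate`, so the disparity between groups shrinks even when one group's score distribution is shifted downward.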
Original language: English
Pages (from-to): 1–12
Journal: Journal of the Operational Research Society
DOI
Status: Published - 24 May 2024

