Improving fairness in machine learning-enabled affirmative actions: a case study in outreach activities in healthcare

David Barrera Ferro, Sally Brailsford, Adriane Chapman

Research output: Contribution to journal › Article › peer-review

Abstract

Over the last decade, due to the growing availability of data and computational resources, machine learning (ML) approaches have started to play a key role in the implementation of affirmative-action policies and programs. The underlying assumption is that resource allocation can be informed by the prediction of individual risks, improving the prioritization of potential beneficiaries and increasing the performance of the system. It is therefore important to ensure that biases in the data or the algorithms do not lead to treating some individuals unfavourably. In particular, the notion of group-based fairness seeks to ensure that individuals will not be discriminated against on the basis of their group's protected characteristics. This work proposes an optimization model to improve fairness in ML-enabled affirmative actions, following a post-processing approach. Our case study is an outreach program to increase cervical cancer screening among hard-to-reach women in Bogotá, Colombia. Bias may occur because the protected group (women in the most severe poverty) is under-represented in the data. Computational experiments show that it is possible to address ML bias while maintaining high levels of accuracy.
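The abstract does not detail the authors' optimization model, but the general post-processing idea can be illustrated with a minimal sketch, assuming a simple demographic-parity criterion (equal selection rates across groups) applied on top of an existing classifier's risk scores. The function group_thresholds, the capacity parameter, and the synthetic data below are illustrative assumptions, not the paper's method.

# A minimal sketch (not the authors' optimization model) of post-processing
# for group fairness: given risk scores from any trained classifier, choose
# group-specific cutoffs so that selection rates are roughly equal across
# the protected and unprotected groups. All names and data are illustrative.
import numpy as np

def group_thresholds(scores, groups, capacity):
    """Choose a per-group cutoff so each group is selected at the same rate.

    scores   : predicted risk for each individual (higher = higher priority)
    groups   : group label per individual (e.g., 0 = unprotected,
               1 = protected group, here women in the most severe poverty)
    capacity : total number of individuals the outreach program can reach
    """
    scores = np.asarray(scores)
    groups = np.asarray(groups)
    rate = capacity / len(scores)  # common selection rate for every group
    thresholds = {}
    for g in np.unique(groups):
        s = scores[groups == g]
        # Cutoff at the (1 - rate) quantile of this group's own scores,
        # so roughly a fraction `rate` of the group lies above it.
        thresholds[g] = np.quantile(s, 1 - rate)
    return thresholds

# Illustrative data: the protected group (1) is smaller and its scores are
# shifted downward, mimicking the under-representation the abstract describes.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.beta(5, 2, 800), rng.beta(3, 3, 200)])
groups = np.concatenate([np.zeros(800, int), np.ones(200, int)])

th = group_thresholds(scores, groups, capacity=300)
selected = np.array([scores[i] >= th[groups[i]] for i in range(len(scores))])
for g in (0, 1):
    print(f"group {g}: selection rate = {selected[groups == g].mean():.2f}")

Under such a criterion, the protected group cannot be crowded out of the program merely because its risk scores are systematically lower; a single global threshold, by contrast, would reproduce that imbalance.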
Original language: English
Pages (from-to): 1–12
Journal: Journal of the Operational Research Society
State: Published - 24 May 2024

Keywords

  • Preventive healthcare programs
  • Fairness
  • Machine learning
