Idiap Research Institute
Optimizing Fairness and Utility in Healthcare Machine Learning Models
Type of publication: Journal paper
Citation: Fatoreto_ASOC_2025
Publication status: Published
Journal: Applied Soft Computing
Volume: 181
Number: 113426
Year: 2025
Month: September
Abstract: Demographic fairness, or equity, is a crucial aspect of machine learning models, particularly in critical domains such as healthcare, where errors can have severe consequences. A fair model avoids making distinctions between groups that differ in sensitive or protected attributes. Although several metrics are available to measure fairness and ensure equity between groups, proposed solutions must uphold fairness without compromising the utility of the models. Optimization problems that incorporate both fairness and utility information can help find the best machine learning model, and mathematical programming emerges as a valuable tool in this context. One approach is to use optimization with constraints that impose a maximum difference between groups. Considering multiple constraints that encompass the various sensitive attributes present in the dataset when fitting the models is essential to ensure intersectional fairness, minimize hidden biases, and promote equitable decisions in diverse contexts. In this work, we propose constraining the minimization of the loss function with multiple fairness-related metrics, ensuring that each fairness metric does not exceed a maximum limit with respect to the impartiality of the decision boundary. We use metrics derived from Pareto fronts, a method from multi-objective optimization, adapting it to single-objective optimization by incorporating fairness characteristics into the constraints; the points on the resulting front correspond to different fairness thresholds. We compare our proposed model with existing approaches from the literature and demonstrate its convergence to logistic regression on simulated data. Furthermore, we apply this strategy to health-related datasets and to other domains commonly used in fairness and optimization articles. We find that, using the proposed metrics, our model performs better, even with data that are imbalanced with respect to sensitive attributes and with smaller datasets.
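As a rough illustration of the constrained formulation the abstract describes (not the paper's actual implementation), the sketch below minimizes a logistic loss subject to a fairness constraint using SciPy's SLSQP solver. The synthetic data, the demographic-parity gap used as the fairness metric, and the threshold `tau` are all hypothetical choices for illustration; the paper constrains multiple fairness metrics at once.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical data: features X, labels y, binary sensitive attribute a.
n = 200
X = rng.normal(size=(n, 3))
a = rng.integers(0, 2, size=n)
y = (X[:, 0] + 0.5 * a + rng.normal(scale=0.5, size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w):
    # Logistic (cross-entropy) loss of a linear model with weights w.
    p = sigmoid(X @ w)
    eps = 1e-9
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def parity_gap(w):
    # Illustrative fairness metric: mean predicted-score gap between groups.
    p = sigmoid(X @ w)
    return abs(p[a == 1].mean() - p[a == 0].mean())

tau = 0.05  # hypothetical fairness threshold on the parity gap

# Inequality constraint for SLSQP: tau - parity_gap(w) >= 0.
cons = {"type": "ineq", "fun": lambda w: tau - parity_gap(w)}
res = minimize(loss, x0=np.zeros(3), method="SLSQP", constraints=cons)

print(f"parity gap at solution: {parity_gap(res.x):.3f} (threshold {tau})")
```

Sweeping `tau` over a range of values and recording the resulting (loss, fairness) pairs yields the kind of threshold-indexed Pareto-front points mentioned in the abstract.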
Main Research Program: AI for Everyone
Additional Research Programs: AI for Life
Projects: FAIRMI
Authors: Fatoreto, Maira
Özbulak, Gökhan
Berton, Lilian
Anjos, André