MO2.R3.1

Group Fairness with Uncertain Sensitive Attributes

Abhin Shah, Maohao Shen, Jongha Ryu, MIT, United States; Subhro Das, Prasanna Sattigeri, IBM Research, United States; Yuheng Bu, University of Florida, United States; Gregory Wornell, MIT, United States

Session:
Fairness

Track:
16: Fairness

Location:
Ypsilon IV-V-VI

Presentation Time:
Mon, 8 Jul, 11:50 - 12:10

Session Chair:
Flavio Calmon, Harvard University
Abstract
Learning a fair predictive model is crucial to mitigate biased decisions against minority groups in high-stakes applications. A common approach to learning such a model involves solving an optimization problem that maximizes the predictive power of the model under an appropriate group fairness constraint. However, in practice, sensitive attributes are often missing or noisy, resulting in uncertainty, and solely enforcing fairness constraints on uncertain sensitive attributes can fall significantly short of the level of fairness achievable without uncertainty. To understand this phenomenon, we consider the problem of fair learning for Gaussian data and reduce it to a quadratically constrained quadratic program (QCQP). To ensure a strict fairness guarantee given uncertain sensitive attributes, we propose a robust QCQP and characterize its solution with an intuitive geometric interpretation. When uncertainty arises due to limited labeled sensitive attributes, our analysis identifies non-trivial regimes where uncertainty incurs no performance loss while continuing to guarantee strict fairness. As an illustrative application of our analysis, we propose a bootstrap-based algorithm that applies beyond the Gaussian case. We demonstrate the value of our analysis and algorithm on synthetic as well as real-world data.
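The abstract mentions a bootstrap-based approach for handling uncertainty from limited labeled sensitive attributes. As a hedged illustration (not the authors' actual algorithm, whose details are in the paper), the sketch below shows the general idea of bootstrapping a confidence upper bound on a group fairness metric, here the demographic parity gap, so that a robust constraint can be enforced against the worst-case bound rather than the point estimate. The function name and interface are hypothetical.

```python
import numpy as np

def bootstrap_fairness_gap(preds, sensitive, n_boot=1000, alpha=0.05, seed=0):
    """Bootstrap an upper confidence bound on the demographic parity gap.

    preds: binary predictions; sensitive: binary group labels (e.g. from a
    small labeled subset, the source of uncertainty). Returns the point
    estimate of the gap and a (1 - alpha) bootstrap upper bound; a robust
    fairness constraint would be enforced against the upper bound.
    This is an illustrative sketch, not the paper's algorithm.
    """
    rng = np.random.default_rng(seed)
    preds = np.asarray(preds, dtype=float)
    sensitive = np.asarray(sensitive)
    n = len(preds)

    gaps = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)      # resample with replacement
        p, s = preds[idx], sensitive[idx]
        if s.min() == s.max():                # resample missed one group
            continue
        gaps.append(abs(p[s == 1].mean() - p[s == 0].mean()))
    gaps = np.sort(gaps)

    point = abs(preds[sensitive == 1].mean() - preds[sensitive == 0].mean())
    upper = gaps[int(np.ceil((1 - alpha) * len(gaps))) - 1]
    return point, upper
```

Enforcing fairness at the bootstrap upper bound is conservative by design: it trades some predictive performance for a guarantee that holds despite the uncertainty in the sensitive-attribute labels, which is the trade-off the paper analyzes.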