MO3.R3.2

Controlled privacy leakage propagation throughout differential private overlapping grouped learning

Shahrzad Kiani, University of Toronto, Canada; Franziska Boenisch, CISPA, Germany; Stark C. Draper, University of Toronto, Canada

Session:
Differential Privacy in Learning 1

Track:
16: Privacy and Fairness

Location:
Ypsilon IV-V-VI

Presentation Time:
Mon, 8 Jul, 14:55 - 15:15

Session Chair:
Oliver Kosut, Arizona State University

Abstract
Federated Learning (FL) is a privacy-centric framework for distributed learning where devices collaborate to develop a shared global model while keeping their raw data local. Since workers may naturally form groups based on common objectives and privacy rules, we are motivated to extend FL to such settings. As workers can contribute to multiple groups, complexities arise in understanding privacy leakage and in adhering to privacy policies. In this paper, we propose differential private overlapping grouped learning (DP-OGL), which shares learning across groups through common workers. We derive formal privacy guarantees between every pair of workers under the honest-but-curious threat model with multiple group memberships. Our experiments show that DP-OGL improves privacy-utility trade-offs compared to a baseline FL system.
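To give a feel for why multiple group memberships complicate privacy accounting, here is a minimal illustrative sketch, not the paper's DP-OGL analysis: under basic sequential composition, a worker that runs an (ε_i, δ_i)-differentially-private mechanism in each of its k groups ends up with an overall guarantee of (Σ ε_i, Σ δ_i). The function name and budget values below are hypothetical, chosen only for illustration.

```python
# Illustrative sketch (assumed, not from the paper): basic sequential
# composition of per-group DP budgets for a worker that belongs to
# several overlapping groups.

def basic_composition(budgets):
    """budgets: list of (epsilon, delta) pairs, one per group membership.

    Returns the composed (epsilon, delta) guarantee under basic
    sequential composition: epsilons and deltas simply add.
    """
    eps = sum(e for e, _ in budgets)
    delta = sum(d for _, d in budgets)
    return eps, delta

# A worker contributing to three groups, each trained with (1.0, 1e-5)-DP,
# is covered overall by (3.0, 3e-5)-DP under basic composition.
eps, delta = basic_composition([(1.0, 1e-5)] * 3)
print(eps, delta)
```

Basic composition is loose; tighter accountants (e.g., advanced composition or Rényi DP) give smaller overall ε, which is precisely the kind of refinement a per-pair leakage analysis like the paper's aims to exploit.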