Training Generative Models from Privatized Data via Entropic Optimal Transport
Daria Reshetova, Wei-Ning Chen, Ayfer Ozgur, Stanford University, United States
Session: Differential Privacy in Learning 2
Track: 16: Privacy and Fairness
Location: Ypsilon IV-V-VI
Presentation Time: Mon, 8 Jul, 17:05 - 17:25
Session Chair: Ayfer Ozgur, Stanford University
Abstract
Local differential privacy is a powerful method for privacy-preserving data collection. In this paper, we develop a framework for training Generative Adversarial Networks (GANs) on differentially privatized data. We show that entropic regularization of optimal transport -- a popular regularization method in the literature, often leveraged for its computational benefits -- enables the generator to learn the raw (unprivatized) data distribution even though it only has access to privatized samples. We further prove that the same regularization yields fast statistical convergence at the parametric rate. Entropic regularization of optimal transport thus uniquely mitigates both the effect of privatization noise and the curse of dimensionality in statistical convergence. The omitted proofs can be found in the full version of the paper: https://arxiv.org/abs/2306.09547.
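As background for the entropic optimal transport objective the abstract refers to, the regularized OT cost between two empirical samples is commonly computed with Sinkhorn iterations. The following is a minimal NumPy sketch of that standard computation, not the paper's training procedure; the function name, the squared-Euclidean cost, and the parameters `eps` (regularization strength) and `n_iters` are illustrative choices, and the Laplace noise in the usage note stands in for a generic local privatization mechanism:

```python
import numpy as np

def sinkhorn_ot(x, y, eps=0.5, n_iters=200):
    """Entropic-regularized OT between empirical samples x (n,d) and y (m,d).

    Returns the transport plan P and the transport cost <P, C>,
    computed by alternating Sinkhorn scaling updates.
    """
    n, m = len(x), len(y)
    # Squared Euclidean cost matrix C[i, j] = ||x_i - y_j||^2.
    C = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    K = np.exp(-C / eps)           # Gibbs kernel
    a = np.full(n, 1.0 / n)        # uniform weights on x samples
    b = np.full(m, 1.0 / m)        # uniform weights on y samples
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):
        u = a / (K @ v)            # match row marginals to a
        v = b / (K.T @ u)          # match column marginals to b
    P = u[:, None] * K * v[None, :]  # entropic transport plan
    return P, float(np.sum(P * C))
```

In the setting of the abstract, `y` would be privatized samples, e.g. `y = data + rng.laplace(scale=1.0, size=data.shape)` for a Laplace mechanism, and the generator would be trained to minimize this regularized cost against them; the paper's result is that the entropic penalty lets the minimizer recover the raw distribution despite the noise.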