MO3.R3.1

On the Privacy Guarantees of Differentially Private Stochastic Gradient Descent

Shahab Asoodeh, McMaster University, Canada; Mario Diaz, Universidad Nacional Autónoma de México, Mexico

Session:
Differential Privacy in Learning 1

Track:
16: Privacy and Fairness

Location:
Ypsilon IV-V-VI

Presentation Time:
Mon, 8 Jul, 14:35 - 14:55

Session Chair:
Oliver Kosut, Arizona State University

Abstract
Differentially Private Stochastic Gradient Descent (DP-SGD) is a widely adopted algorithm for privately training machine learning models. An inherent feature of this algorithm is gradient clipping, which bounds the influence of individual samples during training. However, gradient clipping also introduces non-convexity into the problem, making it challenging to derive upper bounds on the privacy loss. In this paper, we establish effective upper bounds on the privacy loss of both projected DP-SGD and regularized DP-SGD, without relying on convexity or smoothness assumptions on the loss function. Our approach directly analyzes the hockey-stick divergence between coupled stochastic processes via non-linear data processing inequalities.
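To make the objects in the abstract concrete, the following is a minimal sketch of one projected DP-SGD update: per-example gradients are clipped to l2 norm at most C, Gaussian noise scaled by a noise multiplier sigma is added, and the iterate is projected back onto an l2 ball of radius R. All names and parameters (C, sigma, lr, R) are illustrative assumptions, not the paper's notation or analysis.

```python
import numpy as np

def clip(g, C):
    # Rescale the per-example gradient so its l2 norm is at most C.
    n = np.linalg.norm(g)
    return g * min(1.0, C / n) if n > 0 else g

def project(w, R):
    # Euclidean projection onto the l2 ball of radius R
    # (the "projected" part of projected DP-SGD).
    n = np.linalg.norm(w)
    return w * (R / n) if n > R else w

def dp_sgd_step(w, per_example_grads, C, sigma, lr, R, rng):
    # One noisy update: clip each gradient, sum, add Gaussian noise
    # with standard deviation sigma * C, average, step, and project.
    clipped = [clip(g, C) for g in per_example_grads]
    noise = rng.normal(0.0, sigma * C, size=w.shape)
    g = (np.sum(clipped, axis=0) + noise) / len(per_example_grads)
    return project(w - lr * g, R)
```

Regularized DP-SGD would replace the projection with an added regularization term in the loss; the paper's privacy analysis covers both variants without convexity or smoothness assumptions.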