MO3.R3.3

Utilitarian Privacy and Private Sampling

Aman Bansal, Stanford University, United States; Rahul Chunduru, University of Wisconsin, Madison, United States; Deepesh Data, University of California, Los Angeles, United States; Manoj Prabhakaran, Indian Institute of Technology Bombay, India

Session:
Differential Privacy in Learning 1

Track:
16: Privacy and Fairness

Location:
Ypsilon IV-V-VI

Presentation Time:
Mon, 8 Jul, 15:15 - 15:35

Session Chair:
Oliver Kosut, Arizona State University

Abstract
Differential Privacy (DP) has become a gold standard in privacy-preserving data analysis. While it provides a rigorous notion of privacy, there are settings where its applicability is limited. In this work, we introduce a new notion of privacy, called Utilitarian Privacy (UP), that complements DP. Informally, a UP mechanism is required not to include any "non-utile information" in the output. In particular, if two databases result in "close-by" outputs, then the mechanism should not allow distinguishing between them. On the one hand, UP permits weaker privacy guarantees when distinguishing between neighboring databases is important for utility; on the other hand, UP gives stronger privacy guarantees by making even non-neighboring databases indistinguishable from each other, if they yield close-by outcomes. We show that for real-valued functions, adding appropriately calibrated Laplace noise to the output, remarkably, achieves UP guarantees. A separate contribution of this work is to study private sampling, by extending the accuracy notion of mechanisms to sampling tasks. We show that for real-valued random variables, adding Laplace noise, calibrated according to a generalized sensitivity measure of the output distribution, yields DP and UP. Both of the above extensions build on a recently introduced notion of "lossy Wasserstein distance", a two-parameter error measure for distributions.
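For readers unfamiliar with sensitivity-calibrated noise, the sketch below shows the classical epsilon-DP Laplace mechanism for a real-valued query. It is only an illustration of the standard DP case: the abstract states that an appropriately calibrated Laplace mechanism also achieves UP, and that private sampling calibrates noise to a generalized sensitivity measure, but those calibrations are not specified in the abstract, so the function name and parameters here are illustrative assumptions rather than the authors' construction.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release a real-valued query answer with Laplace noise.

    Classical epsilon-DP calibration: the noise scale is the query's
    global sensitivity divided by epsilon. The UP and private-sampling
    calibrations described in the paper are not reproduced here.
    """
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon          # standard DP noise scale
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Example: privately release a count query (sensitivity 1) at epsilon = 0.5.
noisy_count = laplace_mechanism(true_value=1234.0, sensitivity=1.0, epsilon=0.5)
```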