TU1.R5.2

A Distributionally Robust Approach to Shannon Limits using the Wasserstein Distance

Vikrant Malik, Taylan Kargin, Victoria Kostina, Babak Hassibi, California Institute of Technology, United States

Session:
Rate-Distortion Theory 2

Track:
9: Shannon Theory

Location:
Omikron I

Presentation Time:
Tue, 9 Jul, 10:05 - 10:25

Session Chair:
Aaron Wagner, Cornell University

Abstract
We consider the rate-distortion function for lossy source compression, as well as the channel capacity for error correction, through the lens of distributional robustness. We assume that the distribution of the source or of the additive channel noise is unknown and lies within a Wasserstein-2 ambiguity set of a given radius centered at a specified nominal distribution, and we seek the worst-case asymptotically optimal coding rate over this ambiguity class. Varying the radius of the ambiguity set interpolates between the purely stochastic and worst-case scenarios within a single probabilistic framework. Our problem setting fits into the paradigm of compound source and compound channel models introduced by Sakrison and Blackwell, respectively. This paper shows that if the nominal distribution is Gaussian, then so is the worst-case source/noise distribution, and the compound rate-distortion and channel capacity functions admit convex formulations with Linear Matrix Inequality (LMI) constraints. These formulations yield simple closed-form expressions in the scalar case, offering insight into how the Shannon limits behave as the radius of the Wasserstein-2 ambiguity set varies.
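
As a rough scalar sketch of how such closed-form expressions can look (an illustration consistent with the Gaussian worst-case result stated in the abstract, not the paper's exact formulas): for zero-mean scalar Gaussians the Wasserstein-2 distance reduces to the gap between standard deviations, so a ball of radius r around a nominal N(0, sigma^2) contains Gaussians with standard deviation at most sigma + r; substituting that maximal variance into the classical Gaussian rate-distortion and AWGN capacity formulas gives the expressions below, where D is the mean-squared-error distortion level and P a transmit power constraint (both introduced here for illustration only).

```latex
% Illustrative scalar sketch only (assumptions stated in the lead-in), not the paper's formulas.
% For zero-mean scalar Gaussians the Wasserstein-2 distance is
%   W_2( N(0, \sigma^2), N(0, s^2) ) = |\sigma - s|,
% so the largest-variance Gaussian in a radius-r ball around N(0, \sigma^2)
% has variance (\sigma + r)^2. Plugging that variance into the classical
% Gaussian rate-distortion and AWGN capacity formulas:
\begin{align*}
  R_{\mathrm{worst}}(D) &= \tfrac{1}{2}\log\frac{(\sigma + r)^{2}}{D},
      && 0 < D \le (\sigma + r)^{2}, \\
  C_{\mathrm{worst}}    &= \tfrac{1}{2}\log\!\left(1 + \frac{P}{(\sigma + r)^{2}}\right),
      && P \text{ the transmit power constraint.}
\end{align*}
```

In this sketch both expressions reduce to the familiar nominal Shannon limits at r = 0, with the compressed rate growing and the capacity shrinking as r increases, matching the interpolation between the stochastic and worst-case scenarios described in the abstract.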