Milad Sefidgaran, Paris Research Center, Huawei Technologies France, France; Abdellatif Zaidi, Université Gustave Eiffel, France; Piotr Krasnowski, Paris Research Center, Huawei Technologies France, France
Session: Statistical Learning
Track: 8: Machine Learning
Location: Ballroom II & III
Presentation Time: Tue, 9 Jul, 10:45 - 11:05
Session Chair: Meir Feder, Tel-Aviv University
Abstract
A client device that has access to $n$ training data samples needs to obtain a statistical hypothesis or model $W$ and then send it to a remote server. The client and the server share a common randomness sequence as well as a prior on the hypothesis space. In this problem, a suitable hypothesis or model $W$ should meet two distinct design criteria simultaneously: (i) small (population) risk during the inference phase and (ii) small `complexity', so that it can be conveyed to the server with minimum communication cost. In this paper, we propose a joint training and source coding scheme with provable in-expectation guarantees, where the expectation is over the encoder's output message. Specifically, we show that by imposing a constraint on a suitable Kullback-Leibler divergence between the conditional distribution induced by a compressed learning model $\widehat{W}$ given $W$ and the prior, one simultaneously guarantees small average empirical risk (aka training loss), small average generalization error, and small average communication cost. We also consider a one-shot scenario, in which the guarantees on the empirical risk and the generalization error hold for every output message of the encoder.
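To give a rough feel for the kind of objective the abstract describes, the snippet below is a minimal sketch, not the paper's actual scheme: it trains a toy logistic-regression model while penalizing the KL divergence between a Gaussian "compression" kernel $N(W, \sigma^2 I)$ around the trained model and a Gaussian prior $N(0, \sigma_0^2 I)$. The Gaussian kernel and prior, the toy data, and the hyperparameters `sigma`, `sigma0`, and `lam` are all illustrative assumptions, chosen only to show how a single KL term can jointly regularize training and bound the cost of communicating a compressed model $\widehat{W}$.

```python
import numpy as np

# Illustrative sketch (assumptions, not the paper's scheme): empirical risk
# plus a KL penalty between the compression kernel N(W, sigma^2 I) and the
# shared prior N(0, sigma0^2 I).

rng = np.random.default_rng(0)

# Toy dataset: n samples, logistic loss as the empirical risk.
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true + 0.1 * rng.normal(size=n) > 0).astype(float)

sigma, sigma0, lam = 0.1, 1.0, 0.05  # illustrative hyperparameters

def empirical_risk(w):
    """Average logistic loss over the n training samples."""
    z = X @ w
    return np.mean(np.log1p(np.exp(-(2 * y - 1) * z)))

def kl_gaussian(w):
    """Closed-form KL( N(w, sigma^2 I) || N(0, sigma0^2 I) )."""
    return 0.5 * (d * (sigma**2 / sigma0**2 - 1 + 2 * np.log(sigma0 / sigma))
                  + np.dot(w, w) / sigma0**2)

def gradient(w):
    """Gradient of empirical_risk(w) + lam * kl_gaussian(w)."""
    z = X @ w
    s = 1.0 / (1.0 + np.exp((2 * y - 1) * z))        # logistic loss factor
    grad_risk = -(X.T @ ((2 * y - 1) * s)) / n
    grad_kl = w / sigma0**2                          # d/dw of the KL term
    return grad_risk + lam * grad_kl

# Plain gradient descent on the KL-regularized objective.
w = np.zeros(d)
for _ in range(500):
    w -= 0.5 * gradient(w)

# "Compressed" model: one draw from the kernel around the trained model.
w_hat = w + sigma * rng.normal(size=d)
print(f"risk(W)={empirical_risk(w):.3f}  "
      f"risk(W_hat)={empirical_risk(w_hat):.3f}  "
      f"KL={kl_gaussian(w):.3f}")
```

In this sketch the single KL term plays the double role the abstract attributes to the divergence constraint: it keeps the trained model close to the shared prior (which, in the paper's setting, is what bounds the average communication cost of sending $\widehat{W}$) while also acting as a regularizer that controls the generalization error.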