TH3.R3.1

VALID: a Validated Algorithm for Learning in Decentralized Networks with Possible Adversarial Presence

Mayank Bakshi, Arizona State University, United States; Sara Ghasvarianjahromi, Yauhen Yakimenka, New Jersey Institute of Technology, United States; Allison Beemer, University of Wisconsin-Eau Claire, United States; Oliver Kosut, Arizona State University, United States; Joerg Kliewer, New Jersey Institute of Technology, United States

Session:
Secure Federated Learning

Track:
15: Distributed and Federated Learning

Location:
Ypsilon IV-V-VI

Presentation Time:
Thu, 11 Jul, 14:35 - 14:55

Session Chair:
Namrata Vaswani, Iowa State University

Abstract
We introduce the paradigm of "validated decentralized learning" for undirected networks with heterogeneous data and possible adversarial infiltration. We require (a) convergence to a global empirical loss minimizer when adversaries are absent, and (b) either detection of adversarial presence or convergence to an admissible consensus model in their presence. This contrasts sharply with the traditional Byzantine-robustness requirement of convergence to an admissible consensus irrespective of the adversarial configuration. To this end, we propose the VALID protocol which, to the best of our knowledge, is the first to achieve a validated learning guarantee. Moreover, VALID offers an O(1/T) convergence rate (under pertinent regularity assumptions), and computational and communication complexities comparable to non-adversarial distributed stochastic gradient descent. Remarkably, VALID retains optimal performance metrics in adversary-free environments, sidestepping the robustness penalties observed in prior Byzantine-robust methods. A distinctive aspect of our study is a heterogeneity metric based on the norms of individual agents' gradients computed at the global empirical loss minimizer. This not only provides a natural statistic for detecting significant Byzantine disruptions but also allows us to prove the optimality of VALID in wide generality. Lastly, our numerical results reveal that, in the absence of adversaries, VALID converges faster than state-of-the-art Byzantine-robust algorithms, while in their presence, VALID terminates with each honest agent either converging to an admissible consensus or declaring adversarial presence in the network.
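
To make the heterogeneity metric concrete, the following minimal Python sketch (not taken from the paper) illustrates the idea of measuring each agent's local gradient norm at the global empirical loss minimizer and flagging a reported gradient whose norm exceeds an assumed heterogeneity bound. The quadratic local losses, the bound zeta, and the corrupted agent are hypothetical choices made only for illustration.

# Minimal sketch, not the authors' implementation: illustrates the heterogeneity
# metric from the abstract (norms of individual agents' gradients evaluated at
# the global empirical loss minimizer) and how exceeding an assumed bound could
# serve as a detection statistic.  All problem data here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: n agents, agent i holds a local least-squares loss
# f_i(w) = ||A_i w - b_i||^2 / (2 m) with heterogeneous data.
n_agents, m, d = 8, 50, 5
A = [rng.normal(size=(m, d)) for _ in range(n_agents)]
b = [Ai @ rng.normal(size=d) + 0.1 * rng.normal(size=m) for Ai in A]

def local_grad(Ai, bi, w):
    """Gradient of the local loss f_i at the point w."""
    return Ai.T @ (Ai @ w - bi) / len(bi)

# Global empirical loss minimizer of (1/n) sum_i f_i, in closed form here.
H = sum(Ai.T @ Ai for Ai in A)
c = sum(Ai.T @ bi for Ai, bi in zip(A, b))
w_star = np.linalg.solve(H, c)

# Heterogeneity metric: each honest agent's gradient norm at w_star.
honest_norms = np.array([np.linalg.norm(local_grad(Ai, bi, w_star))
                         for Ai, bi in zip(A, b)])
print("honest gradient norms at w*:", np.round(honest_norms, 3))

# Detection idea: given an assumed bound zeta on honest heterogeneity,
# a reported gradient whose norm exceeds zeta is evidence of disruption.
zeta = 1.1 * honest_norms.max()                 # hypothetical bound
reported = [local_grad(Ai, bi, w_star) for Ai, bi in zip(A, b)]
reported[3] = reported[3] + 5.0 * zeta * rng.normal(size=d)  # corrupted report
flags = [i for i, gi in enumerate(reported) if np.linalg.norm(gi) > zeta]
print("flagged agents:", flags)                 # expected to contain agent 3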