TH3.R3.4

Robust Distributed Gradient Descent to Corruption over Noisy Channels

Shuche Wang, Vincent Y. F. Tan, National University of Singapore, Singapore

Session:
Secure Federated Learning

Track:
15: Distributed and Federated Learning

Location:
Ypsilon IV-V-VI

Presentation Time:
Thu, 11 Jul, 15:35 - 15:55

Session Chair:
Namrata Vaswani, Iowa State University
Abstract
Distributed gradient descent has attracted attention in modern machine learning, especially for handling large datasets. However, less attention has been paid to the setting in which the partial gradient computed by each worker is subject to adversarial corruption rather than random noise. In this paper, we examine the challenges of this adversarial setting and propose a distributed gradient descent algorithm that is robust both to adversarial corruption and to noise incurred during model transmission. Furthermore, we derive bounds on the error rates for both non-strongly convex and strongly convex loss functions.
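The setting described in the abstract can be illustrated with a minimal sketch. This is not the authors' algorithm (the abstract does not specify it); it is a generic robust distributed gradient descent loop under assumed details: a strongly convex quadratic loss, additive channel noise on each worker's partial gradient, a minority of adversarial workers, and coordinate-wise median as one standard robust aggregator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy problem (not from the paper): minimize the strongly convex
# loss f(x) = 0.5 * ||x - x_star||^2 with m workers, b of them adversarial.
d, m, b = 5, 10, 2
x_star = np.ones(d)

def worker_gradients(x):
    """Each honest worker returns grad f(x) = x - x_star plus channel
    noise; the first b workers send arbitrary (corrupted) vectors."""
    grads = [(x - x_star) + 0.01 * rng.standard_normal(d) for _ in range(m)]
    for i in range(b):
        grads[i] = 100.0 * rng.standard_normal(d)  # adversarial corruption
    return np.array(grads)

def robust_aggregate(grads):
    """Coordinate-wise median: tolerates a minority of corrupted
    gradients, unlike the plain mean."""
    return np.median(grads, axis=0)

x = np.zeros(d)
eta = 0.5  # step size
for _ in range(100):
    x = x - eta * robust_aggregate(worker_gradients(x))
```

With a naive mean in place of the median, the two corrupted workers would dominate the update and the iterates would diverge; the median keeps the aggregate close to the honest gradient, so the iterates converge to a neighborhood of `x_star` whose size is governed by the channel noise.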