TH1.R3.3

Federated Learning for Heterogeneous Bandits with Unobserved Contexts

Jiabin Lin, Shana Moothedath, Iowa State University, United States

Session:
Multi-Armed Bandits 1

Track:
8: Bandits

Location:
Ypsilon IV-V-VI

Presentation Time:
Thu, 11 Jul, 10:25 - 10:45

Session Chair:
Ali Tajer, Rensselaer Polytechnic Institute

Abstract
We study federated stochastic multi-armed contextual bandits with unknown contexts, in which M agents face M different bandit problems and collaborate to learn. The communication model consists of a central server and M agents; the agents periodically share their estimates with the central server in order to learn to choose optimal actions that minimize the total regret. We assume that the exact contexts are not observable and that the agents observe only a distribution over contexts. Such a situation arises, for instance, when the context itself is a noisy measurement or is produced by a prediction mechanism. Our goal is to develop a distributed and federated algorithm that facilitates collaborative learning among the agents to select a sequence of optimal actions that maximizes the cumulative reward. By performing a feature vector transformation, we propose an elimination-based algorithm and prove a regret bound for linearly parametrized reward functions. Finally, we validate the performance of our algorithm and compare it with a baseline approach using numerical simulations on synthetic data and on the real-world MovieLens dataset.
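The feature vector transformation described above can be illustrated with a minimal single-agent sketch: since the context is unobserved but its distribution is known, the learner replaces the (unavailable) realized feature vector with its expectation under the context distribution, and then runs a standard linear bandit update on the transformed features. All names, dimensions, and the LinUCB-style arm selection below are illustrative assumptions; the paper itself uses an elimination-based, multi-agent federated algorithm, which this sketch does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(0)

d, K, n_ctx = 4, 5, 3                       # feature dim, arms, context support (illustrative)
theta_star = rng.normal(size=d)             # unknown reward parameter (simulation only)
features = rng.normal(size=(n_ctx, K, d))   # hypothetical feature map phi(c, a)

lam = 1.0
V = lam * np.eye(d)                         # regularized Gram matrix
b = np.zeros(d)

T = 500
for t in range(T):
    p = rng.dirichlet(np.ones(n_ctx))       # observed context distribution (not the context)
    # Feature transformation: expected features x_a = E_{c ~ p}[phi(c, a)]
    x = np.einsum("c,cad->ad", p, features)
    theta_hat = np.linalg.solve(V, b)       # regularized least-squares estimate
    # Optimistic arm choice (LinUCB-style stand-in for the paper's elimination step)
    Vinv = np.linalg.inv(V)
    ucb = x @ theta_hat + 1.0 * np.sqrt(np.einsum("ad,de,ae->a", x, Vinv, x))
    a = int(np.argmax(ucb))
    c = rng.choice(n_ctx, p=p)              # latent context realization, never revealed
    r = features[c, a] @ theta_star + 0.1 * rng.normal()
    # Update statistics with the *transformed* features, since phi(c, a) is unseen
    V += np.outer(x[a], x[a])
    b += r * x[a]
```

The key point of the transformation is that the expected feature vector remains an unbiased predictor of the reward, E[r | a] = x_a·θ*, so ordinary linear bandit machinery applies even though the realized context is never observed.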