MO2.R4.2

Reinforcement Learning for Near-Optimal Design of Zero-Delay Codes for Markov Sources

Liam Cregg, Tamás Linder, Serdar Yüksel, Queen's University, Canada

Session:
Lossy Compression Applications

Track:
10: Source Coding and Data Compression

Location:
Omikron II

Presentation Time:
Mon, 8 Jul, 12:10 - 12:30

Session Chair:
Nir Weinberger, Technion - Israel Institute of Technology

Abstract
In the classical lossy source coding problem, one encodes long blocks of source symbols, which enables the distortion to approach the ultimate Shannon limit. This approach is undesirable in many delay-sensitive applications. We consider the zero-delay case, where the goal is to encode and decode a finite-alphabet Markov source without any delay. It has been shown that this problem lends itself to stochastic control techniques, which lead to existence, structural, and approximation results. However, these techniques have so far yielded only computationally prohibitive algorithms for code design. We present a practical reinforcement learning design algorithm and rigorously prove its asymptotic optimality. In particular, we show that a quantized Q-learning algorithm can be used to obtain a near-optimal coding policy for this problem. The proof builds on recent results on quantized Q-learning for weak Feller controlled Markov chains, whose application necessitates the development of supporting technical results on regularity and stability properties, and on relating the solutions of the discounted and average cost problems. These theoretical results are supported by simulations.
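To illustrate the quantized Q-learning idea mentioned in the abstract, here is a minimal sketch in Python. It is not the paper's construction: the dynamics, cost, quantizer resolution, and all hyperparameters below are hypothetical stand-ins. The sketch shows the generic pattern the abstract refers to, namely discretizing a continuous (belief-like) state of a weak Feller controlled Markov chain with a finite quantizer and running standard Q-learning on the resulting finite model.

```python
import numpy as np

# A minimal sketch of quantized Q-learning, assuming a toy controlled Markov
# chain whose state is a scalar "belief" in [0, 1]. Dynamics, costs, and all
# parameters here are illustrative assumptions, not the paper's construction.

rng = np.random.default_rng(0)

N_BINS = 20      # resolution of the state quantizer (assumption)
N_ACTIONS = 2    # e.g., two candidate encoder maps (assumption)
GAMMA = 0.95     # discount factor for the discounted-cost criterion
ALPHA = 0.1      # learning rate
EPS = 0.1        # exploration probability
STEPS = 50_000

def step(x, a):
    """Hypothetical weak-Feller-type dynamics: the next state depends
    continuously on (x, a) plus noise; the cost is a distortion-like error."""
    noise = rng.normal(0.0, 0.05)
    x_next = np.clip(0.8 * x + 0.2 * a + noise, 0.0, 1.0)
    cost = (x - a) ** 2  # stand-in for per-symbol distortion
    return x_next, cost

def quantize(x):
    """Map the continuous state to one of N_BINS cells (uniform grid)."""
    return min(int(x * N_BINS), N_BINS - 1)

Q = np.zeros((N_BINS, N_ACTIONS))
x = rng.uniform()

for t in range(STEPS):
    s = quantize(x)
    # epsilon-greedy exploration over the finite action set
    a = rng.integers(N_ACTIONS) if rng.uniform() < EPS else int(np.argmin(Q[s]))
    x_next, c = step(x, float(a))
    s_next = quantize(x_next)
    # standard Q-learning update on the quantized state space;
    # costs are minimized, so the target takes a min over actions
    Q[s, a] += ALPHA * (c + GAMMA * Q[s_next].min() - Q[s, a])
    x = x_next

policy = Q.argmin(axis=1)  # greedy policy on the quantized cells
print("greedy action per quantization cell:", policy)
```

In the zero-delay coding problem the role of x is played by the encoder's conditional distribution (belief) over the source state, and the actions index quantizers; the paper's contribution is proving that this kind of quantized Q-learning converges to a near-optimal coding policy under the average cost criterion.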