Paper Detail

Paper ID: F-3-3.7
Paper Title: Processing Element Architecture Design for Deep Reinforcement Learning with Flexible Block Floating Point Exploiting Signal Statistics
Authors: Juyn-Da Su, Pei-Yun Tsai, National Central University, Taiwan
Session F-3-3: Signal Processing Systems for AI
Time: Thursday, 10 December, 17:30 - 19:30
Presentation Time: Thursday, 10 December, 19:00 - 19:15
All times are in New Zealand Time (UTC +13)
Topic: Signal Processing Systems: Design and Implementation (SPS)
Abstract: Deep reinforcement learning (DRL) is a technique that gives an agent evolving learning capability in unknown environments and thus has the potential to surpass human expertise. This paper presents a hardware architecture for DRL that supports on-line Q-learning and on-line training. Two processing element (PE) arrays handle the evaluation network and the target network, respectively. By configuring the PEs in two operation modes, all required forward and backward computations can be accomplished, and the number of processing cycles can be derived. Because of the precision required for on-line Q-learning and training, we propose flexible block floating-point (FBFP) to reduce the overhead of floating-point adders. FBFP exploits the different signal statistics observed during the learning process. Furthermore, the block exponents of the gradients are adjusted according to the variation of the temporal-difference (TD) error to preserve resolution. Simulation results show that the FBFP multiplier-and-accumulator (MAC) reduces complexity by 15.8% compared to the floating-point MAC while maintaining good learning performance.
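
The abstract rests on the block floating-point idea: all values in a block share one exponent while each element keeps only a fixed-point mantissa, so a multiply-and-accumulate reduces to integer multiplies and an integer accumulation, with a single exponent adjustment at the end. The Python sketch below illustrates this generic mechanism only; the mantissa width, rounding scheme, and function names are illustrative assumptions, and the paper's FBFP refinements (exploiting signal statistics and steering the gradient block exponents by the TD error) are not modeled.

```python
import numpy as np

def to_bfp(x, mantissa_bits=8):
    """Quantize a vector to block floating-point: one shared exponent for the
    whole block, signed fixed-point mantissas per element.
    (Illustrative sketch; the paper's FBFP format and bit widths differ.)"""
    max_abs = np.max(np.abs(x))
    if max_abs == 0:
        return np.zeros_like(x, dtype=np.int32), 0
    # Choose the shared block exponent so the largest element fits the mantissa range.
    block_exp = int(np.ceil(np.log2(max_abs))) - (mantissa_bits - 1)
    mantissas = np.clip(np.round(x / 2.0 ** block_exp),
                        -(2 ** (mantissa_bits - 1)),
                        2 ** (mantissa_bits - 1) - 1).astype(np.int32)
    return mantissas, block_exp

def bfp_mac(w, a, mantissa_bits=8):
    """Dot product of two BFP blocks: integer multiply-and-accumulate,
    then one floating-point scaling by the combined block exponents."""
    wm, we = to_bfp(w, mantissa_bits)
    am, ae = to_bfp(a, mantissa_bits)
    acc = np.sum(wm.astype(np.int64) * am.astype(np.int64))  # fixed-point accumulation
    return float(acc) * 2.0 ** (we + ae)

# Example: compare the BFP dot product against the floating-point reference.
rng = np.random.default_rng(0)
w = rng.normal(size=64)
a = rng.normal(size=64)
print(bfp_mac(w, a), np.dot(w, a))
```

Running the example prints the BFP dot product next to the double-precision reference, which makes the quantization error introduced by the shared block exponent easy to inspect for a given mantissa width.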