MO4.R2.4

Robust VAEs via Generating Process of Noise Augmented Data

Hiroo Irobe, Wataru Aoki, Tokyo Institute of Technology, Japan; Kimihiro Yamazaki, Fujitsu, Japan; Yuhui Zhang, Tokyo Institute of Technology, Japan; Takumi Nakagawa, Tokyo Institute of Technology, RIKEN AIP, Japan; Hiroki Waida, Tokyo Institute of Technology, Japan; Yuichiro Wada, Fujitsu, RIKEN AIP, Japan; Takafumi Kanamori, Tokyo Institute of Technology, RIKEN AIP, Japan

Session:
Topics in Machine Learning 2

Track:
8: Machine Learning

Location:
Ypsilon I-II-III

Presentation Time:
Mon, 8 Jul, 17:25 - 17:45

Session Chair:
Lalitha Sankar, Arizona State University

Abstract
Advancing defensive mechanisms against adversarial attacks in generative models is a critical research topic in machine learning. Our study focuses on a specific type of generative model: Variational Auto-Encoders (VAEs). Contrary to common belief and existing literature, which suggest that injecting noise into training data can make models more robust, our preliminary experiments revealed that naive use of noise augmentation did not substantially improve VAE robustness. In fact, it even degraded the quality of learned representations, making VAEs more susceptible to adversarial perturbations. This paper introduces a novel framework that enhances robustness by regularizing the latent-space divergence between original and noise-augmented data. By incorporating a paired probabilistic prior into the standard variational lower bound, our method significantly boosts defense against adversarial attacks. Our empirical evaluations demonstrate that this approach, termed Robust Augmented Variational Auto-ENcoder (RAVEN), yields superior performance in resisting adversarial inputs on widely recognized benchmark datasets.
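To illustrate the idea of regularizing the latent-space divergence between original and noise-augmented inputs, here is a minimal NumPy sketch. It is not the authors' exact RAVEN objective: the function names, the use of a symmetric-free KL penalty between the two encoder posteriors, and the weight `lam` are all illustrative assumptions layered on top of the standard negative ELBO.

```python
import numpy as np

def kl_diag_gaussians(mu1, logvar1, mu2, logvar2):
    """KL( N(mu1, diag(exp(logvar1))) || N(mu2, diag(exp(logvar2))) )
    for diagonal Gaussians, in closed form."""
    var1, var2 = np.exp(logvar1), np.exp(logvar2)
    return 0.5 * np.sum(logvar2 - logvar1 + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

def robust_vae_objective(recon_nll, mu, logvar, mu_noisy, logvar_noisy, lam=1.0):
    """Standard negative ELBO plus a latent-consistency penalty (hypothetical form).

    recon_nll            : reconstruction negative log-likelihood (scalar)
    (mu, logvar)         : encoder posterior q(z|x) for the clean input
    (mu_noisy, logvar_noisy) : encoder posterior for the noise-augmented input
    lam                  : weight on the clean/noisy posterior divergence
    """
    # Usual KL(q(z|x) || p(z)) term against a standard-normal prior p(z)
    kl_prior = kl_diag_gaussians(mu, logvar,
                                 np.zeros_like(mu), np.zeros_like(logvar))
    # Extra term: pull the noisy-input posterior toward the clean-input posterior
    kl_pair = kl_diag_gaussians(mu, logvar, mu_noisy, logvar_noisy)
    return recon_nll + kl_prior + lam * kl_pair
```

When the two posteriors coincide, the penalty vanishes and the objective reduces to the ordinary negative ELBO; as they drift apart under input noise, the penalty grows, which is the qualitative behavior the abstract describes.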