Technical Program

Paper Detail

Paper ID: F-2-1.1
Paper Title: Quasi-Newton Adversarial Attacks on Speaker Verification Systems
Authors: Keita Goto, Nakamasa Inoue, Tokyo Institute of Technology, Japan
Session F-2-1: Speaker Recognition 1, Language Recognition
Time: Wednesday, 09 December, 12:30 - 14:00
Presentation Time: Wednesday, 09 December, 12:30 - 12:45
All times are in New Zealand Time (UTC +13)
Topic: Speech, Language, and Audio (SLA)
Abstract: This paper proposes a framework for generating adversarial utterances for speaker verification systems. Our main idea is to formulate an optimization problem to generate adversarial utterances that fool speaker verification models, and to solve it with a second-order optimization method. We first present our algorithm, which uses the first-order Gauss-Newton method, and then extend it to second-order Quasi-Newton methods. Our experiments on the VoxCeleb 1 dataset show that the proposed method can fool a speaker verification system with smaller perturbations than conventional methods require. We also show that second-order optimization methods are effective for finding small perturbations.
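The core idea in the abstract — cast the attack as an optimization problem over a perturbation and solve it with a quasi-Newton method — can be illustrated with a minimal sketch. This is not the authors' implementation: the linear "verification scorer", the threshold margin, and the penalty weight below are all invented stand-ins, and BFGS (via SciPy) is used as a generic quasi-Newton solver. The objective trades off pushing the verification score below a threshold against keeping the perturbation small.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy stand-in for a speaker-verification scorer: cosine similarity between
# a fixed "enrolled" embedding and a linear embedding of the input signal.
# (The paper attacks a neural verification model; this linear map is purely
# illustrative.)
W = rng.standard_normal((8, 64)) / 8.0
enrolled = rng.standard_normal(8)
enrolled /= np.linalg.norm(enrolled)

def score(x):
    """Similarity between the input's embedding and the enrolled speaker."""
    e = W @ x
    return float(e @ enrolled / (np.linalg.norm(e) + 1e-9))

x0 = rng.standard_normal(64)   # "utterance" currently accepted by the system
tau = score(x0) - 0.5          # attack goal: drive the score below tau
lam = 1e-2                     # invented weight on perturbation energy

def objective(delta):
    # Squared hinge on the score above the threshold, plus an L2 penalty
    # that keeps the adversarial perturbation small.
    margin = max(score(x0 + delta) - tau, 0.0)
    return margin ** 2 + lam * float(delta @ delta)

# BFGS is a quasi-Newton method: it builds a second-order (curvature) model
# from first-order gradient differences, here estimated by finite differences.
res = minimize(objective, np.zeros(64), method="BFGS")
delta = res.x

print("score before:", score(x0))
print("score after: ", score(x0 + delta))
print("||delta||:   ", np.linalg.norm(delta))
```

The quasi-Newton curvature estimate is what lets the solver take well-scaled steps toward the threshold with few iterations, which is the abstract's argument for why second-order information helps find small perturbations.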