Technical Program

Paper Detail

Paper ID B-2-3.2
Paper Title DETECTION OF CLONED RECOGNIZERS: A DEFENDING METHOD AGAINST RECOGNIZER CLONING ATTACK
Authors Yuto Mori, Kazuaki Nakamura, Naoko Nitta, Noboru Babaguchi, Osaka University, Japan
Session B-2-3: Deep Generative Models for Media Clones and Its Detection
Time Wednesday, 09 December, 17:15 - 19:15
Presentation Time: Wednesday, 09 December, 17:30 - 17:45
All times are in New Zealand Time (UTC +13)
Topic Multimedia Security and Forensics (MSF): Special Session: Deep Generative Models for Media Clones and Its Detection
Abstract With the development of machine learning technologies and the spread of mobile terminals, cloud-based image recognition services are becoming popular. However, these services might suffer from a new type of attack called a “recognizer cloning attack” (RCA), in which an attacker sends many images to a recognition server and receives their recognition results to train a new recognizer that mimics the function of the server’s original recognizer. We refer to the recognizers trained by RCA as “cloned recognizers” (CRs). CRs allow attackers to analyze the weaknesses of the original recognizer and cause serious damage to the providers of the original service. To defend against RCA, in this paper we propose a method for detecting CRs. Our proposed method receives two recognizers as input and determines whether one of them is a CR of the other. We experimentally analyzed the properties of CRs and obtained the following two findings. First, a CR and its original recognizer have almost the same recognition boundary. Second, a CR provides almost the same or a considerably higher confidence score than its original recognizer. Using these properties as clues, the proposed method was able to detect CRs with an accuracy of more than 80% in our experiments.
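To illustrate the attack setting the abstract describes, the following is a minimal, hypothetical sketch of an RCA: a "server" recognizer is queried many times, the returned labels are collected, and a substitute model is fit to the query/label pairs. The toy linear recognizer, the threshold search, and all names here are illustrative stand-ins, not the paper's actual models or method.

```python
import random

# Hypothetical "server" recognizer: labels a 2-D point positive if x + y > 1.
# (Illustrative stand-in for a cloud image-recognition API.)
def original_recognizer(x, y):
    return 1 if x + y > 1.0 else 0

# RCA sketch: query the server on many inputs and record its answers.
random.seed(0)
queries = [(random.random() * 2, random.random() * 2) for _ in range(500)]
labels = [original_recognizer(x, y) for x, y in queries]

# "Train" a trivial clone of the form x + y > t by picking the threshold t
# that best reproduces the collected labels (stand-in for real training).
best_t, best_acc = 0.0, 0.0
for i in range(201):
    t = i * 0.02  # candidate thresholds in [0, 4]
    acc = sum((1 if x + y > t else 0) == lab
              for (x, y), lab in zip(queries, labels)) / len(queries)
    if acc > best_acc:
        best_t, best_acc = t, acc

def cloned_recognizer(x, y):
    return 1 if x + y > best_t else 0

# On fresh inputs the clone agrees with the original almost everywhere,
# i.e. it shares nearly the same recognition boundary -- the kind of
# property the paper's detector exploits.
test_points = [(random.random() * 2, random.random() * 2) for _ in range(500)]
agreement = sum(original_recognizer(x, y) == cloned_recognizer(x, y)
                for x, y in test_points) / len(test_points)
print(f"clone/original agreement: {agreement:.2f}")
```

Even this crude query-and-fit loop recovers a near-identical decision boundary, which is why the paper can compare two recognizers' boundaries and confidence scores to flag one as a clone of the other.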