Technical Program

Paper Detail

Paper ID D-1-1.4
Paper Title UNSUPERVISED DOMAIN ADVERSARIAL TRAINING IN ANGULAR SPACE FOR FACIAL EXPRESSION RECOGNITION
Authors Akihiko Takashima, Naoki Makishima, Mana Ihori, Tomohiro Tanaka, Shota Orihashi, Ryo Masumura, NTT Corporation, Japan
Session D-1-1: Image/Video Recognition
Time Tuesday, 08 December, 12:30 - 14:00
Presentation Time Tuesday, 08 December, 13:15 - 13:30
All times are in New Zealand Time (UTC +13)
Topic Image, Video, and Multimedia (IVM)
Abstract This paper presents unsupervised domain adversarial training in angular space (UDAT-AS), a novel unsupervised domain adversarial training method for facial expression recognition (FER). Unsupervised domain adversarial training (UDAT) is effective because it can adapt existing neural-network-based classification models to the target domain using only unlabeled data sets. It is realized by forming a domain adversarial network consisting of a domain classifier and a gradient reversal layer. UDAT reduces the domain dependency of neural-network-based classification models by making them insensitive to domain labels. However, conventional unsupervised domain adversarial training is not suitable for FER because facial expressions strongly depend on the domain of the training data sets, e.g., race, gender, shooting environment, and pose. In order to learn domain invariance more clearly, our key advance in UDAT-AS is to perform unsupervised domain adversarial training with angular softmax loss. UDAT-AS is an extension of the domain adversarial network; its domain classifier uses angular softmax loss, a commonly utilized metric learning technique. This enables us to efficiently reduce domain bias in FER models and allows the effective use of unlabeled target domain data sets. We evaluate our approach on data sets built with two different collection methods, and demonstrate that our method outperforms conventional alternatives.
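The abstract's two building blocks, a gradient reversal layer and angular softmax logits, can be sketched in a few lines. The following NumPy snippet is a minimal illustration of these generic techniques, not the authors' implementation: `lam` (the reversal strength) and `m` (the angular margin, applied here to all classes for simplicity rather than only the target class as in full A-Softmax) are hypothetical names chosen for this sketch.

```python
import numpy as np

def grad_reverse_forward(x):
    # Gradient reversal layer: acts as the identity in the forward pass.
    return x

def grad_reverse_backward(grad_output, lam=1.0):
    # In the backward pass the gradient is scaled by -lambda, so the
    # feature extractor is updated to FOOL the domain classifier,
    # pushing features toward domain invariance.
    return -lam * grad_output

def angular_softmax_logits(features, weights, m=2):
    # Angular softmax logits: features and class-weight columns are
    # L2-normalized, so each logit depends only on the angle theta
    # between a feature and a class direction; the margin m sharpens
    # the angular decision boundary via cos(m * theta).
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=0, keepdims=True)
    cos_theta = np.clip(f @ w, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    # Rescale by the original feature norm, as in angular-margin losses.
    return np.linalg.norm(features, axis=1, keepdims=True) * np.cos(m * theta)
```

In a full domain adversarial network, the reversed gradient from the domain classifier (here, one trained with the angular logits above) would flow back into the shared feature extractor alongside the expression-classification loss.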