Technical Program

Paper Detail

Paper ID: E-2-1.5
Paper Title: VAW-GAN FOR SINGING VOICE CONVERSION WITH NON-PARALLEL TRAINING DATA
Authors: Junchen Lu, Kun Zhou, National University of Singapore, Singapore; Berrak Sisman, Singapore University of Technology and Design, Singapore; Haizhou Li, National University of Singapore, Singapore
Session: E-2-1: Music Information Processing 2, Voice Conversion
Time: Wednesday, 09 December, 12:30 - 14:00
Presentation Time: Wednesday, 09 December, 13:30 - 13:45
All times are in New Zealand Time (UTC +13)
Topic: Speech, Language, and Audio (SLA)
Abstract: Singing voice conversion aims to convert a singer's voice from a source to a target singer without changing the singing content. Parallel training data is typically required to train a singing voice conversion system, which is, however, not practical in real-life applications. Recent encoder-decoder structures, such as the variational autoencoding Wasserstein generative adversarial network (VAW-GAN), provide an effective way to learn a mapping from non-parallel training data. In this paper, we propose a singing voice conversion framework based on VAW-GAN. We train an encoder to disentangle singer identity and singing prosody (F0 contour) from phonetic content. By conditioning on singer identity and F0, the decoder generates output spectral features for an unseen target singer identity and improves the F0 rendering. Experimental results show that the proposed framework outperforms the baseline frameworks.
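The abstract describes an encoder that strips singer identity and F0 from the spectral features, and a decoder conditioned on a singer identity code and F0 to render the converted voice. A minimal sketch of that conditioned encode/decode flow is shown below, using plain numpy with hypothetical dimensions and random weights in place of trained VAW-GAN parameters (all names, sizes, and the linear layers are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

# Hypothetical dimensions; the paper does not specify these values.
N_SPEC = 40     # spectral feature dimension (e.g. mel-cepstral coefficients)
N_Z = 16        # latent (phonetic content) dimension
N_SINGERS = 4   # number of singers seen in training

rng = np.random.default_rng(0)

def encode(spec, W_enc):
    # Encoder: maps a spectral frame to a singer-independent latent code z.
    return np.tanh(W_enc @ spec)

def decode(z, singer_id, f0, W_dec):
    # Decoder: conditions on the latent code, a one-hot singer identity,
    # and the (normalized) F0 value to reconstruct a spectral frame.
    one_hot = np.eye(N_SINGERS)[singer_id]
    cond = np.concatenate([z, one_hot, [f0]])
    return W_dec @ cond

# Random weights stand in for trained encoder/decoder parameters.
W_enc = rng.standard_normal((N_Z, N_SPEC))
W_dec = rng.standard_normal((N_SPEC, N_Z + N_SINGERS + 1))

# Conversion: encode a source frame, keep the content code z,
# and decode with the target singer's identity and F0 instead.
src_frame = rng.standard_normal(N_SPEC)
z = encode(src_frame, W_enc)
converted = decode(z, singer_id=2, f0=0.5, W_dec=W_dec)
```

Because the singer identity enters only as a decoder condition, conversion amounts to swapping the identity (and F0) fed to the decoder while the content code is left untouched, which is what removes the need for parallel source-target recordings.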