Technical Program

Paper Detail

Paper IDE-3-1.3
Paper Title Multi-channel Speech Separation Using Deep Embedding With Multilayer Bootstrap Networks
Authors Ziye Yang, Xiao-Lei Zhang, Zhonghua Fu, Northwestern Polytechnical University, China
Session E-3-1: Speech Separation 1
Time: Thursday, 10 December, 12:30 - 14:00
Presentation Time: Thursday, 10 December, 13:00 - 13:15
All times are in New Zealand Time (UTC +13)
Topic: Speech, Language, and Audio (SLA)
Abstract Recently, deep clustering (DPCL) based speaker-independent speech separation has drawn much attention, since it needs little speaker prior information. However, it still has much room for improvement, particularly in reverberant environments. When the training and test environments mismatch, which is a common case, the embedding vectors produced by DPCL may contain much noise and many small variations. To deal with this problem, we propose a variant of DPCL, named MDPCL, which applies a recent unsupervised deep learning method, multilayer bootstrap networks (MBN), to further reduce the noise and small variations of the embedding vectors in an unsupervised way at the test stage, which facilitates k-means in producing a good result. MBN builds a gradually narrowed network from the bottom up via a stack of k-centroids clustering ensembles, where the k-centroids clusterings are trained independently by random sampling and one-nearest-neighbor optimization. To further improve the robustness of MDPCL in reverberant environments, we take spatial features as part of its input. Experimental results demonstrate the effectiveness of the proposed method.
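The MBN construction described in the abstract — a stack of k-centroids clustering ensembles, each trained by random sampling of centroids and one-nearest-neighbor assignment, with k shrinking layer by layer — can be sketched as follows. This is a minimal illustration under simplifying assumptions (Euclidean one-nearest-neighbor assignment, one-hot indicator outputs); the function names and parameter values are illustrative, not the authors' implementation.

```python
import numpy as np

def mbn_layer(X, k, n_clusterings, rng):
    """One MBN layer: an ensemble of independently trained k-centroids
    clusterings. Each clustering draws k random samples as centroids and
    encodes every point by one-nearest-neighbor as a one-hot indicator."""
    codes = []
    for _ in range(n_clusterings):
        # random sampling: pick k data points as the centroids
        centroids = X[rng.choice(len(X), size=k, replace=False)]
        # squared Euclidean distance from every point to each centroid
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        # one-nearest-neighbor assignment -> sparse one-hot code
        codes.append(np.eye(k)[d.argmin(axis=1)])
    # concatenate the ensemble's indicator vectors as the layer output
    return np.hstack(codes)

def mbn(X, ks=(64, 32, 16), n_clusterings=20, seed=0):
    """Stack layers with gradually decreasing k: a network that narrows
    from the bottom up, as in the abstract's description of MBN."""
    rng = np.random.default_rng(seed)
    H = X
    for k in ks:
        H = mbn_layer(H, k, n_clusterings, rng)
    return H
```

In the paper's pipeline, the input X would be the DPCL embedding vectors at test time, and k-means would then be run on the MBN output rather than on the raw embeddings.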