Technical Program

Paper Detail

Paper IDF-3-3.8
Paper Title Acoustic Echo Cancellation Based on Recurrent Neural Network
Authors Yao Cheng Tsai, Kai Wen Liang, Pao Chi Chang, National Central University, Taiwan
Session F-3-3: Signal Processing Systems for AI
Time Thursday, 10 December, 17:30 - 19:30
Presentation Time Thursday, 10 December, 19:15 - 19:30
All times are in New Zealand Time (UTC +13)
Topic Signal Processing Systems: Design and Implementation (SPS)
Abstract This work proposes an acoustic echo cancellation method based on deep-learning speech separation techniques. Traditionally, acoustic echo cancellation (AEC) has used a linear adaptive filter to identify the acoustic impulse response between the loudspeaker and the microphone. However, when conventional methods encounter nonlinear conditions, the processing results are not good enough. Our approach exploits the advantages of deep-learning techniques, which are well suited to nonlinear processing. In the adopted recurrent neural network system, unlike traditional speech separation, we add single-talk features and assign a specific weight to each element. The experimental results show that our method improves the perceptual evaluation of speech quality (PESQ) score on simulated audio, as well as the echo return loss enhancement (ERLE) on recorded audio.
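The abstract contrasts the proposed RNN method with the traditional linear adaptive filter. As background, the conventional baseline can be sketched with a normalized least-mean-squares (NLMS) adaptive filter, which estimates the echo path from the far-end (loudspeaker) signal and subtracts the predicted echo from the microphone signal. This is a minimal illustrative sketch of that classical approach, not the authors' method; all function and parameter names (`nlms_aec`, `filter_len`, `mu`) are hypothetical.

```python
import numpy as np

def nlms_aec(far_end, mic, filter_len=64, mu=0.5, eps=1e-8):
    """Linear AEC baseline: NLMS adaptive filter (illustrative sketch).

    far_end: loudspeaker (reference) signal
    mic:     microphone signal = echo (+ possible near-end speech)
    Returns the error signal, i.e. the echo-suppressed output.
    """
    w = np.zeros(filter_len)      # adaptive FIR estimate of the echo path
    x_buf = np.zeros(filter_len)  # most recent far-end samples (newest first)
    e = np.zeros(len(mic))        # output / residual signal
    for n in range(len(mic)):
        x_buf = np.roll(x_buf, 1)
        x_buf[0] = far_end[n]
        y = w @ x_buf             # predicted echo at sample n
        e[n] = mic[n] - y         # cancel the predicted echo
        # NLMS weight update, normalized by input power
        w += mu * e[n] * x_buf / (x_buf @ x_buf + eps)
    return e
```

Because the filter is linear, it models only a linear echo path well; under nonlinear distortion (e.g. loudspeaker saturation), the residual echo remains large, which motivates the deep-learning approach described in the abstract.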