TU2.P1.5
Accurate delayed source model for multi-frame Full-rank Spatial Covariance Analysis
Shinya Furunaga, Waseda University, Japan; Hiroshi Sawada, Rintaro Ikeshita, Tomohiro Nakatani, NTT Corporation, Japan; Shoji Makino, Waseda University, Japan
Session:
TU2.P1: Poster Session III: Source separation, Acoustic event detection and scene classification (Poster)
Track:
Acoustic echo and feedback suppression
Location:
Nedre Foyer
Presentation Time:
Tue, 10 Sep, 15:30 - 17:30 Central European Time (UTC +1)
Session Chair:
Joerg Bitzer, Jade University of Applied Sciences
Resources:
No resources available.
Session TU2.P1
TU2.P1.1: AN EFFECTIVE MVDR POST-PROCESSING METHOD FOR LOW-LATENCY CONVOLUTIVE BLIND SOURCE SEPARATION
Jiawen Chua, Eigenspace GmbH, China; Longfei Yan, W. Bastiaan Kleijn, Victoria University of Wellington, New Zealand
TU2.P1.2: Split-Attention Mechanisms with Graph Convolutional Network for Multi-Channel Speech Separation
YingWei Tan, XueFeng Ding, Volkswagen-Mobvoi (Beijing) Information Technology Co., Ltd, China
TU2.P1.3: LOW-LATENCY SINGLE-MICROPHONE SPEAKER SEPARATION WITH TEMPORAL CONVOLUTIONAL NETWORKS USING SPEAKER REPRESENTATIONS
Boris Rubenchik, Elior Hadad, Eli Tzirkel, General Motors Technical Center Israel, Israel; Ethan Fetaya, Sharon Gannot, Bar-Ilan University, Israel
TU2.P1.4: ESTIMATION OF OUTPUT SI-SDR OF SPEECH SIGNALS SEPARATED FROM NOISY INPUT BY CONV-TASNET
Satoru Emura, Kyoto University of Advanced Science, Japan
TU2.P1.5: Accurate delayed source model for multi-frame Full-rank Spatial Covariance Analysis
Shinya Furunaga, Waseda University, Japan; Hiroshi Sawada, Rintaro Ikeshita, Tomohiro Nakatani, NTT Corporation, Japan; Shoji Makino, Waseda University, Japan
TU2.P1.6: EFFICIENT AREA-BASED AND SPEAKER-AGNOSTIC SOURCE SEPARATION
Martin Strauss, International Audio Laboratories Erlangen, Germany; Okan Köpüklü, Microsoft, Germany
TU2.P1.7: A Unified Approach to Speaker Separation and Target Speaker Extraction Using Encoder-Decoder Based Attractors
Srikanth Raj Chetupalli, Emanuël A. P. Habets, International Audio Laboratories, Germany
TU2.P1.8: TF-LOCOFORMER: TRANSFORMER WITH LOCAL MODELING BY CONVOLUTION FOR SPEECH SEPARATION AND ENHANCEMENT
Kohei Saijo, Gordon Wichern, François G. Germain, Zexu Pan, Jonathan Le Roux, Mitsubishi Electric Research Laboratories, United States of America
TU2.P1.9: INTERAURAL TIME DIFFERENCE LOSS FOR BINAURAL TARGET SOUND EXTRACTION
Carlos Hernandez Olivan, Marc Delcroix, Tsubasa Ochiai, Naohiro Tawara, Tomohiro Nakatani, Shoko Araki, NTT Corporation, Japan
TU2.P1.10: Robustness of Speech Separation Models for Similar-pitch Speakers
Bunlong Lay, Sebastian Zaczek, Kristina Tesch, Timo Gerkmann, Universität Hamburg, Germany
TU2.P1.12: Multi-label audio classification with a noisy zero-shot teacher
Sebastian Braun, Hannes Gamper, Microsoft Research, United States of America
TU2.P1.13: MULTI-LABEL ZERO-SHOT AUDIO CLASSIFICATION WITH TEMPORAL ATTENTION
Duygu Dogan, Huang Xie, Toni Heittola, Tuomas Virtanen, Tampere University, Finland