ICASSP 2020 Technical Program

TH2.I: Speech Separation and Extraction III

Session Type: Poster
Time: Thursday, 7 May, 11:30 - 13:30
Location: On-Demand
Session Chairs: Hakan Erdogan (Google) and Marc Delcroix (NTT)
 
   TH2.I.1: AN EMPIRICAL STUDY OF CONV-TASNET
         Berkan Kadioglu; Northeastern University
         Michael Horgan; Dolby Laboratories
         Xiaoyu Liu; Dolby Laboratories
         Jordi Pons; Dolby Laboratories
         Dan Darcy; Dolby Laboratories
         Vivek Kumar; Dolby Laboratories
 
   TH2.I.2: MASK-DEPENDENT PHASE ESTIMATION FOR MONAURAL SPEAKER SEPARATION
         Zhaoheng Ni; Graduate Center, City University of New York
         Michael I Mandel; Brooklyn College, City University of New York
 
   TH2.I.3: JOINT PHONEME ALIGNMENT AND TEXT-INFORMED SPEECH SEPARATION ON HIGHLY CORRUPTED SPEECH
         Kilian Schulze-Forster; LTCI, Télécom Paris, Institut Polytechnique de Paris
         Clement S. J. Doire; Audionamix
         Gaël Richard; LTCI, Télécom Paris, Institut Polytechnique de Paris
         Roland Badeau; LTCI, Télécom Paris, Institut Polytechnique de Paris
 
   TH2.I.4: SINGLE-CHANNEL SPEECH SEPARATION INTEGRATING PITCH INFORMATION BASED ON A MULTI TASK LEARNING FRAMEWORK
         Xiang Li; Peking University
         Rui Liu; Peking University
         Tao Song; Peking University
         Xihong Wu; Peking University
         Jing Chen; Peking University
 
   TH2.I.5: CONTINUOUS SPEECH SEPARATION: DATASET AND ANALYSIS
         Zhuo Chen; Microsoft
         Takuya Yoshioka; Microsoft
         Liang Lu; Microsoft
         Tianyan Zhou; Microsoft
         Zhong Meng; Microsoft
         Yi Luo; Microsoft
         Jian Wu; Microsoft
         Xiong Xiao; Microsoft
         Jinyu Li; Microsoft
 
   TH2.I.6: THE SOUND OF MY VOICE: SPEAKER REPRESENTATION LOSS FOR TARGET VOICE SEPARATION
         Seongkyu Mun; Naver Corporation
         Soyeon Choe; Naver Corporation
         Jaesung Huh; Naver Corporation
         Joon Son Chung; Naver Corporation
 
   TH2.I.7: SPEAKER-AWARE TARGET SPEAKER ENHANCEMENT BY JOINTLY LEARNING WITH SPEAKER EMBEDDING EXTRACTION
         Xuan Ji; Tencent
         Meng Yu; Tencent
         Chunlei Zhang; Tencent
         Dan Su; Tencent
         Tao Yu; Tencent
         Xiaoyu Liu; Tencent
         Dong Yu; Tencent
 
   TH2.I.8: FAR-FIELD LOCATION GUIDED TARGET SPEECH EXTRACTION USING END-TO-END SPEECH RECOGNITION OBJECTIVES
         Aswin Shanmugam Subramanian; Johns Hopkins University
         Chao Weng; Tencent AI Lab
         Meng Yu; Tencent AI Lab
         Shi-Xiong Zhang; Tencent AI Lab
         Yong Xu; Tencent AI Lab
         Shinji Watanabe; Johns Hopkins University
         Dong Yu; Tencent AI Lab
 
   TH2.I.9: A STUDY OF CHILD SPEECH EXTRACTION USING JOINT SPEECH ENHANCEMENT AND SEPARATION IN REALISTIC CONDITIONS
         Xin Wang; University of Science and Technology of China
         Jun Du; University of Science and Technology of China
         Alejandrina Cristia; Laboratoire de Sciences Cognitives et Psycholinguistique
         Lei Sun; University of Science and Technology of China
         Chin-Hui Lee; Georgia Institute of Technology
 
   TH2.I.10: AN ANALYSIS OF SPEECH ENHANCEMENT AND RECOGNITION LOSSES IN LIMITED RESOURCES MULTI-TALKER SINGLE CHANNEL AUDIO-VISUAL ASR
         Luca Pasa; University of Padova
         Giovanni Morrone; University of Modena and Reggio Emilia
         Leonardo Badino; Istituto Italiano di Tecnologia (IIT)
 
   TH2.I.11: DEEP AUDIO-VISUAL SPEECH SEPARATION WITH ATTENTION MECHANISM
         Chenda Li; Shanghai Jiao Tong University
         Yanmin Qian; Shanghai Jiao Tong University
 
   TH2.I.12: ENHANCING END-TO-END MULTI-CHANNEL SPEECH SEPARATION VIA SPATIAL FEATURE LEARNING
         Rongzhi Gu; Peking University Shenzhen Graduate School
         Shi-Xiong Zhang; Tencent AI Lab
         Lianwu Chen; Tencent
         Yong Xu; Tencent
         Meng Yu; Tencent
         Dan Su; Tencent
         Yuexian Zou; Peking University Shenzhen Graduate School
         Dong Yu; Tencent