
My ICASSP 2020 Schedule

FR2.I: Signal Enhancement and Restoration II

Session Type: Poster
Time: Friday, 8 May, 11:45 - 13:45
Location: On-Demand
Session Chair: Jesper Rindom Jensen, Aalborg University
 
   FR2.I.1: CONSISTENCY-AWARE MULTI-CHANNEL SPEECH ENHANCEMENT USING DEEP NEURAL NETWORKS
         Yoshiki Masuyama; Waseda University
         Masahito Togami; LINE Corporation
         Tatsuya Komatsu; LINE Corporation
 
   FR2.I.2: PHASE RECONSTRUCTION BASED ON RECURRENT PHASE UNWRAPPING WITH DEEP NEURAL NETWORKS
         Yoshiki Masuyama; Waseda University
         Kohei Yatabe; Waseda University
         Yuma Koizumi; NTT Corporation
         Yasuhiro Oikawa; Waseda University
         Noboru Harada; NTT Corporation
 
   FR2.I.3: PERFORMANCE STUDY OF A CONVOLUTIONAL TIME-DOMAIN AUDIO SEPARATION NETWORK FOR REAL-TIME SPEECH DENOISING
         Samuel Sonning; Google
         Christian Schüldt; Google
         Hakan Erdogan; Google
         Scott Wisdom; Google
 
   FR2.I.4: CHANNEL-ATTENTION DENSE U-NET FOR MULTICHANNEL SPEECH ENHANCEMENT
         Bahareh Tolooshams; Harvard University
         Ritwik Giri; Amazon Web Services
         Andrew Song; Massachusetts Institute of Technology
         Umut Isik; Amazon Web Services
         Arvindh Krishnaswamy; Amazon Web Services
 
   FR2.I.5: A COMPOSITE DNN ARCHITECTURE FOR SPEECH ENHANCEMENT
         Yochai Yemini; Bar-Ilan University
         Shlomo E. Chazan; Bar-Ilan University
         Jacob Goldberger; Bar-Ilan University
         Sharon Gannot; Bar-Ilan University
 
   FR2.I.6: GEOMETRICALLY CONSTRAINED INDEPENDENT VECTOR ANALYSIS FOR DIRECTIONAL SPEECH ENHANCEMENT
         Li Li; University of Tsukuba
         Kazuhito Koishida; Microsoft Corporation
 
   FR2.I.7: REAL-TIME SPEECH ENHANCEMENT USING EQUILIBRIATED RNN
         Daiki Takeuchi; Waseda University
         Kohei Yatabe; Waseda University
         Yuma Koizumi; NTT Corporation
         Yasuhiro Oikawa; Waseda University
         Noboru Harada; NTT Corporation
 
   FR2.I.8: SUBSPACE-BASED SPEECH CORRELATION VECTOR ESTIMATION FOR SINGLE-MICROPHONE MULTI-FRAME MVDR FILTERING
         Dörte Fischer; University of Oldenburg
         Simon Doclo; University of Oldenburg
 
   FR2.I.9: SPEECH ENHANCEMENT USING A TWO-STAGE NETWORK FOR AN EFFICIENT BOOSTING STRATEGY
         Juntae Kim; Kakao
 
   FR2.I.10: TIME-FREQUENCY LOSS FOR CNN BASED SPEECH SUPER-RESOLUTION
         Heming Wang; Ohio State University
         DeLiang Wang; Ohio State University
 
   FR2.I.11: TIME-DOMAIN NEURAL NETWORK APPROACH FOR SPEECH BANDWIDTH EXTENSION
         Xiang Hao; Northwestern Polytechnical University
         Chenglin Xu; Nanyang Technological University
         Nana Hou; Nanyang Technological University
         Lei Xie; Northwestern Polytechnical University
         Eng Siong Chng; Nanyang Technological University
         Haizhou Li; National University of Singapore
 
   FR2.I.12: WEIGHTED SPEECH DISTORTION LOSSES FOR NEURAL-NETWORK-BASED REAL-TIME SPEECH ENHANCEMENT
         Yangyang Xia; Carnegie Mellon University
         Sebastian Braun; Microsoft Research
         Chandan Reddy; Microsoft Corporation
         Harishchandra Dubey; Microsoft Corporation
         Ross Cutler; Microsoft Corporation
         Ivan Tashev; Microsoft Research