
My ICASSP 2020 Schedule


WE3.I: Topics in Audio Analysis and Classification

Session Type: Poster
Time: Wednesday, 6 May, 16:30 - 18:30
Location: On-Demand
Session Chair: Ina Kodrasi, Idiap Research Institute
 
   WE3.I.1: IMPACT OF A SHIFT-INVARIANT HARMONIC PHASE MODEL IN FULLY PARAMETRIC HARMONIC VOICE REPRESENTATION AND TIME/FREQUENCY SYNTHESIS
         Aníbal Ferreira; University of Porto
         João Silva; University of Porto
         Francisca Brito; University of Porto
         Deepen Sinha; ATC Labs
 
   WE3.I.2: HEARING AID RESEARCH DATA SET FOR ACOUSTIC ENVIRONMENT RECOGNITION
         Andreas Hüwel; HörTech gGmbH
         Kamil Adiloğlu; HörTech gGmbH
         Jörg-Hendrik Bach; HörTech gGmbH
 
   WE3.I.3: AUDIO FEATURE EXTRACTION FOR VEHICLE ENGINE NOISE CLASSIFICATION
         Luca Becker; Ruhr-Universität Bochum
         Alexandru Nelus; Ruhr-Universität Bochum
         Johannes Gauer; Ruhr-Universität Bochum
         Lars Rudolph; Ruhr-Universität Bochum
         Rainer Martin; Ruhr-Universität Bochum
 
   WE3.I.4: TIME-FREQUENCY FEATURE DECOMPOSITION BASED ON SOUND DURATION FOR ACOUSTIC SCENE CLASSIFICATION
         Yuzhong Wu; Chinese University of Hong Kong
         Tan Lee; Chinese University of Hong Kong
 
   WE3.I.5: VGGSOUND: A LARGE-SCALE AUDIO-VISUAL DATASET
         Honglie Chen; University of Oxford
         Weidi Xie; University of Oxford
         Andrea Vedaldi; University of Oxford
         Andrew Zisserman; University of Oxford
 
   WE3.I.6: TRANSFER LEARNING FROM YOUTUBE SOUNDTRACKS TO TAG ARCTIC ECOACOUSTIC RECORDINGS
         Enis Berk Çoban; Graduate Center, City University of New York
         Dara Pir; Guttman Community College, CUNY
         Richard So; Staten Island Technical High School
         Michael I Mandel; Brooklyn College, City University of New York
 
   WE3.I.7: DATA AUGMENTATION USING EMPIRICAL MODE DECOMPOSITION ON NEURAL NETWORKS TO CLASSIFY IMPACT NOISE IN VEHICLE
         Gue-Hwan Nam; Hyundai Mobis
         Seok-Jun Bu; Yonsei University
         Na-Mu Park; Yonsei University
         Jae-Yong Seo; Hyundai Mobis
         Hyeon-Cheol Jo; Hyundai Mobis
         Won-Tae Jeong; Hyundai Mobis
 
   WE3.I.8: CLOTHO: AN AUDIO CAPTIONING DATASET
         Konstantinos Drossos; Tampere University
         Samuel Lipping; Tampere University
         Tuomas Virtanen; Tampere University
 
   WE3.I.9: ROBUST FUNDAMENTAL FREQUENCY ESTIMATION IN COLOURED NOISE
         Alfredo Esquivel Jaramillo; Aalborg University
         Andreas Jakobsson; Lund University
         Jesper Kjær Nielsen; Aalborg University
         Mads Græsbøll Christensen; Aalborg University
 
   WE3.I.10: EFFICIENT BIRD SOUND DETECTION ON THE BELA EMBEDDED SYSTEM
         Alexandru-Marius Solomes; Queen Mary University of London
         Dan Stowell; Queen Mary University of London
 
   WE3.I.11: IMPROVING AUTOMATED SEGMENTATION OF RADIO SHOWS WITH AUDIO EMBEDDINGS
         Oberon Berlage; University of Amsterdam
         Klaus-Michael Lux; Radboud Universiteit Nijmegen
         David Graus; FD Mediagroep
 
   WE3.I.12: SECL-UMONS DATABASE FOR SOUND EVENT CLASSIFICATION AND LOCALIZATION
         Mathilde Brousmiche; Université de Mons
         Jean Rouat; Université de Sherbrooke
         Stéphane Dupont; Université de Mons