SLP-P26.1

EVALUATING EMOTION RECOGNITION IN SPOKEN LANGUAGE MODELS ON EMOTIONALLY INCONGRUENT SPEECH

Pedro Correa, João Lima, Victor Moreno, Lucas Ueda, Paula Costa, Universidade Estadual de Campinas (Unicamp), Brazil

Session:
SLP-P26: Multimodal Emotion Recognition and Healthcare Applications Poster

Track:
Speech and Language Processing [SL]

Location:
Poster Area 28

Presentation Time:
Wed, 6 May, 16:30 - 18:30

Session SLP-P26
SLP-P26.1: EVALUATING EMOTION RECOGNITION IN SPOKEN LANGUAGE MODELS ON EMOTIONALLY INCONGRUENT SPEECH
Pedro Correa, João Lima, Victor Moreno, Lucas Ueda, Paula Costa, Universidade Estadual de Campinas (Unicamp), Brazil
SLP-P26.2: WHEN AUDIO MATTERS: A LIGHTWEIGHT, HIERARCHICAL FUSION MODEL FOR SPEECH AND NON-VERBAL EMOTION RECOGNITION
Alkis Koudounas, Politecnico di Torino, Italy; Moreno La Quatra, Kore University of Enna, Italy; Elena Baralis, Politecnico di Torino, Italy
SLP-P26.3: INCONVAD: A TWO-STAGE DUAL-TOWER FRAMEWORK FOR MULTIMODAL EMOTION INCONSISTENCY DETECTION
Zongyi Li, Nanyang Technological University, Singapore; Junchuan Zhao, National University of Singapore, Singapore; Francis Bu Sung Lee, Andrew Zi Han Yee, Nanyang Technological University, Singapore
SLP-P26.4: TVP-UNET: THRESHOLD VARIANCE PENALTY U-NET FOR VOICE ACTIVITY DETECTION IN DYSARTHRIC SPEECH
Aditya Pandey, Vellore Institute of Technology, India; Tanuka Bhattacharjee, Indian Institute of Science, India; Madassu Keerthipriya, Darshan Chikktimmegowda, Dipti Baskar, Yamini BK, Seena Vengalil, Atchayaram Nalini, Ravi Yadav, National Institute of Mental Health and Neurosciences, India; Prasanta Kumar Ghosh, Indian Institute of Science, India
SLP-P26.5: TENSORFORMER-BASED MULTIMODAL DEPRESSION DETECTION FROM CONCURRENT GAIT PATTERNS AND PHYSIOLOGICAL SIGNALS
Changzeng Fu, Huizu Lin, Shengfan Liu, Shicong Huang, Kaifeng Su, Mingyan Huo, Sichen Liu, Kexin Yan, Shiqi Zhao, Zhigang Liu, SSTC, Northeastern University, China
SLP-P26.6: WHEN CHILDREN TALK AND MACHINES LISTEN: TOWARD AN INTERPRETABLE SPEECH-BASED SCREENER FOR DUTCH DEVELOPMENTAL LANGUAGE DISORDER
Elio Stasica, Université de Lorraine, CNRS, Inria, France; Charlotte Pouw, University of Amsterdam, Netherlands; Louis Berard, Università Cattolica del Sacro Cuore, Italy; Willemijn Doedens, Royal Dutch Auris Group, Netherlands; Vincent P. Martin, Université de Lorraine, CNRS, Inria, France
SLP-P26.7: CONDITIONAL DIFFUSION MODELS FOR MENTAL HEALTH-PRESERVING VOICE CONVERSION
Siddharth Kalyanasundaram, Theodora Chaspari, University of Colorado Boulder, United States of America
SLP-P26.8: M4SER: MULTIMODAL, MULTIREPRESENTATION, MULTITASK, AND MULTISTRATEGY LEARNING FOR SPEECH EMOTION RECOGNITION
Jiajun He, Xiaohan Shi, Cheng-Hung Hu, Jinyi Mi, Nagoya University, Japan; Xingfeng Li, City University of Macau, Macau, China; Tomoki Toda, Nagoya University, Japan
SLP-P26.9: MSF-SER: ENRICHING ACOUSTIC MODELING WITH MULTI-GRANULARITY SEMANTICS FOR SPEECH EMOTION RECOGNITION
Haoxun Li, Yuqing Sun, Hanlei Shi, Yu Liu, Leyuan Qu, Taihao Li, Hangzhou Institute for Advanced Study, University of Chinese Academy of Sciences, China