MMSP-P4.1
ATTA-NET: ATTENTION AGGREGATION NETWORK FOR AUDIO-VISUAL EMOTION RECOGNITION
Ruijia Fan, Hong Liu, Peking University, China; Yidi Li, Taiyuan University of Technology, China; Peini Guo, Guoquan Wang, Ti Wang, Peking University, China
Session: MMSP-P4: Multimodal Emotion/Sentiment Analysis Poster
Track: Multimedia Signal Processing
Location: Poster Zone 5B, Poster Board PZ-5B.1
Presentation Time: Wed, 17 Apr, 13:10 - 15:10 (UTC +9)
Session Chair: Zhaojun Yang, Meta, US
Session MMSP-P4
MMSP-P4.1: ATTA-NET: ATTENTION AGGREGATION NETWORK FOR AUDIO-VISUAL EMOTION RECOGNITION
Ruijia Fan, Hong Liu, Peking University, China; Yidi Li, Taiyuan University of Technology, China; Peini Guo, Guoquan Wang, Ti Wang, Peking University, China
MMSP-P4.2: Fusing Modality-Specific Representations and Decisions for Multimodal Emotion Recognition
Yu-Ping Ruan, Shoukang Han, Taihao Li, Yanfeng Wu, Zhejiang Lab, China
MMSP-P4.3: SPEAKER-CENTRIC MULTIMODAL FUSION NETWORKS FOR EMOTION RECOGNITION IN CONVERSATIONS
Biyun Yao, Wuzhen Shi, Shenzhen University, China
MMSP-P4.4: CLIP-MSA: INCORPORATING INTER-MODAL DYNAMICS AND COMMON KNOWLEDGE TO MULTIMODAL SENTIMENT ANALYSIS WITH CLIP
Qi Huang, Pingting Cai, Tanyue Nie, Jinshan Zeng, Jiangxi Normal University, China
MMSP-P4.5: GUIDED CIRCULAR DECOMPOSITION AND CROSS-MODAL RECOMBINATION FOR MULTIMODAL SENTIMENT ANALYSIS
Haijian Liang, Weicheng Xie, Xilin He, Shenzhen University, China; Siyang Song, University of Leicester, United Kingdom; Linlin Shen, Shenzhen University, China
MMSP-P4.6: A NOVEL MULTIMODAL SENTIMENT ANALYSIS MODEL BASED ON GATED FUSION AND MULTI-TASK LEARNING
Xin Sun, Xiangyu Ren, Xiaohao Xie, Beijing Institute of Technology, China
MMSP-P4.7: Modality-dependent sentiments exploring for multi-modal sentiment classification
Jingzhe Li, Chengji Wang, Central China Normal University, China; Zhiming Luo, Xiamen University, China; Yuxian Wu, Xingpeng Jiang, Central China Normal University, China
MMSP-P4.8: EMOTION-ALIGNED CONTRASTIVE LEARNING BETWEEN IMAGES AND MUSIC
Shanti Stewart, Kleanthis Avramidis, Tiantian Feng, Shrikanth Narayanan, University of Southern California, United States of America
MMSP-P4.9: MMRBN: RULE-BASED NETWORK FOR MULTIMODAL EMOTION RECOGNITION
Xi Chen, Fudan University, China
MMSP-P4.10: INTER-MODALITY AND INTRA-SAMPLE ALIGNMENT FOR MULTI-MODAL EMOTION RECOGNITION
Yusong Wang, Dongyuan Li, Jialun Shen, Tokyo Institute of Technology, Japan
MMSP-P4.11: MDAVIF: A MULTI-DOMAIN ACOUSTICAL-VISUAL INFORMATION FUSION MODEL FOR DEPRESSION RECOGNITION FROM VLOG DATA
Tianfei Ling, Deyuan Chen, Baobin Li, University of Chinese Academy of Sciences, China