Paper Detail

Paper ID: B-1-1.3
Paper Title: Decoding auditory frequencies and directions based on brain functional features
Authors: Mingxi Wang, Gaoyan Zhang (Tianjin University, China)
Session B-1-1: Electrical Signals in Human
Time: Tuesday, 08 December, 12:30 - 14:00
Presentation Time: Tuesday, 08 December, 13:00 - 13:15
All times are in New Zealand Time (UTC +13)
Topic: Biomedical Signal Processing and Systems (BioSiPS)
Abstract: Decoding auditory stimuli from brain functional data is of great significance for understanding the mechanisms of auditory processing. It remains controversial whether the brain processes auditory stimuli through parallel hierarchical processing or distributed processing [1]. Unlike previous studies that used univariate analysis to study auditory processing, this study builds a decoding model and examines the auditory processing mechanism from the perspective of multivariate pattern analysis. We analyzed functional MRI data from 27 subjects perceiving different auditory frequencies and directions, used brain activation and functional connectivity as features, and applied a support vector machine for decoding. Decoding accuracies for frequency and direction were 70.7% and 71.6% with brain activation features, and 73.7% and 77.7% with functional connectivity features. Weight analysis found that activation patterns in the precuneus and the superior temporal gyrus (STG) contributed to sound frequency discrimination, and that the STG also represented differences in direction. Connectivity patterns between the bilateral precuneus changed markedly across frequency conditions, while connectivity between the bilateral middle occipital gyrus and the STG changed significantly across sound directions. The results support a distributed model of auditory processing.
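The decoding pipeline described in the abstract (per-condition feature vectors classified with a linear support vector machine) can be sketched as below. This is an illustrative toy, not the authors' implementation: the "activation" or "connectivity" features are replaced by synthetic Gaussian vectors for two hypothetical stimulus conditions, and the SVM is trained with a Pegasos-style stochastic sub-gradient loop so the sketch needs only NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the paper's features: two classes of
# feature vectors (e.g. two frequency conditions), drawn from shifted
# Gaussians instead of real fMRI activation/connectivity data.
n_per_class, n_features = 100, 20
X = np.vstack([
    rng.normal(-0.5, 1.0, (n_per_class, n_features)),
    rng.normal(+0.5, 1.0, (n_per_class, n_features)),
])
y = np.array([-1] * n_per_class + [+1] * n_per_class)

# Shuffle and hold out 30% of trials for estimating decoding accuracy.
perm = rng.permutation(len(y))
split = int(0.7 * len(y))
X_tr, y_tr = X[perm[:split]], y[perm[:split]]
X_te, y_te = X[perm[split:]], y[perm[split:]]

def train_linear_svm(X, y, lam=0.01, epochs=50, seed=0):
    """Pegasos-style stochastic sub-gradient training of a linear SVM."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (w @ X[i]) < 1:      # margin violated: hinge-loss step
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:                          # margin satisfied: shrink only
                w = (1 - eta * lam) * w
    return w

w = train_linear_svm(X_tr, y_tr)
accuracy = np.mean(np.sign(X_te @ w) == y_te)
print(f"held-out decoding accuracy: {accuracy:.1%}")
```

In this spirit, the learned weight vector `w` plays the role of the paper's weight analysis: features with large-magnitude weights (here, arbitrary synthetic dimensions; in the study, voxels or connections in the precuneus and STG) are the ones driving the discrimination.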