Technical Program

Paper Detail

Paper IDC-3-2.4
Paper Title DECODING MUSIC GENRES BASED ON HIGH RESOLUTION BRAIN ACTIVITY INFORMATION
Authors Qinhan Hou, Gaoyan Zhang, Tianjin University, China
Session C-3-2: Machine Learning and Data Analysis 2
Time: Thursday, 10 December, 15:30 - 17:15
Presentation Time: Thursday, 10 December, 16:15 - 16:30
All times are in New Zealand Time (UTC +13)
Topic: Machine Learning and Data Analytics (MLDA)
Abstract Decoding stimuli from the brain is important for understanding the brain's processing of external information, and it can also promote the development of brain-computer interfaces. Most existing research has focused on decoding audiovisual information; few studies have investigated the decoding of music stimuli, and their decoding accuracy has not been satisfactory. This paper uses a public 7-Tesla fMRI dataset of high-resolution blood oxygen level dependent (BOLD) signals collected while 20 subjects listened to 5 music genres. After fMRI data preprocessing, two feature selection methods were used. One is based on a prior template (called MASK) including the Heschl's gyrus (HG), the anterior superior temporal gyrus (aSTG), and the posterior superior temporal gyrus (pSTG). The other is whole-brain analysis of variance (ANOVA). Then, the gradient boosting decision tree (GBDT) algorithm is used to train decoding models that discriminate between music genres. Results showed that among the five genres, ambient music is easier to decode than the other four. Compared with a previous study that used the same dataset and the same prior template but combined with a support vector machine classifier [1], the GBDT algorithm improved the accuracies in most genres from around 45% to around 65%. Compared with the MASK method, the ANOVA method improved the decoding accuracy to above 74% for all genres. Analysis of contributive regions in the ANOVA method shows that insular and parietal regions are additionally recruited in the decoding of music genres, which may be related to music understanding and emotional expression. In summary, the findings in this paper help to understand the brain mechanism of music processing in depth. Meanwhile, the decoding model proposed in this study can also be used to classify other fMRI stimuli.
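The pipeline described in the abstract (whole-brain ANOVA feature selection followed by a GBDT classifier) can be sketched with standard scikit-learn components. This is a minimal illustration on synthetic data, not the authors' code: the voxel counts, the number of selected features (`k=50`), and the injected signal are all placeholder assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for preprocessed BOLD data:
# rows = trials, columns = voxels, labels = 5 music genres.
rng = np.random.default_rng(0)
n_trials, n_voxels, n_genres = 200, 500, 5
X = rng.normal(size=(n_trials, n_voxels))
y = rng.integers(0, n_genres, size=n_trials)
# Inject a weak genre-dependent signal into the first 20 voxels
# so the ANOVA step has something to find.
X[:, :20] += 0.5 * y[:, None]

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, stratify=y, random_state=0
)

model = make_pipeline(
    # Whole-brain ANOVA: keep the k voxels with the largest F-statistic.
    SelectKBest(f_classif, k=50),
    # GBDT decoding model (scikit-learn's gradient boosting trees).
    GradientBoostingClassifier(random_state=0),
)
model.fit(X_tr, y_tr)
acc = model.score(X_te, y_te)
```

In the paper the MASK alternative would replace `SelectKBest` with a fixed anatomical ROI mask (HG, aSTG, pSTG); here only the ANOVA branch is shown.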