Bi-modal deep Boltzmann machine based musical emotion classification

Abstract

Music plays an important role in many people's lives. When listening to music, we usually choose pieces that best suit our current mood. Attractive as this would be, automating the task remains a challenge. To this end, approaches in the literature exploit different kinds of information (audio, visual, social, etc.) about individual music pieces. In this work, we study the task of classifying music into different mood categories by integrating information from two domains: audio and semantic. We combine information extracted directly from the audio with information from the corresponding tracks' lyrics using a bi-modal Deep Boltzmann Machine architecture, and we show the effectiveness of this approach through empirical experiments on the largest music dataset publicly available for research and benchmarking purposes.
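To make the described approach concrete, below is a minimal sketch of a bi-modal architecture of this kind. It is not the authors' implementation: the feature dimensions, layer sizes, toy data, and classifier are illustrative assumptions, and the greedy layer-wise RBM pretraining used here is a simplified (DBN-style) approximation of full joint Deep Boltzmann Machine training. Each modality (audio descriptors and bag-of-words lyric features) is encoded by its own RBM, a joint RBM sits on top of the concatenated hidden representations, and an off-the-shelf classifier predicts the mood label from the joint representation.

# Hedged sketch, not the paper's implementation: dimensions, layer sizes,
# toy data, and the classifier are illustrative assumptions, and greedy
# RBM pretraining approximates (rather than reproduces) joint DBM training.
import numpy as np
from sklearn.linear_model import LogisticRegression  # illustrative classifier only

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Bernoulli RBM trained with one-step contrastive divergence (CD-1)."""
    def __init__(self, n_visible, n_hidden, lr=0.05):
        self.W = rng.normal(0, 0.01, (n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)
        self.b_h = np.zeros(n_hidden)
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0):
        # Positive phase, one Gibbs step, then the CD-1 parameter update.
        h0 = self.hidden_probs(v0)
        h0_sample = (rng.random(h0.shape) < h0).astype(float)
        v1 = self.visible_probs(h0_sample)
        h1 = self.hidden_probs(v1)
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / n
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (h0 - h1).mean(axis=0)

    def fit(self, X, epochs=10, batch=32):
        for _ in range(epochs):
            for i in range(0, len(X), batch):
                self.cd1_step(X[i:i + batch])
        return self

# Toy stand-ins for the two modalities (hypothetical shapes).
n = 512
audio = rng.random((n, 60))                          # e.g. normalized timbre/rhythm descriptors
lyrics = (rng.random((n, 200)) < 0.1).astype(float)  # sparse word-indicator features
moods = rng.integers(0, 4, n)                        # four illustrative mood classes

# Modality-specific pathways.
audio_rbm = RBM(60, 32).fit(audio)
lyric_rbm = RBM(200, 32).fit(lyrics)

# Joint layer over the concatenated per-modality hidden representations.
h_audio = audio_rbm.hidden_probs(audio)
h_lyric = lyric_rbm.hidden_probs(lyrics)
joint_input = np.hstack([h_audio, h_lyric])
joint_rbm = RBM(64, 32).fit(joint_input)
joint_repr = joint_rbm.hidden_probs(joint_input)

# Any classifier can sit on top of the joint representation; softmax
# regression is used here purely for illustration.
clf = LogisticRegression(max_iter=500).fit(joint_repr, moods)
print("training accuracy on toy data:", clf.score(joint_repr, moods))

In the paper's setting, the toy arrays would be replaced by real audio descriptors and lyric features per track, and the joint representation would feed whatever mood classifier the evaluation uses.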

Citation (APA)

Huang, M., Rong, W., Arjannikov, T., Jiang, N., & Xiong, Z. (2016). Bi-modal deep Boltzmann machine based musical emotion classification. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9887 LNCS, pp. 199–207). Springer Verlag. https://doi.org/10.1007/978-3-319-44781-0_24
