Convolutional neural networks and transfer learning applied to automatic composition of descriptive music

Abstract

Visual and musical arts have been strongly interconnected throughout history. The aim of this work is to compose music based on the visual characteristics of a video. For this purpose, descriptive music is used as a link between image and sound, and a video fragment of the film Fantasia is analyzed in depth. Specifically, convolutional neural networks combined with transfer learning are applied to extract image descriptors. To establish a relationship between the visual and musical information, Naive Bayes, Support Vector Machine, and Random Forest classifiers are applied. The resulting model is then employed to compose descriptive music for a new video. The results of this proposal are compared with those of a previous work in order to evaluate the performance of the classifiers and the quality of the descriptive musical composition.
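As an illustration of the pipeline the abstract outlines, the following Python sketch extracts per-frame image descriptors with a pretrained CNN and compares the three classifiers mentioned above. It is a minimal sketch under stated assumptions: VGG16 is used as the backbone (the abstract does not name the network), and the frames and labels arrays are hypothetical placeholders standing in for real video frames and their musical annotations.

import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Transfer learning: reuse ImageNet weights, drop the classification head,
# and average-pool the last convolutional maps into a 512-dim descriptor.
extractor = VGG16(weights="imagenet", include_top=False, pooling="avg")

def describe(frames):
    # frames: (n, 224, 224, 3) array of RGB video frames.
    x = preprocess_input(frames.astype("float32"))
    return extractor.predict(x, verbose=0)

# Placeholder data standing in for real frames and musical classes
# (hypothetical; the paper's data comes from the Fantasia fragment).
frames = np.random.randint(0, 256, size=(64, 224, 224, 3), dtype="uint8")
labels = np.tile(np.arange(4), 16)  # 4 hypothetical musical classes

X = describe(frames)

# Compare the three classifiers named in the abstract on the descriptors.
for name, clf in [("Naive Bayes", GaussianNB()),
                  ("SVM", SVC()),
                  ("Random Forest", RandomForestClassifier())]:
    scores = cross_val_score(clf, X, labels, cv=4)
    print(name, "mean accuracy:", round(scores.mean(), 2))

Once trained, such a classifier can map the descriptors of each frame in a new video to a musical class, which is the step the paper uses to compose descriptive music for unseen footage.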

Citation (APA)

Martín-Gómez, L., Pérez-Marcos, J., Navarro-Cáceres, M., & Rodríguez-González, S. (2019). Convolutional neural networks and transfer learning applied to automatic composition of descriptive music. In Advances in Intelligent Systems and Computing (Vol. 801, pp. 275–282). Springer Verlag. https://doi.org/10.1007/978-3-319-99608-0_31
