Auditory feature parameters for music based on human auditory processes


This article is free to access.

Abstract

The authors aim to characterize the similarities and differences in the Kansei (sensibility) effects that arise when people listen to music. As a concrete application, we consider music recommendation services. To recommend music that users really want, we use the image words that users associate with a piece of music, and we retrieve and recommend music based on the similarity of those image words. Our approach is grounded in Kansei engineering. We construct a hierarchical model of Kansei, which posits four processes in human music listening. The feature parameters are physical frequency data tied to the physiological process; they allow us to represent how humans listen to music. In the next step, we represent how humans feel about the music, and these representations help users retrieve the music they really want. To represent human feelings, we use image words and subjective evaluation. © 2011 Springer-Verlag.
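The image-word-based retrieval the abstract describes can be sketched in a minimal form: each track is represented by a vector of subjective ratings over a set of image words, and retrieval ranks tracks by cosine similarity to a query vector. This is a hypothetical illustration, not the authors' implementation; the image words, track names, and ratings below are invented examples.

```python
# Hypothetical sketch of image-word similarity retrieval (assumed, not
# taken from the paper). Each track is a vector of average subjective
# ratings (1-5) on a small set of image words.
import math

IMAGE_WORDS = ["bright", "calm", "powerful", "sad"]

tracks = {
    "track_a": [4.5, 2.0, 3.5, 1.0],
    "track_b": [1.5, 4.0, 1.0, 3.5],
    "track_c": [4.0, 2.5, 4.5, 1.5],
}

def cosine(u, v):
    """Cosine similarity between two rating vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def retrieve(query, k=2):
    """Return the k tracks whose image-word ratings best match the query."""
    ranked = sorted(tracks, key=lambda t: cosine(tracks[t], query), reverse=True)
    return ranked[:k]

# Query expressing "bright and powerful" music.
print(retrieve([5.0, 2.0, 4.0, 1.0]))
```

In the paper's framework such rating vectors would come from subjective evaluation experiments rather than hand-entered values, and the feature parameters derived from frequency data would sit below this layer in the hierarchical model.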


CITATION STYLE

APA

Murakami, M., & Kato, T. (2011). Auditory feature parameters for music based on human auditory processes. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6771 LNCS, pp. 612–617). https://doi.org/10.1007/978-3-642-21793-7_69
