Non-verbal mapping between sound and color - Mapping derived from colored hearing synesthetes and its applications

2 Citations · 12 Readers (Mendeley)

Abstract

This paper presents an attempt at 'non-verbal mapping' between music and images. We pair the physical sound parameters of key, pitch height, and timbre with the color parameters of hue, brightness, and chroma to clarify their direct correspondence. First, we derive a mapping rule between sound and color from synesthetes with 'colored hearing'. Next, we apply this mapping to non-synesthetes using a paired comparison test and key identification training, and we observe phenomena similar to colored hearing among them. The experimental results suggest that non-synesthetes may also have a latent ability to map sound to color. © IFIP International Federation for Information Processing 2005.
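To illustrate what such a parameter pairing could look like in code, below is a minimal, hypothetical Python sketch: it assumes pitch class drives hue, octave (height) drives brightness, and a fixed saturation stands in for the timbre–chroma pairing. This is not the paper's rule derived from synesthetes, only an indicative example of the general scheme.

```python
import colorsys

def note_to_color(midi_note, saturation=0.8):
    """Map a MIDI note number to an (R, G, B) color, 0-255 per channel.

    Hypothetical mapping inspired by the paper's parameter pairing:
    pitch class -> hue, octave (height) -> lightness, and a fixed
    saturation standing in for the timbre -> chroma dimension.
    """
    pitch_class = midi_note % 12           # 12 pitch classes spread around the hue wheel
    octave = midi_note // 12               # higher pitch -> brighter color
    hue = pitch_class / 12.0               # 0.0-1.0 fraction of the color wheel
    lightness = min(0.9, 0.2 + 0.08 * octave)
    r, g, b = colorsys.hls_to_rgb(hue, lightness, saturation)
    return tuple(round(c * 255) for c in (r, g, b))

# Example: middle C (MIDI note 60) and the C one octave above it
print(note_to_color(60))
print(note_to_color(72))
```

Notes an octave apart share the same hue but differ in brightness, which mirrors the height–brightness pairing described in the abstract.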

Citation (APA)

Nagata, N., Iwai, D., Wake, S. H., & Inokuchi, S. (2005). Non-verbal mapping between sound and color - Mapping derived from colored hearing synesthetes and its applications. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3711 LNCS, pp. 401–412). https://doi.org/10.1007/11558651_39
