Pixelphonics: colocating sound and image in media displays


Abstract

Pixelphonics is a prototype system for colocating audio sources with their associated visual objects in screen-based media and virtual reality, a technology described in System, Method and Apparatus for Co-locating Visual Images and Associated Sound (International Application No. PCT/CA2018/050433) and sponsored by the Innovation Office of Simon Fraser University. The prototype produces a new form of multichannel audiovisual display in which sound emanates from the specific screen or enclosure areas occupied by the moving and virtual images. The technology adds a new perceptual and experiential layer to synchronized sound, which has existed for over a century, by supplying its spatial complement: sound can now be in place with its image, in addition to being in time with it. This paper presents the premise of the system’s design and the empirical results of the first pilot experiments studying perceptual responses to media presented on the prototype display.

Citation (APA)
Filimowicz, M. (2019). Pixelphonics: colocating sound and image in media displays. In Advances in Intelligent Systems and Computing (Vol. 881, pp. 491–505). Springer Verlag. https://doi.org/10.1007/978-3-030-02683-7_35
