Invoked computing: Spatial audio and video AR invoked through miming

  • Zerroug, A., Cassinelli, A., Ishikawa, M.

Abstract

Direct interaction with everyday objects augmented with artificial affordances is an efficient approach that leverages natural human interaction capabilities. Hence the idea of conceiving ubiquitous computing as an invisible world which can be “condensed” onto real objects. Rich Gold [1] described the ubiquitous computing field as an “enchanted village” in which people discover hidden affordances in everyday objects. With this project, we explore the reverse scenario: a ubiquitous intelligence capable of discovering affordances suggested or represented symbolically by human beings (as actions and scenarios involving objects and drawings). An example: taking a banana and bringing it closer to the ear. The gesture is clear enough: directional microphones and parametric speakers hidden in the room would make the banana function as a real handset on the spot.
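To make the banana-handset scenario concrete, the sketch below shows one possible way a room-scale system could detect the "object brought to the ear" gesture and then steer audio toward the object. This is only an illustrative sketch under assumed interfaces; the paper publishes no code, and names such as track_object, track_ear, receive_call_audio, and ParametricSpeaker are hypothetical placeholders for the room's tracking and audio hardware.

    # Illustrative sketch only, not the authors' implementation.
    import math
    import time

    EAR_DISTANCE_M = 0.15  # object held within ~15 cm of the ear counts as "phone" miming


    def distance(a, b):
        """Euclidean distance between two 3-D points given as (x, y, z) tuples."""
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))


    class ParametricSpeaker:
        """Stand-in for a steerable parametric speaker hidden in the room."""

        def steer_to(self, position):
            print(f"steering audio beam to {position}")

        def play(self, audio_chunk):
            print(f"playing {len(audio_chunk)} samples at the object")


    def invoke_handset(object_pos, speaker, incoming_audio):
        """Make the mimed object behave like a phone handset on the spot."""
        speaker.steer_to(object_pos)
        speaker.play(incoming_audio)


    def main_loop(track_object, track_ear, receive_call_audio):
        """Poll the (hypothetical) trackers and invoke the handset when the gesture appears."""
        speaker = ParametricSpeaker()
        while True:
            obj = track_object()   # e.g. banana position from ceiling cameras
            ear = track_ear()      # user's ear position from head tracking
            if distance(obj, ear) < EAR_DISTANCE_M:
                invoke_handset(obj, speaker, receive_call_audio())
            time.sleep(0.05)       # ~20 Hz polling of the trackers

The key design point the example is meant to convey is that the object itself carries no electronics: all sensing and actuation (tracking, directional pickup, parametric audio) lives in the room, and the gesture alone invokes the function.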

Cite (APA)

Zerroug, A., Cassinelli, A., & Ishikawa, M. (2011). Invoked computing: Spatial audio and video AR invoked through miming. In Proceedings of Laval Virtual 2011 (pp. 31–32). Laval, France. Retrieved from http://www.k2.t.u-tokyo.ac.jp/perception/invokedComputing/LV2011_InvokedComputing.pdf
