Auditory browsing interface of ambient and parallel sound expression for supporting one-to-many communication

Abstract

In this paper, we introduce an auditory browsing system that supports one-to-many communication in parallel with an ongoing discourse, lecture, or presentation. From the viewpoint of active participation, the audience’s live reactions should be reflected back into the main speech. To browse the numerous live comments from the audience, the speaker cranes her/his neck toward a particular section of a virtual audience group. We adopt a “looking inside” metaphor: the audience’s voice comments are repositioned and overlaid at virtual seating positions according to the length of each voice, regardless of where the real audience members sit, and the speaker listens toward the direction of a chosen position. As a result, the speaker could browse the audience’s comments and exhibited communicative behaviors when she/he was interested in the utterances of a particular audience group.
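
The interaction described above can be illustrated with a minimal, hypothetical sketch: voice comments are laid out on a virtual semicircle ordered by their length, and leaning toward a direction boosts the playback gain of nearby comments while the rest remain an ambient murmur. The class names, the semicircle layout, and the Gaussian focus window below are illustrative assumptions, not the system’s actual implementation.

import math
from dataclasses import dataclass


@dataclass
class Comment:
    text: str
    duration_s: float   # length of the recorded voice comment
    angle_rad: float = 0.0  # virtual seating angle, assigned below
    gain: float = 0.0       # playback gain after focusing


def layout_by_length(comments, span_rad=math.pi):
    """Reposition comments on a virtual semicircle ordered by voice length,
    ignoring the real seating of the audience (assumed layout rule)."""
    ordered = sorted(comments, key=lambda c: c.duration_s)
    n = max(len(ordered) - 1, 1)
    for i, c in enumerate(ordered):
        c.angle_rad = -span_rad / 2 + span_rad * i / n
    return ordered


def focus(comments, head_angle_rad, width_rad=0.35):
    """Boost comments near the direction the speaker leans toward
    (Gaussian window over angular distance); others stay as background."""
    for c in comments:
        d = c.angle_rad - head_angle_rad
        c.gain = 0.2 + 0.8 * math.exp(-(d * d) / (2 * width_rad ** 2))
    return comments


if __name__ == "__main__":
    comments = [
        Comment("Nice example!", 1.2),
        Comment("Could you repeat slide 3?", 3.8),
        Comment("Agreed.", 0.8),
        Comment("What about scalability?", 2.5),
    ]
    placed = layout_by_length(comments)
    for c in focus(placed, head_angle_rad=math.radians(40)):
        print(f"{c.text:30s} angle={math.degrees(c.angle_rad):6.1f} deg  gain={c.gain:.2f}")

In this sketch, leaning 40 degrees to one side raises the gain of the comments placed on that side of the virtual semicircle, which mirrors the “looking inside” behavior the paper describes; the specific gain curve is only an assumption for illustration.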

Citation (APA)

Yonezawa, T. (2015). Auditory browsing interface of ambient and parallel sound expression for supporting one-to-many communication. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9189, pp. 224–233). Springer Verlag. https://doi.org/10.1007/978-3-319-20804-6_21
