Cross-media and elastic time adaptive presentations: The integration of a talking head tool into a hypermedia formatter


Abstract

This paper describes the integration of a facial animation tool, Expressive Talking Heads (ETHs), with an adaptive hypermedia formatter (the HyperProp formatter). The formatter adjusts document presentations based on the document's temporal constraints (e.g., synchronization relationships), the presentation platform's parameters (e.g., available bandwidth and devices), and the user profile (e.g., language, accessibility). This work describes how ETHs augments the HyperProp formatter's capability for creating adaptive hypermedia documents. The paper also presents the adaptation facilities offered by the main hypermedia language the HyperProp system works with, the Nested Context Language (NCL), and details the implementation extensions that turned Expressive Talking Heads into an adaptive presentation tool. © Springer-Verlag 2004.

Citation (APA)

Rodrigues, R. F., Rodrigues, P. S. L., Feijó, B., Velho, L., & Soares, L. F. G. (2004). Cross-media and elastic time adaptive presentations: The integration of a talking head tool into a hypermedia formatter. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 3137, 215–224. https://doi.org/10.1007/978-3-540-27780-4_25
