Developing multimodal web interfaces by encapsulating their content and functionality within a multimodal shell


Abstract

Web applications are a widespread and widely used means of presenting information. Their underlying architecture and standards, however, often limit their presentation and control capabilities to showing pre-recorded audio/video sequences. Highly dynamic text content, for instance, can only be displayed in its native form (as part of HTML content). This paper provides concepts and solutions that enable the transformation of dynamic web-based content into multimodal sequences generated by different multimodal services. Based on the encapsulation of the content within a multimodal shell, any text-based data can be transformed dynamically, and at interactive speeds, into multimodal visually-synthesized speech. Techniques for integrating multimodal input (e.g., vision and speech recognition) are also included. The concept of multimodality relies on mashup approaches rather than traditional integration; it can therefore extend any type of web-based solution transparently, with no major changes to either the multimodal services or the enhanced web application. © 2011 Springer-Verlag.
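The abstract describes the shell only at a high level: content from the host application is intercepted and routed to external multimodal services in mashup fashion, leaving the application itself untouched. As a minimal illustrative sketch (not the authors' implementation), the TypeScript snippet below mimics this idea in a plain browser setting, using a MutationObserver to capture dynamically changing text and the standard Web Speech API as a stand-in for the paper's visually-synthesized speech services; the element id `app-content` is a hypothetical placeholder.

```typescript
// Hypothetical sketch of a "multimodal shell": a mashup-style wrapper that
// watches a host web application's text content and renders it as synthesized
// speech, without modifying the application itself. The Web Speech API here
// stands in for the dedicated multimodal services described in the paper.

function speak(text: string): void {
  // Forward the extracted text to the speech-output modality.
  const utterance = new SpeechSynthesisUtterance(text);
  window.speechSynthesis.speak(utterance);
}

function attachMultimodalShell(root: HTMLElement): void {
  // Observe dynamic content changes inside the host application and
  // transform them into speech at interactive speeds.
  const observer = new MutationObserver((mutations) => {
    for (const m of mutations) {
      if (m.type === "characterData") {
        // Text inside an existing node changed.
        const text = m.target.textContent?.trim();
        if (text) speak(text);
      } else {
        // New nodes were added to the page.
        for (const node of m.addedNodes) {
          const text = node.textContent?.trim();
          if (text) speak(text);
        }
      }
    }
  });
  observer.observe(root, { childList: true, subtree: true, characterData: true });
}

// Wrap the existing web application transparently (mashup, not integration);
// "app-content" is an assumed id for the host application's root element.
const app = document.getElementById("app-content");
if (app) attachMultimodalShell(app);
```

Because the shell only observes the host application's DOM, the application needs no modification, mirroring the transparent, mashup-style extension the abstract describes.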

Citation (APA)

Mlakar, I., & Rojc, M. (2011). Developing multimodal web interfaces by encapsulating their content and functionality within a multimodal shell. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6800 LNCS, pp. 133–146). https://doi.org/10.1007/978-3-642-25775-9_13
