Model-driven development of vocal user interfaces

Abstract

Little work addresses, simply and comprehensively, the development of vocal user interfaces while considering the full context of use: environment, user, and platform. Several published works treat vocal user interfaces as a subset of larger problems, such as context awareness, multi-platform development, user-centred development, vocal user interface design, and multimodal development. Most design knowledge in the literature assumes vocal user interfaces are a subset of graphical user interfaces (so-called multimodal interaction), thereby losing the specific nature of vocal interaction. The objective of this paper is to propose a method for generating multi-platform vocal user interfaces. The method follows a transformational approach, and a real-life case study is used to validate our proposal. © 2013 Springer International Publishing.

Citation (APA)

Céspedes-Hernández, D., González-Calleros, J. M., Guerrero-García, J., & Rodríguez-Vizzuett, L. (2013). Model-driven development of vocal user interfaces. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8278 LNCS, pp. 30–34). https://doi.org/10.1007/978-3-319-03068-5_7
