Using pictographic representation, syntactic information and gestures in text entry

Abstract

With the increasing popularity of touch screen mobile devices, it is becoming increasingly important to design fast and reliable methods for text input on such devices. In this work, we exploit the capabilities of those devices and a specific language model to enhance the efficiency of text entry tasks. We distribute the roles between the user and the device so that each task is allocated to the side that can perform it efficiently. The user is a poor processor of syntactic and memory-retrieval operations but a highly efficient processor of semantic and pattern-recognition operations; the reverse is true for computational devices. These facts are exploited in two designs for entering common words, which account for a high percentage of our written and spoken material. A common word is typed in two or three clicks, with or without a gesture on a touch screen. © 2009 Springer Berlin Heidelberg.

Citation (APA)
Sad, H. H., & Poirier, F. (2009). Using pictographic representation, syntactic information and gestures in text entry. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5611 LNCS, pp. 735–744). https://doi.org/10.1007/978-3-642-02577-8_81
