Methods for evaluation of tooltips

Abstract

Tooltips are context-sensitive help aimed at improving the learnability of a system. Evaluating tooltips is therefore part of evaluating documentation, which is in turn a subcategory of evaluating software learnability. Previous research includes only two evaluations of tooltips, both gauging learning outcome after initial training, whereas the purpose of tooltips is to help users whenever they are in doubt while using a system after training. The previous evaluations therefore have low content validity. This paper concerns data field tooltips aimed at improving the correctness of data entry. It presents studies along a scale of content validity. At the low end is a questionnaire on users’ opinions, which is a cheap evaluation. The medium-validity evaluation was an adapted question-suggestion test measuring learning outcome. The high-validity evaluation method was a field experiment over two weeks, which demonstrated improved performance caused by tooltips. Had the cheap questionnaire produced the same preferences as the costly experiment, questionnaires could have replaced experiments. However, the experiment did not confirm the results from the questionnaire.
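
The paper does not include an implementation, but purely as an illustration of what a data field tooltip is, the following TypeScript sketch attaches focus-triggered guidance to a form input. The field id, tooltip text, and CSS class are hypothetical.

    // Illustrative sketch only; the paper does not prescribe an implementation.
    // Attaches context-sensitive help to a data-entry field so that guidance
    // appears exactly when the user is working with that field.
    function attachTooltip(input: HTMLInputElement, text: string): void {
      const tip = document.createElement("div");
      tip.className = "tooltip";   // styling assumed to be defined elsewhere in CSS
      tip.textContent = text;
      tip.style.display = "none";
      input.insertAdjacentElement("afterend", tip);

      // Show the help while the field has focus; hide it again on blur.
      input.addEventListener("focus", () => { tip.style.display = "block"; });
      input.addEventListener("blur", () => { tip.style.display = "none"; });
    }

    // Hypothetical usage: a date-of-birth field with format guidance,
    // intended to improve the correctness of data entry.
    const dob = document.querySelector<HTMLInputElement>("#dateOfBirth");
    if (dob) {
      attachTooltip(dob, "Enter the date of birth as DD.MM.YYYY.");
    }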

Citation (APA)

Isaksen, H., Iversen, M., Kaasbøll, J., & Kanjo, C. (2017). Methods for evaluation of tooltips. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10271, pp. 297–312). Springer Verlag. https://doi.org/10.1007/978-3-319-58071-5_23
