Where are my intelligent assistant's mistakes? A systematic testing approach

Abstract

Intelligent assistants are handling increasingly critical tasks, but until now, end users have had no way to systematically assess where their assistants make mistakes. For some intelligent assistants, this is a serious problem: if the assistant is doing work that is important, such as assisting with qualitative research or monitoring an elderly parent's safety, the user may pay a high cost for unnoticed mistakes. This paper addresses the problem with WYSIWYT/ML (What You See Is What You Test for Machine Learning), a human/computer partnership that enables end users to systematically test intelligent assistants. Our empirical evaluation shows that WYSIWYT/ML helped end users find assistants' mistakes significantly more effectively than ad hoc testing. Not only did it allow users to assess an assistant's work on an average of 117 predictions in only 10 minutes, it also scaled to a much larger data set, assessing an assistant's work on 623 out of 1,448 predictions using only the users' original 10 minutes' testing effort. © 2011 Springer-Verlag.
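The coverage figures above (117 predictions assessed directly in 10 minutes, and 623 of 1,448 covered at the larger scale) suggest a two-part mechanism: prioritize which of the assistant's predictions the user should test, then generalize each user judgment to similar untested predictions. The short Python sketch below illustrates that general idea only; it is not the paper's WYSIWYT/ML algorithm, and every name in it (Prediction, prioritize, coverage, the 0.95 similarity threshold) is a hypothetical stand-in.

# A minimal sketch, NOT the WYSIWYT/ML algorithm from the paper.
# Idea: surface the assistant's least-confident predictions for user
# testing, then count untested predictions that closely resemble a
# tested one as "covered". All names here are hypothetical.

from dataclasses import dataclass
import math

@dataclass
class Prediction:
    features: list[float]   # toy feature vector for the classified item
    confidence: float       # assistant's confidence in its own label
    tested: bool = False    # has an end user judged this prediction?

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def prioritize(preds: list[Prediction]) -> list[Prediction]:
    """Surface the least-confident untested predictions first."""
    return sorted((p for p in preds if not p.tested),
                  key=lambda p: p.confidence)

def coverage(preds: list[Prediction], threshold: float = 0.95) -> int:
    """Count predictions that are tested, or very similar to a tested one."""
    tested = [p for p in preds if p.tested]
    return sum(
        1 for p in preds
        if p.tested or any(
            cosine_similarity(p.features, t.features) >= threshold
            for t in tested)
    )

# Toy usage: the user spends their effort on the 2 least-confident
# predictions, and similar untested predictions count as covered.
preds = [
    Prediction([1.0, 0.0], 0.55),
    Prediction([0.9, 0.1], 0.90),   # similar to the first item
    Prediction([0.0, 1.0], 0.60),
    Prediction([0.1, 0.9], 0.99),   # similar to the third item
]
for p in prioritize(preds)[:2]:
    p.tested = True                 # user marks the prediction right/wrong
print(coverage(preds), "of", len(preds), "predictions covered")  # 4 of 4

Under these toy inputs, two user judgments cover all four predictions, mirroring in miniature how 10 minutes of testing effort could scale to a much larger set of predictions.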

Citation (APA)

Kulesza, T., Burnett, M., Stumpf, S., Wong, W. K., Das, S., Groce, A., … McIntosh, K. (2011). Where are my intelligent assistant’s mistakes? A systematic testing approach. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6654 LNCS, pp. 171–186). https://doi.org/10.1007/978-3-642-21530-8_14
