The plausibility problem: An initial analysis


Abstract

Many interactive systems in everyday use carry out roles that are also performed, or have previously been performed, by human beings. Our expectations of how such systems will and, more importantly, should behave are tempered both by our experience of how humans normally perform in those roles and by our experience and beliefs about what it is possible and reasonable for machines to do. An important factor underpinning the acceptability of such systems is therefore the plausibility with which the role they are performing is viewed by their users. We identify three kinds of potential plausibility issue, depending on whether (i) the system is seen by its users to be a machine acting in its own right, (ii) the machine is seen to be a proxy, either acting on behalf of a human or providing a channel of communication to a human, or (iii) the status of the machine is unclear between the first two cases.

Citation (APA)

Du Boulay, B., & Luckin, R. (2001). The plausibility problem: An initial analysis. In Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science) (Vol. 2117, pp. 289–300). Springer-Verlag. https://doi.org/10.1007/3-540-44617-6_28
