Many interactive systems in everyday use carry out roles that are also performed - or have previously been performed - by human beings. Our expectations of how such systems will, and more importantly should, behave are tempered both by our experience of how humans normally perform in those roles and by our experience and beliefs about what it is possible and reasonable for machines to do. An important factor underpinning the acceptability of such systems is therefore how plausible the role they are performing appears to their users. We identify three kinds of potential plausibility issue, depending on whether (i) the system is seen by its users to be a machine acting in its own right, (ii) the machine is seen to be a proxy, either acting on behalf of a human or providing a channel of communication to a human, or (iii) the status of the machine is unclear between the first two cases.
Du Boulay, B., & Luckin, R. (2001). The plausibility problem: An initial analysis. In Lecture Notes in Artificial Intelligence (Vol. 2117, pp. 289–300). Springer. https://doi.org/10.1007/3-540-44617-6_28