In this paper, I identify two problems of trust in an AI-relevant context: a theoretical problem and a practical one. I identify and address a number of skeptical challenges to an AI-relevant theory of trust. In addition, I identify what I shall term the ‘scope challenge’, which I take to hold for any AI-relevant theory (or collection of theories) of trust that purports to be representationally adequate to the multifarious forms of trust and AI. Thereafter, I suggest how trust-engineering, a position intermediate between the modified pure rational-choice account and an account that gives rise to trustworthy AI, might allow us to address the practical problem of trust, before identifying and critically evaluating two candidate trust-engineering approaches.
Chen, M. (2021). Trust and Trust-Engineering in Artificial Intelligence Research: Theory and Praxis. Philosophy and Technology, 34(4), 1429–1447. https://doi.org/10.1007/s13347-021-00465-4