Designing Trust in Artificial Intelligence: A Comparative Study Among Specifications, Principles and Levels of Control


Abstract

This paper presents a comparative study of the three main frameworks acknowledged for designing trust in AI: specifications, principles, and levels of control. We also examine trust design in four case studies in the area of health and wellbeing, each developed to address the rising concerns surrounding Highly Automated Systems (HAS). Based on the results, levels of control emerge as the most reliable option for designing trust in Highly Automated Systems, as they provide a more structured focus than specifications and principles. However, principles support the philosophical inquiry needed to frame the intended outcome, and specifications provide a constructive space for product development. In this context, the authors recommend integrating all three frameworks into a multi-dimensional, cross-disciplinary framework to build and extend robustness throughout the entire interactive lifecycle of future applications.

Citation (APA)

Galdon, F., Hall, A., & Ferrarello, L. (2020). Designing Trust in Artificial Intelligence: A Comparative Study Among Specifications, Principles and Levels of Control. In Advances in Intelligent Systems and Computing (Vol. 1152 AISC, pp. 97–102). Springer. https://doi.org/10.1007/978-3-030-44267-5_14
