Trust of Learning Systems: Considerations for Code, Algorithms, and Affordances for Learning

  • Lyons J
  • Ho N
  • Friedman J
  • Alarcon G
  • Guznov S

Abstract

This chapter provides a synthesis of the literature on Machine Learning (ML), trust in automation, trust in code, and transparency. It introduces the concept of ML and discusses three drivers of trust in ML-based systems: code structure; algorithm factors, namely performance, transparency, and error management; and affordances for learning. Code structure offers a static affordance for trustworthiness evaluations that can be both deep and peripheral. The overall performance of the algorithms, together with the transparency of their inputs, process, and outputs, provides an opportunity for dynamic, experiential trustworthiness evaluations. Predictability and understanding are the foundations of trust and must be considered in ML applications. Many ML paradigms neglect the notion of environmental affordances for learning, which, from a trust perspective, may in fact be the most important differentiator between ML systems and traditional automation. These learning affordances provide a contextualised pedigree for trust considerations. In combination, the trustworthiness aspects of the code, the dynamic performance and transparency of the algorithms, and the affordances for learning offer structural information, evidence of performance and understanding, and pedigree information from which ML approaches can be evaluated.

Citation (APA)

Lyons, J., Ho, N., Friedman, J., Alarcon, G., & Guznov, S. (2018). Trust of Learning Systems: Considerations for Code, Algorithms, and Affordances for Learning (pp. 265–278). https://doi.org/10.1007/978-3-319-90403-0_13
