Rendering unto Cæsar the things that are Cæsar's: Complex trust models and human understanding

Abstract

In this position paper we examine aspects of trust models: their deployment, use, and 'misuse', and present a manifesto for the application of computational trust in sociotechnical systems. Computational trust formalizes human trust processes so that artificial systems can make better decisions or give better advice. Since its introduction in the early 1990s, it has gained in popularity because trust is flexible, readily understood, and relatively robust. What it has often lost, however, is understandability. We argue that one of the original purposes of computational trust reasoning was the human element: the involvement of humans in the decision-making process of tools, importantly at the basic level of understanding why the tools made the decisions they did. The proliferation of ever more complex models may increase the robustness of trust management in the face of attack, but it does little to help mere humans either understand or, if necessary, intervene when the trust models fail or cannot arrive at a sensible decision. © 2012 IFIP International Federation for Information Processing.

Citation (APA)

Marsh, S., Basu, A., & Dwyer, N. (2012). Rendering unto Cæsar the things that are Cæsar’s: Complex trust models and human understanding. In IFIP Advances in Information and Communication Technology (Vol. 374 AICT, pp. 191–200). Springer New York LLC. https://doi.org/10.1007/978-3-642-29852-3_13
