A vector model of trust for developing trustworthy systems

Abstract

All security services rely to a great extent on some notion of trust. Even today, however, there is no accepted formalism or technique for specifying trust and reasoning about it. Secure systems have been developed under the premise that concepts like "trusted" and "trustworthy" are well understood, unfortunately without agreement on what "trust" means, how to measure it, how to compare two trust values, or how to combine them. In this work we propose a new vector model of trust. Our model introduces the notion of different degrees of trust, differentiates between trust and distrust, and formalizes the dependence of trust on time. We believe this model will help answer some of the questions posed above. © Springer-Verlag Berlin Heidelberg 2004.
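The abstract's three ideas (graded trust, distrust as distinct from absence of trust, and time dependence) can be illustrated with a minimal sketch. Everything below is a hypothetical illustration, not the paper's formalism: the component names (`experience`, `knowledge`, `recommendation`), the weights, and the exponential decay toward neutrality are all assumptions made for the example. Trust values are taken in [-1, 1], with negative values denoting distrust, 0 neutrality, and 1 complete trust.

```python
import math
from dataclasses import dataclass


@dataclass
class TrustVector:
    """Hypothetical trust vector. Each component lies in [-1, 1]:
    negative = distrust, 0 = no opinion, positive = trust."""
    experience: float      # trust earned from direct interactions (assumed)
    knowledge: float       # trust from what is known about the entity (assumed)
    recommendation: float  # trust relayed by third parties (assumed)

    def value(self, weights=(0.5, 0.3, 0.2)) -> float:
        """Collapse the vector to a scalar degree of trust via a
        weighted combination (weights are illustrative, sum to 1)."""
        comps = (self.experience, self.knowledge, self.recommendation)
        return sum(w * c for w, c in zip(weights, comps))


def decayed(trust: float, elapsed: float, half_life: float = 30.0) -> float:
    """Model time dependence: trust (or distrust) decays exponentially
    toward neutrality (0) unless refreshed by new evidence."""
    return trust * math.exp(-math.log(2) * elapsed / half_life)


# Example: mixed evidence, including a mildly negative recommendation.
tv = TrustVector(experience=0.8, knowledge=0.6, recommendation=-0.2)
degree = tv.value()            # 0.5*0.8 + 0.3*0.6 + 0.2*(-0.2) = 0.54
stale = decayed(degree, 30.0)  # one half-life later: 0.27
```

The point of the vector form is that a single scalar cannot say *why* an entity is trusted; keeping the components separate lets two trust values be compared and combined per component before being collapsed.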

Citation (APA)

Ray, I., & Chakraborty, S. (2004). A vector model of trust for developing trustworthy systems. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 3193, 260–275. https://doi.org/10.1007/978-3-540-30108-0_16
