Reasoning about trust and belief change on a social network: A formal approach

Abstract

One important aspect of trust is the following: when a trusted source reports some new information, then we are likely to believe that the new information is true. As such, the notion of trust is closely connected to the notion of belief change. In this paper, we demonstrate how a formal model of trust developed in the Artificial Intelligence community can be used to model the dynamics of belief on a social network. We use a formal model to capture the perceived areas of expertise of each agent, and we introduce a logical operator to determine how beliefs change following reported information. Significantly, the trust held in another agent is not determined solely by individual expertise; the extent to which an agent is trusted is also influenced by social relationships between agents. We prove a number of formal properties, and demonstrate that our approach can model a wide range of practical trust problems involving social agents. This work is largely foundational, and it connects two different research communities. In particular, this work illustrates how logic-based models of reasoning can be applied to solve problems related to trust on social networks.
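The abstract describes two ingredients that together determine whether a reported statement is adopted: the source's perceived expertise in the topic, and the social relationship between the two agents. The paper's actual operator is a logical belief-change operator; the sketch below is only an illustrative toy model of that interplay, in which trust is a numeric score combining an (assumed) expertise factor with social distance on the network, and a belief is adopted only when trust clears a threshold. All names, the trust formula, and the threshold are assumptions for illustration, not the paper's definitions.

```python
# Toy illustration (not the paper's formal operator): a reported belief is
# adopted only when trust in the source is high enough, where trust combines
# perceived topic expertise with social distance on the network.
from collections import deque

class Agent:
    def __init__(self, name, expertise):
        self.name = name
        self.expertise = set(expertise)   # perceived areas of expertise
        self.beliefs = {}                 # proposition -> truth value

def social_distance(edges, start, goal):
    """Shortest-path length between two agents on the (undirected) network."""
    if start == goal:
        return 0
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, d = frontier.popleft()
        for a, b in edges:
            nxt = b if a == node else a if b == node else None
            if nxt is not None and nxt not in seen:
                if nxt == goal:
                    return d + 1
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return float("inf")

def trust(receiver, source, topic, edges):
    """Assumed trust score: expertise factor discounted by social distance."""
    expert = 1.0 if topic in source.expertise else 0.3
    dist = social_distance(edges, receiver.name, source.name)
    return expert / (1 + dist)

def report(receiver, source, topic, proposition, value, edges, threshold=0.4):
    """Receiver adopts the reported belief only if trust clears the threshold."""
    if trust(receiver, source, topic, edges) >= threshold:
        receiver.beliefs[proposition] = value

# Example: bob is adjacent to alice, who is seen as a medical expert.
alice = Agent("alice", {"medicine"})
bob = Agent("bob", set())
edges = [("alice", "bob")]
report(bob, alice, "medicine", "drug_x_safe", True, edges)
report(bob, alice, "law", "contract_valid", True, edges)
print(bob.beliefs)  # only the medical report is adopted: {'drug_x_safe': True}
```

The point of the sketch is structural: the same source is believed on one topic and disbelieved on another, and moving the agents further apart on the network would lower trust on both topics, matching the abstract's claim that trust is not determined by expertise alone.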

Citation (APA)

Hunter, A. (2017). Reasoning about trust and belief change on a social network: A formal approach. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10701 LNCS, pp. 783–801). Springer Verlag. https://doi.org/10.1007/978-3-319-72359-4_49
