On aggregating probabilistic evidence

Abstract

Imagine a database – a set of propositions Γ = {F1, …, Fn} with some kind of probability estimates – and let a proposition X logically follow from Γ. What is the best justified lower bound on the probability of X? The traditional approach, e.g., within Adams’ Probability Logic, computes the numeric lower bound for X corresponding to the worst-case scenario. We suggest a more flexible parameterized approach: assume probability events u1, u2, …, un which support Γ and calculate the aggregated evidence e(u1, u2, …, un) for X. The probability of e provides a tight lower bound for any situation, not only the worst case. The problem is formalized in a version of justification logic, and the conclusions are supported by corresponding completeness theorems. This approach can handle conflicting and inconsistent data and allows gathering both positive and negative evidence for the same proposition.
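To make the contrast concrete, here is a minimal sketch (not taken from the paper) on a hypothetical finite sample space. It compares the worst-case, Adams-style lower bound 1 − Σ(1 − P(Fi)) with the probability of the aggregated evidence, which, as a simplification of the paper's justification-logic construction, is taken here to be the plain intersection u1 ∩ … ∩ un of the supporting events; the sample space and the events below are illustrative assumptions.

    from fractions import Fraction

    # Hypothetical finite sample space with uniform outcomes (illustration only).
    omega = set(range(10))

    def prob(event):
        """Probability of an event (a subset of omega) under the uniform measure."""
        return Fraction(len(event), len(omega))

    # Supporting events u1, u2 for two premises F1, F2 (chosen arbitrarily here);
    # each has probability 0.8.
    u1 = set(range(0, 8))   # {0, ..., 7}
    u2 = set(range(1, 9))   # {1, ..., 8}

    # Worst-case (Adams-style) lower bound for a consequence X of {F1, F2}:
    # P(X) >= 1 - sum of the premises' uncertainties.
    worst_case = 1 - sum(1 - prob(u) for u in (u1, u2))

    # Aggregated evidence, here simplified to the intersection of the supporting
    # events: whenever both u1 and u2 occur, both premises hold, hence so does X,
    # so P(X) >= P(u1 & u2).
    aggregated = u1 & u2

    print(worst_case)        # 3/5
    print(prob(aggregated))  # 7/10 -- tighter than the worst-case bound

Since the two events overlap on seven of the ten outcomes, the event-based bound (0.7) is strictly better than the worst-case numeric bound (0.6); the two coincide only when the supporting events are arranged as unfavorably as possible.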

Citation (APA)

Artemov, S. (2016). On aggregating probabilistic evidence. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9537, pp. 27–42). Springer Verlag. https://doi.org/10.1007/978-3-319-27683-0_3
