When collaborating with an artificial intelligence (AI) system, we need to assess when to trust its recommendations. If we mistakenly trust it in regions where it is likely to err, catastrophic failures may occur; hence the need for Bayesian approaches to reasoning and learning that determine the confidence (or epistemic uncertainty) in the probabilities of the queried outcome. Pure Bayesian methods, however, suffer from high computational costs. To overcome them, we resort to efficient and effective approximations. In this paper, we focus on techniques that take the name of evidential reasoning and learning from the process of Bayesian updating of given hypotheses in light of additional evidence. It provides the reader with a gentle introduction to the area of investigation, the up-to-date research outcomes, and the open questions still left unanswered.
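The abstract does not spell out the update rule itself; as a rough illustration of the kind of approximation surveyed, the sketch below assumes a subjective-logic-style Beta update, in which positive and negative evidence counts are fused with a non-informative prior to yield both a projected probability and an epistemic uncertainty. The function name, variable names, and the prior weight of 2 are illustrative assumptions, not the paper's own code.

```python
# Minimal sketch (assumed, not from the paper): a Beta-distribution view of
# evidential updating, in the spirit of subjective-logic binomial opinions.

def evidential_update(pos_evidence: float, neg_evidence: float,
                      prior_weight: float = 2.0):
    """Fuse observed evidence with a non-informative prior.

    Returns the projected probability of the positive outcome and the
    epistemic uncertainty, which shrinks as evidence accumulates.
    """
    total = pos_evidence + neg_evidence + prior_weight
    # Projected probability: evidence counts plus an even split of the prior mass.
    p_positive = (pos_evidence + prior_weight / 2.0) / total
    # Epistemic uncertainty: the share of mass still held by the prior.
    uncertainty = prior_weight / total
    return p_positive, uncertainty

# With little evidence the probability stays near 0.5 and uncertainty is high;
# with much more evidence the probability sharpens and uncertainty vanishes.
print(evidential_update(2, 1))      # ~(0.60, 0.40): low confidence
print(evidential_update(200, 100))  # ~(0.666, 0.007): high confidence
```

The same evidence ratio can thus yield very different confidence levels, which is the distinction the abstract draws between a probability and the epistemic uncertainty attached to it.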
Cerutti, F., Kaplan, L. M., & Şensoy, M. (2022). Evidential Reasoning and Learning: a Survey. In IJCAI International Joint Conference on Artificial Intelligence (pp. 5418–5425). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2022/760