Content determination for natural language descriptions of predictive Bayesian networks


Abstract

The dramatic success of Artificial Intelligence and its applications has been accompanied by increasing complexity, which makes these systems harder for end users to understand and undermines their trust. In this context, Explainable AI has emerged with the aim of making the decisions of intelligent systems more transparent and understandable to human users. In this paper, we propose a framework for explaining predictive inference in Bayesian networks (BNs) in natural language to non-specialized users. The model represents the information embedded in the BN by means of (fuzzy) quantified statements and reasons with a fuzzy syllogism. The framework shows how this approach can be used in the content determination stage of Natural Language Generation explanation systems for BNs. Through a number of realistic usage scenarios, we show how the generated explanations allow the user to trace the inference steps of the approximate reasoning process in predictive BNs.
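To illustrate the kind of content determination the abstract describes, the following is a minimal sketch of mapping a BN conditional probability to a fuzzy quantifier label and rendering it as a quantified statement. The quantifier names and the triangular membership functions below are illustrative assumptions for this sketch, not the paper's actual definitions.

```python
# Hedged sketch: turn P(predicate | subject) = p into a fuzzy
# quantified sentence. Quantifier labels and membership shapes
# are assumptions made for illustration only.

def tri(x, a, b, c):
    """Triangular membership function over [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Illustrative fuzzy quantifiers over the probability scale [0, 1].
QUANTIFIERS = {
    "almost none": lambda p: tri(p, -0.01, 0.0, 0.2),
    "few":         lambda p: tri(p, 0.0, 0.2, 0.45),
    "some":        lambda p: tri(p, 0.25, 0.5, 0.75),
    "most":        lambda p: tri(p, 0.55, 0.8, 1.0),
    "almost all":  lambda p: tri(p, 0.8, 1.0, 1.01),
}

def quantify(p):
    """Return the quantifier label with the highest membership for p."""
    return max(QUANTIFIERS, key=lambda q: QUANTIFIERS[q](p))

def quantified_statement(p, subject, predicate):
    """Render a conditional probability as a quantified sentence."""
    return f"{quantify(p).capitalize()} {subject} {predicate}."

print(quantified_statement(0.85, "patients with flu", "have fever"))
# -> Most patients with flu have fever.
```

A content determination module along these lines would select such statements for each edge traversed during predictive inference, so the user can follow the chain of (approximate) reasoning step by step.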

Citation (APA)
Pereira-Fariña, M., & Bugarín, A. (2020). Content determination for natural language descriptions of predictive Bayesian networks. In Proceedings of the 11th Conference of the European Society for Fuzzy Logic and Technology, EUSFLAT 2019 (pp. 784–791). Atlantis Press. https://doi.org/10.2991/eusflat-19.2019.107
