Towards dependable and explainable machine learning using automated reasoning


Abstract

Two important aspects of Artificial Intelligence are the ability to learn from past experience and improve in the future, and the ability to reason about the context of a problem and extrapolate from what is known. In this paper, we introduce a novel automated-reasoning-based approach that extracts valuable insights from classification and prediction models obtained via machine learning. A major benefit of the proposed approach is that users can understand the reasoning behind a model's decisions, which is often as important as good performance. Our technique can also be used to enforce user-specified requirements on the model and to improve its classifications and predictions.
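The abstract's two ideas, explaining a model's decisions and checking user-specified requirements against it, can be illustrated with a toy sketch. This is a hypothetical illustration, not the authors' implementation: it uses a small hand-written decision tree and exhaustive enumeration over a finite input domain as a stand-in for the solver-based automated reasoning the paper describes. The tree, the requirement, and all names below are assumptions made for the example.

```python
# Hypothetical sketch: explain a decision tree's predictions via its
# decision path, and exhaustively verify a user-specified requirement
# over a finite domain (a toy stand-in for automated reasoning).
from itertools import product

# A tiny hand-written "learned" tree over two integer features.
# Each node is ("leaf", label) or (feature_index, threshold, left, right).
TREE = (0, 5,
        ("leaf", "reject"),
        (1, 3, ("leaf", "reject"), ("leaf", "accept")))

def predict(tree, x, path=None):
    """Return (label, explanation); the decision path is the 'reason'."""
    if path is None:
        path = []
    if tree[0] == "leaf":
        return tree[1], path
    f, t, left, right = tree
    if x[f] <= t:
        return predict(left, x, path + [f"x[{f}] <= {t}"])
    return predict(right, x, path + [f"x[{f}] > {t}"])

def check_requirement(tree, domain, requirement):
    """Exhaustively check a requirement; return a counterexample or None."""
    for x in product(*domain):
        label, _ = predict(tree, x)
        if not requirement(x, label):
            return x, label
    return None

# User-specified requirement: inputs with both features above 5
# must always be accepted.
req = lambda x, label: not (x[0] > 5 and x[1] > 5) or label == "accept"
cex = check_requirement(TREE, [range(0, 11), range(0, 11)], req)

label, why = predict(TREE, (7, 4))
```

Here `cex` is `None` (the toy model satisfies the requirement on this domain), and `why` is the human-readable chain of threshold tests that justifies the prediction for input `(7, 4)`. A real system along the paper's lines would replace the exhaustive loop with a symbolic encoding handed to an automated reasoner, so the check covers unbounded domains.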

Citation (APA)

Bride, H., Dong, J., Dong, J. S., & Hóu, Z. (2018). Towards dependable and explainable machine learning using automated reasoning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11232 LNCS, pp. 412–416). Springer Verlag. https://doi.org/10.1007/978-3-030-02450-5_25
