On interpretability of fuzzy models

Abstract

Interpretability is one of the indispensable features of fuzzy models. This paper discusses the interpretability of fuzzy models with and without prior knowledge about the target system. Without prior knowledge, conciseness of a fuzzy model helps humans interpret its input-output relationships. When a human already has such knowledge, an interpretable model may instead be one that explicitly reflects that knowledge. Experimental results show that the concise model possesses the essential features of interpretability, and that a human's prior knowledge shifts the most interpretable model away from the most concise one.
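To illustrate the kind of conciseness the abstract refers to, the following is a minimal sketch (not from the paper) of a concise fuzzy model: two rules with triangular membership functions and zero-order Takagi-Sugeno inference. All names, rule consequents, and parameter values are illustrative assumptions.

```python
# A minimal sketch of a concise fuzzy model with two rules.
# The rule base is small enough that its input-output behaviour
# can be read directly from the rules themselves.

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def concise_fuzzy_model(temperature):
    """IF temperature is LOW  THEN fan_speed = 20
       IF temperature is HIGH THEN fan_speed = 80"""
    low = tri(temperature, 0.0, 10.0, 25.0)    # membership in LOW
    high = tri(temperature, 15.0, 30.0, 40.0)  # membership in HIGH
    weights = [low, high]
    outputs = [20.0, 80.0]                     # rule consequents (fan speed, %)
    total = sum(weights)
    if total == 0.0:
        return sum(outputs) / len(outputs)     # fall back when no rule fires
    # Weighted average of rule consequents (defuzzification)
    return sum(w * y for w, y in zip(weights, outputs)) / total

if __name__ == "__main__":
    for t in (5.0, 20.0, 35.0):
        print(f"temperature={t:4.1f} -> fan_speed={concise_fuzzy_model(t):5.1f}")
```

With only two rules and two linguistic terms, the mapping from input to output is transparent; adding many overlapping rules would improve fitting accuracy but obscure exactly this kind of readability.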

Citation (APA)

Furuhashi, T. (2002). On interpretability of fuzzy models. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2275, pp. 12–19). Springer Verlag. https://doi.org/10.1007/3-540-45631-7_2
