Explanations in knowledge systems: Design for Explainable Expert Systems


Abstract

This article discusses the Explainable Expert Systems (EES) framework, which focuses on capturing the design aspects that are important for producing good explanations, including justifications of the system's actions, explications of general problem-solving strategies, and descriptions of the system's terminology. EES was developed as part of the Strategic Computing Initiative of the US Department of Defense's Defense Advanced Research Projects Agency (DARPA). EES can represent both the general principles from which a system was derived and how the system was derived from those principles. The article presents the Program Enhancement Advisor (PEA), the main prototype on which the explanation work has been developed and tested. PEA is an advice system that helps users improve their Common Lisp programs by recommending transformations that enhance the user's code. The article shows how EES produces better explanations.
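To make the abstract's description concrete, the following is a minimal, hypothetical sketch (in Python, for illustration only; it is not the actual PEA implementation) of a PEA-style advisor: each transformation rule pairs a code pattern with a rewrite and a justification, so the system can explain *why* it recommends a change, which is the kind of design knowledge EES aims to capture. The rule, names, and regex-based matching here are assumptions, not taken from the paper.

```python
# Hypothetical PEA-style rule engine (illustrative only).
# Each rule carries a justification so recommendations can be explained.
import re
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    pattern: str        # regex over Lisp source text (a crude stand-in for real parsing)
    rewrite: str
    justification: str  # the design principle behind the recommendation

RULES = [
    Rule(
        name="if-to-when",
        pattern=r"\(if\s+(\([^()]*\))\s+(\([^()]*\))\s+nil\)",
        rewrite=r"(when \1 \2)",
        justification="A one-branch IF that returns NIL reads more clearly as WHEN "
                      "(illustrative principle: prefer the most specific conditional).",
    ),
]

def advise(source: str):
    """Return (enhanced_source, explanations) for a Lisp snippet."""
    explanations = []
    for rule in RULES:
        if re.search(rule.pattern, source):
            source = re.sub(rule.pattern, rule.rewrite, source)
            explanations.append(f"{rule.name}: {rule.justification}")
    return source, explanations

code, why = advise("(if (null xs) (print 'empty) nil)")
# code is now "(when (null xs) (print 'empty))", and why holds the justification
```

The point of the sketch is the pairing of each transformation with an explicit justification, mirroring the abstract's emphasis on systems that can justify their actions rather than merely perform them.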

Citation (APA)

Swartout, W., Paris, C., & Moore, J. (1991). Explanations in knowledge systems: Design for Explainable Expert Systems. IEEE Expert, 6(3), 58–64. https://doi.org/10.1109/64.87686
