Using explanations to improve the validation of executable models

Abstract

Executable modeling languages are now found in several commercial and experimental CASE environments. The ability to execute conceptual models has been considered vital for the validation of their dynamic properties. However, during model execution only observable aspects can be validated, and it can be very hard to check whether the internal computations are modeled correctly. Traces provide some of the information necessary to understand the model's behavior in depth, but they are often just as difficult to read as the conceptual models themselves. In this paper we show how explanation generation technology can be combined with the execution of conceptual models. We present a component that can explain the internal reasoning of the model as well as its user interaction, and we indicate how the component is being implemented in the PPP CASE environment. Using this integrated validation approach, we are able to expose both the external and internal reasoning of executable conceptual models.
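To make the general idea concrete, the following is a minimal, hypothetical sketch in Python (not the authors' PPP component): an executable model whose transitions carry a short rationale, so that each internal rule firing is recorded as a readable explanation rather than only as an opaque trace entry. All names (`Transition`, `ExplainedModel`, `ship_order`) are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch: an executable model whose transitions record
# natural-language explanations as they fire, so internal reasoning can be
# inspected alongside the observable behavior.

from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Transition:
    name: str
    guard: Callable[[Dict[str, int]], bool]    # when the rule may fire
    effect: Callable[[Dict[str, int]], None]   # how it changes the state
    rationale: str                              # explanation template


@dataclass
class ExplainedModel:
    state: Dict[str, int]
    transitions: List[Transition]
    trace: List[str] = field(default_factory=list)

    def step(self) -> bool:
        """Fire the first enabled transition and record why it fired."""
        for t in self.transitions:
            if t.guard(self.state):
                before = dict(self.state)
                t.effect(self.state)
                self.trace.append(
                    f"{t.name}: {t.rationale} (state {before} -> {self.state})"
                )
                return True
        return False

    def explain(self) -> str:
        """Return the recorded internal reasoning behind the observed run."""
        return "\n".join(self.trace) or "No transitions have fired yet."


if __name__ == "__main__":
    model = ExplainedModel(
        state={"orders": 2, "shipped": 0},
        transitions=[
            Transition(
                name="ship_order",
                guard=lambda s: s["orders"] > 0,
                effect=lambda s: s.update(orders=s["orders"] - 1,
                                          shipped=s["shipped"] + 1),
                rationale="an order was pending, so it was shipped",
            )
        ],
    )
    while model.step():
        pass
    print(model.explain())
```

Running the sketch executes the model to completion and prints one explanation line per internal rule firing, which is the kind of output an analyst could read during validation instead of a raw trace.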

Cite

APA

Gulla, J. A., & Willumsen, G. (1993). Using explanations to improve the validation of executable models. In Lecture Notes in Computer Science (Vol. 685, pp. 118–142). Springer-Verlag. https://doi.org/10.1007/3-540-56777-1_7
