Testing metamodels

Abstract

In this paper, we deal with errors in metamodels. Metamodels define the abstract syntax of modeling languages and play a central role in the Model-Driven Architecture: other artifacts, such as models or tools, are based on them and have to be changed whenever the metamodel changes. Consequently, correcting errors in a metamodel can be quite expensive, as dependent artifacts have to be adapted to the corrected metamodel. We argue that metamodels should be tested systematically with automated tests, and we present a corresponding approach that enables automated metamodel testing based on a test specification. From a test specification, multiple test models can be derived, each defining a potential instance of the metamodel under test. A positive test model defines a potential instance that should be an actual instance of the metamodel; a negative test model defines one that should not. We exemplify our approach with a metamodel for defining a company's structure. Finally, we present MMUnit, an implementation of our approach that builds on the Eclipse platform and integrates the JUnit framework. MMUnit allows testing EMF-based metamodels, which can contain additional constraints, e.g. constraints expressed in OCL.
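To illustrate the idea of positive and negative test models, the following is a minimal JUnit sketch of the kind of check such a test could perform, using EMF's standard validation API. This is not MMUnit's actual interface; `CompanyFactory`, `Department`, and `Employee` are assumed generated classes of the example company-structure metamodel, which is assumed here to carry an OCL invariant requiring each department to have at least one employee.

```java
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import org.eclipse.emf.common.util.Diagnostic;
import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.util.Diagnostician;
import org.junit.Test;

// Hypothetical tests against a "company structure" metamodel.
// CompanyFactory, Department, and Employee are assumed EMF-generated
// classes; the metamodel is assumed to define an OCL invariant that
// every department must have at least one employee.
public class CompanyMetamodelTest {

    /** Positive test model: should be an actual instance of the metamodel. */
    @Test
    public void departmentWithEmployeeIsValid() {
        Department dept = CompanyFactory.eINSTANCE.createDepartment();
        Employee emp = CompanyFactory.eINSTANCE.createEmployee();
        dept.getEmployees().add(emp);

        assertEquals(Diagnostic.OK, validate(dept).getSeverity());
    }

    /** Negative test model: should violate the metamodel's constraints. */
    @Test
    public void emptyDepartmentIsInvalid() {
        Department dept = CompanyFactory.eINSTANCE.createDepartment();

        assertTrue(validate(dept).getSeverity() >= Diagnostic.ERROR);
    }

    // Runs EMF's standard validation, which also evaluates registered
    // constraints such as OCL invariants.
    private Diagnostic validate(EObject model) {
        return Diagnostician.INSTANCE.validate(model);
    }
}
```

In the approach described above, such positive and negative test models would not be written by hand but derived from a single test specification.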

Citation (APA)

Sadilek, D. A., & Weißleder, S. (2008). Testing metamodels. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5095 LNCS, pp. 294–309). Springer Verlag. https://doi.org/10.1007/978-3-540-69100-6_20