Extensible and automated model-evaluations with INProVE

4 citations · 17 Mendeley readers

Abstract

Model-based development is becoming increasingly important for the creation of software-intensive embedded systems. One important aspect of software models is model quality. This does not refer to functional correctness, but to non-functional properties such as maintainability, scalability, and extensibility. Considerable effort has gone into developing metrics for control-flow models. In the embedded systems domain, however, domain-specific and data-flow languages are commonly used for model creation, and existing metrics are not applicable to them. Domain- and project-specific quality metrics are therefore defined informally, and tracking conformance to them is a manual, effort-consuming task. To resolve this situation, we developed INProVE, a model-based framework that supports the definition of quality metrics in an intuitive yet formal notation and provides automated evaluation of design models through its indicators. Applied to complex models in different industry projects, INProVE has proven its applicability for the quality assessment of data-flow-oriented design models not only in research but also in practice. © 2011 Springer-Verlag.
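The abstract does not show INProVE's actual metric notation or API, but the idea of indicator-based evaluation can be illustrated with a minimal Python sketch. Everything below (Block, Indicator, the fan-out check and its threshold) is a hypothetical stand-in for how such a quality indicator might be defined and evaluated over a data-flow model; it is not INProVE's implementation.

    # Illustrative sketch only; names and structures are hypothetical,
    # not INProVE's actual API.
    from dataclasses import dataclass, field

    @dataclass
    class Block:
        """A block in a data-flow model and its outgoing connections."""
        name: str
        outputs: list = field(default_factory=list)  # downstream block names

    @dataclass
    class Indicator:
        """A quality indicator: a named check with a threshold."""
        name: str
        threshold: int

        def violations(self, blocks):
            # Flag blocks whose fan-out exceeds the threshold -- a simple
            # maintainability heuristic for data-flow models.
            return [b.name for b in blocks if len(b.outputs) > self.threshold]

    model = [
        Block("SensorInput", ["Filter", "Logger"]),
        Block("Filter", ["Controller"]),
        Block("Controller", ["ActuatorA", "ActuatorB", "Logger", "Monitor"]),
    ]

    fan_out = Indicator(name="MaxFanOut", threshold=3)
    for offender in fan_out.violations(model):
        print(f"{fan_out.name} violated by block: {offender}")

Running this sketch reports the "Controller" block, whose fan-out of four exceeds the threshold; in the paper's setting, such checks are defined formally once and then evaluated automatically instead of being tracked by hand.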

Citation (APA)

Kemmann, S., Kuhn, T., & Trapp, M. (2011). Extensible and automated model-evaluations with INProVE. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6598 LNCS, pp. 193–208). https://doi.org/10.1007/978-3-642-21652-7_12
