In the physical world, the rules governing behaviour are debugged by observing an unintended outcome and adding new constraints to prevent it. We propose a similar approach to support the incremental development of normative frameworks (also called institutions) and demonstrate how it works through the validation and synthesis of normative rules using model generation and inductive learning. The designer provides a set of use cases: collections of event traces that describe how the system is used, together with the desired outcome with respect to the normative framework. The model generator encodes the current behaviour of the system. The current specification, along with the traces for which current and expected behaviour disagree, is given to the learning framework, which proposes new rules that revise the existing norm set so as to inhibit the unwanted behaviour. The elaboration of a normative system can then be viewed as a semi-automatic, iterative process: detecting incompleteness or incorrectness of the existing normative rules with respect to desired properties, and constructing potential additional rules for the normative system. © 2011 Springer-Verlag.
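The detect-and-revise loop described in the abstract can be sketched as follows. This is a minimal illustration only: the function and type names (`evaluate`, `refine`, the toy learner) are assumptions for exposition, not the authors' machinery, which relies on model generation and inductive learning over a formal normative specification.

```python
# Hypothetical sketch of the norm-refinement loop: evaluate traces against
# the current norm set, collect mismatches with the designer's expected
# outcomes, and ask a learner for revision rules. All names are illustrative.

def evaluate(norms, trace):
    """Outcome the current norm set assigns to a trace.
    Here a norm is a predicate over the event trace; a trace is a
    'violation' if any norm flags it."""
    return "violation" if any(rule(trace) for rule in norms) else "ok"

def refine(norms, use_cases, learn_revision):
    """One iteration: find traces whose computed outcome disagrees with
    the expected one, then ask the learner for new rules covering them."""
    mismatches = [(trace, expected) for trace, expected in use_cases
                  if evaluate(norms, trace) != expected]
    if not mismatches:
        return norms, []          # norm set already matches all use cases
    return norms + learn_revision(norms, mismatches), mismatches

# Toy example: the empty norm set accepts every trace, but the designer
# expects traces containing "overdraw" to be violations.
use_cases = [
    (("deposit", "withdraw"), "ok"),
    (("deposit", "overdraw"), "violation"),
]

def toy_learner(norms, mismatches):
    # Stand-in for inductive learning: propose a rule covering the mismatches.
    return [lambda trace: "overdraw" in trace]

norms, found = refine([], use_cases, toy_learner)
print(len(found))                                 # 1 mismatching trace
print(evaluate(norms, ("deposit", "overdraw")))   # violation
```

Each pass through `refine` corresponds to one debugging step: the norm set grows only when a use case exposes a gap, mirroring the semi-automatic, iterative elaboration the paper proposes.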
Corapi, D., De Vos, M., Padget, J., Russo, A., & Satoh, K. (2011). Norm refinement and design through inductive learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6541 LNAI, pp. 77–94). https://doi.org/10.1007/978-3-642-21268-0_5