Engineering self-protection for autonomous systems

Abstract

Security violations occur in systems even when security design has been carried out or security tools are deployed. Social engineering attacks, vulnerabilities that cannot be captured in a relatively abstract design model (such as buffer overflows), and unclear security requirements are only some examples of such unpredictable or unexpected vulnerabilities. One aim of autonomous systems is to react to these unexpected events through the system itself. Consequently, this goal demands further research into how such behavior can be designed and adequately supported throughout the software development process. We present an approach to engineering self-protection rules for autonomous systems that is integrated into a model-driven software engineering process and provides concepts to formally verify that a given intrusion response model satisfies certain security requirements. © Springer-Verlag Berlin Heidelberg 2006.
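As a rough illustration of the kind of self-protection rule the abstract refers to, the sketch below models an intrusion response rule as an event-condition-action triple that the system evaluates against observed security events. This is an assumption for illustration only: the class and function names (Event, Rule, respond) and the rule format are not taken from the article and do not reflect its actual notation or verification machinery.

```python
# Illustrative sketch only: models a self-protection rule as a simple
# event-condition-action (ECA) triple, a common way to express intrusion
# response policies. Names and structure are hypothetical, not the paper's.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Event:
    """A runtime security event observed by the autonomous system."""
    kind: str        # e.g. "buffer_overflow", "auth_failure"
    source: str      # component or host that raised the event
    severity: int    # 1 (low) .. 5 (critical)


@dataclass
class Rule:
    """A self-protection rule: if the condition holds for an event,
    the response action is executed by the system itself."""
    name: str
    condition: Callable[[Event], bool]
    action: Callable[[Event], None]


def respond(rules: List[Rule], event: Event) -> None:
    """Apply every rule whose condition matches the incoming event."""
    for rule in rules:
        if rule.condition(event):
            rule.action(event)


# Example rule: isolate a component after a critical buffer-overflow report.
rules = [
    Rule(
        name="isolate-on-overflow",
        condition=lambda e: e.kind == "buffer_overflow" and e.severity >= 4,
        action=lambda e: print(f"isolating component {e.source}"),
    )
]

respond(rules, Event(kind="buffer_overflow", source="parser", severity=5))
```

In the article's setting such rules would be part of a design-level intrusion response model, so that their effect on the stated security requirements can be checked formally before deployment rather than only observed at runtime.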

Citation (APA)

Koch, M., & Pauls, K. (2006). Engineering self-protection for autonomous systems. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3922 LNCS, pp. 33–47). https://doi.org/10.1007/11693017_5
