Upper bounds for adversaries' utility in attack trees

Abstract

Attack trees model the decision-making process of an adversary who plans to attack a given system. Attack trees help to visualize possible attacks as Boolean combinations of atomic attacks and to compute attack-related parameters such as cost, success probability, and likelihood. The known methods of estimating adversaries' utility have high computational complexity and place many unnatural restrictions on adversaries' behavior. Hence, their estimates are unreliable: even if the computed utility is negative, there may still exist beneficial ways of attacking the system. To avoid such unnatural restrictions, we study fully adaptive adversaries that are allowed to try atomic attacks in arbitrary order, depending on the results of previous trials. At the same time, we want the algorithms to be efficient. To achieve both goals, we do not compute the exact utility of adversaries but only upper bounds on it. If adversaries' utility has a negative upper bound, it is safe to conclude that there are no beneficial ways of attacking the system, assuming that all reasonable atomic attacks are captured by the attack tree. © 2012 Springer-Verlag.
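As a rough, self-contained illustration of the kind of bound the abstract describes, the sketch below propagates a utility upper bound through an AND/OR attack tree under a failure-free-style relaxation: each atomic attack with per-trial cost c and success probability p is charged its retry-until-success expected cost c/p, OR nodes take the cheapest child, and AND nodes sum over their children. The tree shape, the prize value, and all names are illustrative assumptions, not the authors' exact model or algorithm.

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Leaf:
    cost: float  # cost of a single trial of the atomic attack (assumed value)
    prob: float  # success probability of a single trial, 0 < prob <= 1

@dataclass
class Node:
    kind: str                # "AND" or "OR"
    children: List["Tree"]

Tree = Union[Leaf, Node]

def expected_cost(t: Tree) -> float:
    """Expected cost of making the (sub)tree true in the relaxed model,
    where every atomic attack may be retried until it succeeds."""
    if isinstance(t, Leaf):
        return t.cost / t.prob                     # retry-until-success expectation
    costs = [expected_cost(c) for c in t.children]
    return min(costs) if t.kind == "OR" else sum(costs)

def utility_upper_bound(t: Tree, prize: float) -> float:
    """Prize for a successful root attack minus the relaxed expected cost."""
    return prize - expected_cost(t)

# Toy tree: the root attack succeeds if (a AND b) OR c.
tree = Node("OR", [
    Node("AND", [Leaf(cost=100, prob=0.5), Leaf(cost=50, prob=0.8)]),
    Leaf(cost=400, prob=0.6),
])

print(utility_upper_bound(tree, prize=200.0))      # -62.5
```

Under the assumed relaxation, a negative result indicates that no attack strategy is profitable in that model; the paper itself develops bounds with proven guarantees for fully adaptive adversaries, which this toy computation does not claim to reproduce.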

Citation (APA)

Buldas, A., & Stepanenko, R. (2012). Upper bounds for adversaries' utility in attack trees. In Lecture Notes in Computer Science (Vol. 7638, pp. 98–117). Springer. https://doi.org/10.1007/978-3-642-34266-0_6
