Evaluations of intrusion detection systems (IDS) require log datasets collected in realistic system environments. Existing testbeds therefore offer user simulations and attack scenarios that target specific use cases. However, preparing such testbeds requires domain knowledge and time-consuming work, and maintaining or modifying them for other use cases involves substantial manual effort and repeated execution of tasks. In this article, we therefore propose to generate testbeds for IDS evaluation using strategies from model-driven engineering. In particular, our approach models system infrastructure, simulated normal behavior, and attack scenarios as testbed-independent modules. A transformation engine then automatically generates arbitrary numbers of testbeds, each with a particular set of characteristics and capable of running in parallel. Our approach greatly improves the configurability and flexibility of testbeds and allows components to be reused across multiple scenarios. We use our proof-of-concept implementation to generate a labeled dataset for IDS evaluation that is published with this article.
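The core idea of composing testbed-independent modules through a transformation engine can be sketched as follows. This is a minimal illustration, not the authors' implementation: the module contents (hosts, behavior tasks, attack names) and the parameters `n_users_options` and `attack_options` are hypothetical placeholders.

```python
import itertools

# Hypothetical testbed-independent modules, represented as plain data.
INFRASTRUCTURE = {"hosts": ["web", "db", "mail"], "network": "10.0.0.0/24"}
NORMAL_BEHAVIOR = ["browse_intranet", "send_mail", "query_db"]
ATTACK_SCENARIOS = ["bruteforce_ssh", "sql_injection"]

def generate_testbeds(n_users_options, attack_options):
    """Sketch of a transformation engine: combine module parameters
    into an arbitrary number of independent testbed configurations."""
    testbeds = []
    for i, (n_users, attack) in enumerate(
            itertools.product(n_users_options, attack_options)):
        testbeds.append({
            "id": f"testbed-{i}",            # each testbed can run in parallel
            "infrastructure": INFRASTRUCTURE, # reused across all scenarios
            "behavior": NORMAL_BEHAVIOR,
            "n_simulated_users": n_users,
            "attack": attack,
        })
    return testbeds

testbeds = generate_testbeds([5, 20], ATTACK_SCENARIOS)
print(len(testbeds))  # 2 user counts x 2 attacks = 4 testbed configurations
```

Because infrastructure, behavior, and attack modules are defined once and combined mechanically, adding a new attack scenario or user profile extends every generated testbed without repeating manual setup work.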
Citation:
Landauer, M., Skopik, F., Wurzenberger, M., Hotwagner, W., & Rauber, A. (2021). Have it Your Way: Generating Customized Log Datasets with a Model-Driven Simulation Testbed. IEEE Transactions on Reliability, 70(1), 402–415. https://doi.org/10.1109/TR.2020.3031317