An intrusion detection system usually infers the status of an unknown behavior from a limited set of known ones via model generalization, but such generalization is never perfect. Most existing techniques apply it blindly, or calibrate it only against specific datasets, without considering the differences among application scenarios: signature-based systems use signatures generated in specific environments, and anomaly-based systems are usually evaluated on a single dataset. To make matters worse, several techniques have recently been introduced to exploit generalization that is either too stingy or too generous, rendering intrusion detection invalid; examples include mimicry attacks and automatic generation of signature variants. A critical task in intrusion detection is therefore to evaluate the effects of model generalization, and in this paper we address that task. We first divide model generalization into several levels, which we evaluate one by one to identify their significance for intrusion detection. Our experimental results show that the significance of the different levels varies considerably: under-generalization sacrifices detection performance, while over-generalization yields no benefit. Moreover, model generalization is necessary to identify more behaviors during detection, but its implications for normal behaviors differ from those for intrusive ones. © 2007 International Federation for Information Processing.
Li, Z., Das, A., & Zhou, J. (2007). Evaluating the effects of model generalization on intrusion detection performance. In IFIP International Federation for Information Processing (Vol. 232, pp. 421–432). https://doi.org/10.1007/978-0-387-72367-9_36