Detecting anomalies in event sequences, which have become widely available in many application domains, is a critical task. Indeed, various efforts have been made to capture abnormal patterns in event sequences through sequential pattern analysis or event representation learning. However, existing approaches usually ignore the semantic information of event content. To this end, in this paper, we propose a self-attentive encoder-decoder transformer framework, Content-Aware Transformer (CAT), for anomaly detection in event sequences. In CAT, the encoder learns preamble event sequence representations with content awareness, and the decoder embeds sequences under detection into a latent space where anomalies are distinguishable. Specifically, the event content is first fed to a content-awareness layer, which generates a representation of each event. The encoder accepts the preamble event representation sequence and generates feature maps. In the decoder, an additional token is prepended to the sequence under detection, denoting the sequence status. A one-class objective and a sequence reconstruction loss are jointly applied to train the framework in a label-efficient manner. Furthermore, CAT is optimized for scalability and efficiency. Finally, extensive experiments on three real-world datasets demonstrate the superiority of CAT.
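To make the described pipeline concrete, the following is a minimal PyTorch sketch of an architecture of this shape: a content-awareness layer that pools content-token embeddings into per-event representations, an encoder over the preamble sequence, a decoder whose input is prefixed with a learnable status token, and a joint one-class plus reconstruction objective. All dimensions, the mean-pooling choice, the Deep SVDD-style center, and the loss weight `lam` are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContentAwareTransformer(nn.Module):
    """Sketch of a content-aware encoder-decoder for event sequences."""

    def __init__(self, vocab_size, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        # Content-awareness layer: embeds the content tokens of each event
        # and pools them into one representation per event (mean pooling
        # here is an illustrative choice, not the paper's mechanism).
        self.token_emb = nn.Embedding(vocab_size, d_model)
        # Learnable status token prepended to the sequence under detection.
        self.status_token = nn.Parameter(torch.randn(1, 1, d_model))
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True),
            num_layers)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead, batch_first=True),
            num_layers)
        self.reconstruct = nn.Linear(d_model, vocab_size)  # per-event logits

    def embed_events(self, content_ids):
        # content_ids: (batch, num_events, tokens_per_event)
        return self.token_emb(content_ids).mean(dim=2)

    def forward(self, preamble_ids, detect_ids):
        # Encoder: feature maps of the preamble event sequence.
        memory = self.encoder(self.embed_events(preamble_ids))
        # Decoder input: status token + sequence under detection,
        # cross-attending to the encoder's feature maps.
        tgt = self.embed_events(detect_ids)
        status = self.status_token.expand(tgt.size(0), -1, -1)
        out = self.decoder(torch.cat([status, tgt], dim=1), memory)
        # out[:, 0] is the sequence-status embedding; the remaining
        # positions are used to reconstruct the detected events.
        return out[:, 0], self.reconstruct(out[:, 1:])


def cat_loss(status_emb, logits, target_event_ids, center, lam=0.1):
    # One-class term (a Deep SVDD-style instantiation, assumed here): pull
    # the status embedding of normal sequences toward a fixed latent center.
    one_class = ((status_emb - center) ** 2).sum(dim=1).mean()
    # Reconstruction term: predict the event ids of the detected sequence.
    recon = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                            target_event_ids.reshape(-1))
    return one_class + lam * recon


# Shape check with random data: 8 sequences, 20/10 events, 5 tokens each.
model = ContentAwareTransformer(vocab_size=1000)
pre = torch.randint(0, 1000, (8, 20, 5))
det = torch.randint(0, 1000, (8, 10, 5))
status_emb, logits = model(pre, det)
center = torch.zeros(64)
loss = cat_loss(status_emb, logits, det[:, :, 0], center)
```

At test time, a natural anomaly score under these assumptions would be the distance of a sequence's status embedding from the center, optionally combined with its reconstruction error.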
Citation
Zhang, S., Liu, Y., Zhang, X., Cheng, W., Chen, H., & Xiong, H. (2022). CAT: Beyond Efficient Transformer for Content-Aware Anomaly Detection in Event Sequences. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 4541–4550). Association for Computing Machinery. https://doi.org/10.1145/3534678.3539155