R2-AD2: Detecting Anomalies by Analysing the Raw Gradient


Abstract

Neural networks follow a gradient-based learning scheme, adapting their mapping parameters by back-propagating the output loss. Samples unlike those seen during training cause a different gradient distribution. Based on this intuition, we design a novel semi-supervised anomaly detection method called R2-AD2. By analysing the temporal distribution of the gradient over multiple training steps, we reliably detect point anomalies in strict semi-supervised settings. Instead of domain-dependent features, we feed the raw gradient caused by the sample under test into an end-to-end recurrent neural network architecture. R2-AD2 works in a purely data-driven way and is thus readily applicable to a variety of important anomaly detection use cases.
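The core idea, as the abstract describes it, is to score a sample by the gradients it induces in a network at several points during training and to feed that gradient sequence to a recurrent model. The following is a minimal sketch of that pipeline, not the paper's implementation: the autoencoder target, the GRU scorer, and all names are our own illustrative assumptions (PyTorch is assumed available).

```python
import torch
import torch.nn as nn

# Hypothetical sketch (all names and architectures are ours, not the
# paper's): score a sample by the raw gradients it induces in a small
# target network at several training checkpoints, then feed that
# gradient sequence to a GRU-based scorer.

class TargetNet(nn.Module):
    """Tiny autoencoder whose reconstruction loss supplies gradients."""
    def __init__(self, dim=8, hidden=4):
        super().__init__()
        self.enc = nn.Linear(dim, hidden)
        self.dec = nn.Linear(hidden, dim)

    def forward(self, x):
        return self.dec(torch.tanh(self.enc(x)))

def raw_gradient(net, x):
    """Flattened gradient of the per-sample loss w.r.t. all parameters."""
    net.zero_grad()
    loss = ((net(x) - x) ** 2).mean()   # reconstruction loss for sample x
    loss.backward()
    return torch.cat([p.grad.flatten() for p in net.parameters()])

class GradScorer(nn.Module):
    """GRU over the gradient sequence, mapped to a scalar anomaly score."""
    def __init__(self, grad_dim):
        super().__init__()
        self.rnn = nn.GRU(grad_dim, 16, batch_first=True)
        self.head = nn.Linear(16, 1)

    def forward(self, seq):             # seq: (batch, steps, grad_dim)
        _, h = self.rnn(seq)
        return torch.sigmoid(self.head(h[-1])).squeeze(-1)

torch.manual_seed(0)
# Stand-ins for checkpoints of the target network over training steps;
# a real pipeline would snapshot one network as it trains.
snapshots = [TargetNet() for _ in range(3)]
x = torch.randn(8)                      # sample under test
seq = torch.stack([raw_gradient(net, x) for net in snapshots]).unsqueeze(0)
score = GradScorer(seq.shape[-1])(seq)  # anomaly score in (0, 1)
```

In this toy setup the scorer is untrained; in a semi-supervised setting it would be fitted on gradient sequences from known-normal (and, if available, labelled anomalous) samples.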

Citation (APA)

Schulze, J. P., Sperl, P., Răduțoiu, A., Sagebiel, C., & Böttinger, K. (2023). R2-AD2: Detecting Anomalies by Analysing the Raw Gradient. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13713 LNAI, pp. 209–224). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-26387-3_13
