Principle components and importance ranking of distributed anomalies

Abstract

Correlations between locally averaged host observations at different times and places hint at information about the associations between the hosts in a network. These smoothed, pseudo-continuous time-series imply relationships with entities in the wider environment. For anomaly detection, mining this information might provide a valuable source of observational experience for determining comparative anomalies or rejecting false anomalies. The difficulties with distributed analysis lie in collating the distributed data and in comparing observables on different hosts in different frames of reference. In the present work we examine two methods (Principal Component Analysis and Eigenvector Centrality) that shed light on the usefulness of comparing data destined for different locations in a network. © 2005 Springer Science + Business Media Inc.
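The two methods named in the abstract are closely related: both reduce to an eigendecomposition of the correlation matrix built from the hosts' smoothed time-series. A minimal illustrative sketch (not the authors' code; the synthetic data, threshold, and ranking convention are assumptions) of how such a ranking could work:

```python
import numpy as np

# Illustrative sketch only: rank hosts by eigenvector centrality of the
# correlation matrix of their smoothed observation series. This is an
# assumption-laden toy, not the method as published.
rng = np.random.default_rng(0)

# Synthetic data: rows = time samples, columns = hosts.
# Hosts 0-3 share a common signal; host 4 is dominated by its own noise.
T, H = 200, 5
base = rng.normal(size=(T, 1))
data = base + 0.5 * rng.normal(size=(T, H))
data[:, 4] += rng.normal(scale=3.0, size=T)   # weakly coupled "anomalous" host

# Correlation matrix between hosts, treated as a weighted adjacency matrix.
C = np.corrcoef(data, rowvar=False)

# Principal components: eigendecomposition of the correlation matrix
# (np.linalg.eigh returns eigenvalues in ascending order).
evals, evecs = np.linalg.eigh(C)
principal = evecs[:, -1]                      # leading eigenvector

# Eigenvector centrality: the magnitudes of the leading eigenvector's
# components score how strongly each host participates in the dominant
# shared behaviour; a low score flags a host as comparatively anomalous.
centrality = np.abs(principal)
ranking = np.argsort(centrality)[::-1]        # most to least central
print("centrality:", np.round(centrality, 3))
print("ranking:", ranking)
```

In this toy setup the decoupled host ends up with the smallest centrality score, which is the sense in which comparing data across hosts can reject or confirm candidate anomalies.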

Citation (APA)
Begnum, K., & Burgess, M. (2005). Principle components and importance ranking of distributed anomalies. Machine Learning, 58(2–3), 217–230. https://doi.org/10.1007/s10994-005-5827-4
