How Useful are Hand-crafted Data? Making cases for anomaly detection methods


Abstract

While the importance of small data has been acknowledged in principle, small datasets have not been widely adopted as a necessity in current machine learning or data mining research. Most predominantly, machine learning methods are typically evaluated under a "bigger is better" presumption: the more (and the more complex) data we can pour into a method, the better we believe we are at estimating its performance. We deem this mindset detrimental to interpretability, explainability, and the sustained development of the field. For example, although new outlier detection methods are often inspired by small, low-dimensional samples, their performance has been evaluated almost exclusively on large, high-dimensional datasets resembling real-world use cases. With such "big data" we miss the chance to gain insight from a close look at how exactly the algorithms perform, as we mere humans cannot really comprehend the samples. In this work, we explore the exact opposite direction: we run several classical anomaly detection methods against small, mindfully crafted cases on which the results can be examined in detail. Beyond yielding a better understanding of these classical algorithms, our exploration has, to our surprise, led to the discovery of some novel uses for them.
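To illustrate the kind of experiment the abstract describes, here is a minimal sketch of running one classical anomaly detection method, a k-nearest-neighbor distance score, on a small hand-crafted dataset. The specific dataset and the function name are illustrative assumptions, not taken from the paper; the point is that with five points the behavior of the scorer can be inspected exhaustively by hand.

```python
import math

def knn_outlier_scores(points, k=2):
    """Score each point by its mean distance to its k nearest neighbors.

    A classical distance-based anomaly score: larger means more anomalous.
    (Illustrative sketch; not the paper's implementation.)
    """
    scores = []
    for i, p in enumerate(points):
        dists = sorted(
            math.dist(p, q) for j, q in enumerate(points) if j != i
        )
        scores.append(sum(dists[:k]) / k)
    return scores

# A tiny hand-crafted case: a tight cluster plus one obvious outlier.
data = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1), (5.0, 5.0)]
scores = knn_outlier_scores(data, k=2)
print(max(range(len(data)), key=scores.__getitem__))  # prints 4, the outlier's index
```

Because every pairwise distance in a case like this can be verified manually, one can check not just *that* the method flags the outlier but *why*, which is exactly the kind of insight the authors argue large benchmarks obscure.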

Citation (APA)

Du, L., & Hutter, M. (2021). How Useful are Hand-crafted Data? Making cases for anomaly detection methods. In Proceedings of the Annual Hawaii International Conference on System Sciences (Vol. 2020-January, pp. 847–856). IEEE Computer Society. https://doi.org/10.24251/hicss.2021.104
