Abstract
The notion of concept drift refers to the phenomenon that the distribution underlying the observed data changes over time; as a consequence, machine learning models may become inaccurate and need adjustment. In this paper we present a novel method to describe concept drift as a whole by means of flows, i.e., the change in direction and magnitude over time of particles drawn according to the distribution. This problem is important in the context of monitoring technical devices and systems, since it allows us to adapt models according to the expected drift, and it enables an inspection of the most prominent features in which drift manifests itself. The purpose of this paper is to establish a formal definition of this problem and to present a first, simple linear method as a proof of concept. Interestingly, we show that a natural choice in terms of the normalized expected linear change constitutes the canonical solution for a linear model under mild assumptions, generalizing expected differences on the one hand and expected direction on the other. This first, global linear approach can be extended to a more fine-grained method using common localization techniques. We demonstrate the usefulness of our approach by applying it to theoretical and real-world data.
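The "normalized expected linear change" mentioned above can be illustrated with a minimal sketch: one hypothetical way to obtain a global linear drift direction is to regress each feature on time and normalize the resulting slope vector. The function name and setup below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def linear_drift_direction(X, t):
    """Hypothetical sketch: estimate a global linear drift direction
    from time-stamped samples by regressing each feature on time and
    normalizing the slope vector (direction in which the distribution's
    mean moves, under a simple linear drift assumption)."""
    t = np.asarray(t, dtype=float)
    X = np.asarray(X, dtype=float)
    tc = t - t.mean()
    # Per-feature least-squares slope: Cov(x_j, t) / Var(t)
    slopes = (X - X.mean(axis=0)).T @ tc / (tc @ tc)
    norm = np.linalg.norm(slopes)
    return slopes / norm if norm > 0 else slopes
```

For data whose mean drifts along a fixed direction, the recovered unit vector points (up to noise) along that direction, so the most drift-affected features can be read off from its largest components.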
Hinder, F., Kummert, J., & Hammer, B. (2020). Explaining Concept Drift by Mean of Direction. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12396 LNCS, pp. 379–390). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-61609-0_30