Classification of Sparse Time Series via Supervised Matrix Factorization


Abstract

Data sparsity is an emerging real-world problem observed in various domains, ranging from sensor networks to medical diagnosis. Consequently, numerous machine learning methods have been developed to handle missing values. Nevertheless, sparsity, defined here as missing segments, has not been thoroughly investigated in the context of time-series classification. We propose a novel principle for classifying time series that, in contrast to existing approaches, avoids reconstructing the missing segments and operates solely on the observed ones. Based on this principle, we develop a method that avoids the noise incurred by reconstructing the original time series. Our method adapts supervised matrix factorization: time series are projected into a latent space through stochastic learning, and the projection is built in a supervised fashion via logistic regression. Extensive experiments on a large collection of 37 data sets demonstrate the superiority of our method, which in the majority of cases outperforms a set of baselines that do not follow the proposed principle.
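
The abstract describes the core recipe: factorize only the observed entries of the time-series matrix into a latent space via stochastic gradient descent, while a logistic-regression term shapes the latent factors to be discriminative. Below is a minimal NumPy sketch of that idea, not the authors' exact algorithm; the joint loss, the hyperparameters (rank, learning rate, the trade-off weight beta), and all function names are illustrative assumptions.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_smf(X, mask, y, rank=4, lr=0.01, beta=1.0, lam=0.001, epochs=200, seed=0):
    """Jointly factorize the observed entries of X (n_series x T) as U @ V and
    fit a logistic-regression classifier on the latent factors U.

    X    : data matrix; unobserved cells may hold any placeholder value
    mask : boolean matrix, True where X is observed
    y    : binary labels in {0, 1}, one per row of X
    """
    rng = np.random.default_rng(seed)
    n, T = X.shape
    U = 0.1 * rng.standard_normal((n, rank))   # per-series latent factors
    V = 0.1 * rng.standard_normal((rank, T))   # shared temporal basis
    w = np.zeros(rank)                         # classifier weights
    obs = np.argwhere(mask)                    # observed (i, t) index pairs

    for _ in range(epochs):
        rng.shuffle(obs)
        # Stochastic pass over observed cells only: missing segments are
        # never imputed, matching the principle stated in the abstract.
        for i, t in obs:
            err = U[i] @ V[:, t] - X[i, t]
            gU = err * V[:, t] + lam * U[i]
            gV = err * U[i] + lam * V[:, t]
            U[i] -= lr * gU
            V[:, t] -= lr * gV
        # Supervised step: nudge the latent factors toward linear separability.
        p = sigmoid(U @ w)
        w -= lr * beta * (U.T @ (p - y) / n + lam * w)
        U -= lr * beta * (np.outer(p - y, w) / n)
    return U, V, w

def predict(U, w):
    return (sigmoid(U @ w) >= 0.5).astype(int)

# Hypothetical toy usage: noisy sine vs. cosine series, each with a random
# contiguous segment hidden to simulate sparsity.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    t = np.linspace(0, 2 * np.pi, 50)
    X = np.vstack([np.sin(t) + 0.1 * rng.standard_normal(50) for _ in range(20)]
                  + [np.cos(t) + 0.1 * rng.standard_normal(50) for _ in range(20)])
    y = np.array([0] * 20 + [1] * 20)
    mask = np.ones_like(X, dtype=bool)
    for i in range(40):
        s = rng.integers(0, 35)
        mask[i, s:s + 15] = False              # hide a 15-step segment
    U, V, w = fit_smf(X, mask, y)
    print("train accuracy:", (predict(U, w) == y).mean())

In the paper's transductive setting, unlabeled test series would presumably be folded into the same factorization, i.e. projected onto the learned basis V using only their observed entries, before applying the classifier; the sketch above reports training accuracy only.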

Cite

APA

Grabocka, J., Nanopoulos, A., & Schmidt-Thieme, L. (2012). Classification of Sparse Time Series via Supervised Matrix Factorization. In Proceedings of the 26th AAAI Conference on Artificial Intelligence, AAAI 2012 (pp. 928–934). AAAI Press. https://doi.org/10.1609/aaai.v26i1.8271
