Hadoop.TS: Large-Scale Time-Series Processing

  • Kämpf, M.
  • Kantelhardt, J. W.
Citations: N/A
Readers: 16 (Mendeley users who have this article in their library)

Abstract

The paper describes a computational framework for time-series analysis. It allows rapid prototyping of new algorithms, since all components are re-usable. Generic data structures represent different types of time series, e.g. event and inter-event time series, and define reliable interfaces to existing big data. Standalone applications, highly scalable MapReduce programs, and User Defined Functions for Hadoop-based analysis frameworks are the major modes of operation. Efficient implementations of univariate and bivariate analysis algorithms are provided for, e.g., long-term correlation, cross-correlation and event synchronization analysis on large data sets.
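To make the distinction between event and inter-event time series concrete, here is a minimal Java sketch of how such a pair of structures could relate. The class and method names (EventTimeSeriesSketch, toInterEventSeries) are illustrative assumptions, not the actual Hadoop.TS API, which the abstract does not spell out.

```java
// Illustrative sketch only: the abstract does not document Hadoop.TS's actual
// classes, so EventTimeSeriesSketch and toInterEventSeries are hypothetical
// names used to show how an event time series relates to its inter-event series.
import java.util.Arrays;

public class EventTimeSeriesSketch {

    /**
     * Derives the inter-event (waiting-time) series from an event time series,
     * i.e. the gaps between consecutive, increasing event timestamps.
     */
    static double[] toInterEventSeries(double[] eventTimes) {
        if (eventTimes.length < 2) {
            return new double[0]; // no gaps can be formed from fewer than two events
        }
        double[] gaps = new double[eventTimes.length - 1];
        for (int i = 1; i < eventTimes.length; i++) {
            gaps[i - 1] = eventTimes[i] - eventTimes[i - 1];
        }
        return gaps;
    }

    public static void main(String[] args) {
        // Event time series: timestamps (e.g. in seconds) at which events occurred.
        double[] events = {0.5, 1.7, 2.0, 4.4, 9.1};
        // Prints the four waiting times between consecutive events.
        System.out.println(Arrays.toString(toInterEventSeries(events)));
    }
}
```

In a MapReduce setting, a conversion like this would typically run inside a mapper over partitioned event streams; per the abstract, Hadoop.TS packages such analysis algorithms as standalone applications, MapReduce programs, and User Defined Functions for Hadoop-based frameworks.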

Citation (APA)

Kämpf, M., & Kantelhardt, J. W. (2013). Hadoop.TS: Large-Scale Time-Series Processing. International Journal of Computer Applications, 74(17), 1–8. https://doi.org/10.5120/12974-0233
