Network Time Series Analysis Using Transfer Entropy


Abstract

The inference of network representations that capture causal relations in time series is a challenging problem. In this paper, we explore the use of information theoretic tools for characterising information flow between time series, and how to infer networks representing time series data. We explore two different approaches. The first uses transfer entropy as a means of characterising information flow and measures network similarity using Jensen-Shannon divergence. The second uses time series correlation and uses Kullback-Leibler divergence to compare the distribution of correlations across edges for different networks. We explore how both weighted and unweighted representations derived from these two characterisations perform on real-world time series data. Experiments on time series data for the New York Stock Exchange show that transfer entropy results in better localisation of temporal anomalies in graph time series. Moreover, the method leads to embeddings of network time series that better preserve their temporal order.
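
For context, the quantities named in the abstract have standard definitions (these are not quoted from the paper itself, and the embedding orders k and l are illustrative). The transfer entropy from a source series Y to a target series X is

  T_{Y \to X} = \sum p(x_{t+1}, x_t^{(k)}, y_t^{(l)}) \log \frac{p(x_{t+1} \mid x_t^{(k)}, y_t^{(l)})}{p(x_{t+1} \mid x_t^{(k)})},

which measures how much the past of Y reduces uncertainty about the next value of X beyond what X's own history already explains. The divergences used to compare networks are the Kullback-Leibler divergence

  D_{KL}(P \,\|\, Q) = \sum_i P(i) \log \frac{P(i)}{Q(i)}

and its symmetrised, bounded variant, the Jensen-Shannon divergence

  JSD(P, Q) = \tfrac{1}{2} D_{KL}(P \,\|\, M) + \tfrac{1}{2} D_{KL}(Q \,\|\, M), \quad M = \tfrac{1}{2}(P + Q).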

Citation (APA)
Caglar, I., & Hancock, E. R. (2019). Network Time Series Analysis Using Transfer Entropy. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11510 LNCS, pp. 194–203). Springer Verlag. https://doi.org/10.1007/978-3-030-20081-7_19
