Estimating conditional transfer entropy in time series using mutual information and nonlinear prediction

17 citations · 15 readers (Mendeley)

Abstract

We propose a new estimator for measuring directed dependencies in time series. The dimensionality of the data is first reduced using a new non-uniform embedding technique, in which candidate variables are ranked by a weighted sum of the amount of new information they contribute and the improvement in prediction accuracy they provide. Then, using a greedy approach, the most informative subsets are selected iteratively. The algorithm terminates when the highest-ranked variable can no longer significantly improve prediction accuracy over that obtained with the already selected subset. In a simulation study, we compare our estimator to existing state-of-the-art methods across different data lengths and strengths of directed dependency. The proposed estimator is shown to be significantly more accurate than existing methods, especially in the difficult case where the data are highly correlated and strongly coupled. Moreover, we show that its rate of falsely detecting directed dependencies caused by instantaneous coupling is lower than that of existing measures. Finally, we demonstrate the applicability of the proposed estimator to real intracranial electroencephalography data.
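The greedy selection loop described in the abstract can be sketched as follows. This is a minimal toy illustration, not the authors' actual estimator: it assumes a simple binned mutual-information estimate, a k-nearest-neighbour predictor, and a hypothetical weighting parameter `alpha` standing in for the paper's weighted sum of new information and prediction improvement.

```python
import numpy as np

def binned_mi(x, y, bins=8):
    """Plug-in mutual information estimate from a 2-D histogram (toy estimator)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def knn_pred_error(X, y, k=5):
    """Leave-one-out k-NN prediction error of y from the embedding X."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)           # exclude each point as its own neighbour
    idx = np.argsort(D, axis=1)[:, :k]
    return float(np.mean((y - y[idx].mean(axis=1)) ** 2))

def greedy_embedding(candidates, target, alpha=0.5, max_terms=5, tol=1e-3):
    """Greedy non-uniform embedding sketch: at each step, add the candidate lag
    that maximises a weighted sum of new information (MI with the target) and
    prediction improvement; stop when the improvement becomes negligible."""
    selected = []
    err = float(np.var(target))           # error of the trivial (mean) predictor
    for _ in range(max_terms):
        best = None
        for j, c in enumerate(candidates):
            if j in selected:
                continue
            X = np.column_stack([candidates[i] for i in selected] + [c])
            new_err = knn_pred_error(X, target)
            score = alpha * binned_mi(c, target) + (1 - alpha) * (err - new_err)
            if best is None or score > best[0]:
                best = (score, j, new_err)
        if best is None or err - best[2] < tol:
            break                          # termination: no significant improvement
        selected.append(best[1])
        err = best[2]
    return selected, err
```

On a toy system where x drives y with a one-sample delay, this sketch selects the lag-1 term of x first, mirroring how the proposed method is meant to pick out the informative lags before a conditional transfer entropy is computed on the reduced embedding.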

Citation (APA)
Baboukani, P. S., Graversen, C., Alickovic, E., & Østergaard, J. (2020). Estimating conditional transfer entropy in time series using mutual information and nonlinear prediction. Entropy, 22(10), 1–21. https://doi.org/10.3390/e22101124
