Learning dynamic Bayesian networks from multivariate time series with changing dependencies

Abstract

Many examples exist of multivariate time series in which the dependencies between variables change over time. If these changing dependencies are not taken into account, any model learnt from the data will average over the different dependency structures. Paradigms that try to explain underlying processes and observed events in multivariate time series must model these changes explicitly in order to allow non-experts to analyse and understand such data. In this paper we develop a method for generating explanations in multivariate time series that takes changing dependency structure into account. We make use of a dynamic Bayesian network model with hidden nodes. We introduce a representation and search technique for learning such models from data and test it on synthetic time series and real-world data from an oil refinery, both of which contain changing underlying structure. We compare our method to an existing EM-based method for learning structure. Results for our method are very promising, and we include sample explanations generated from models learnt from the refinery dataset. © Springer-Verlag Berlin Heidelberg 2003.
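To make the problem concrete, the sketch below simulates the kind of data the abstract describes: a multivariate time series whose dependency structure switches at a hidden change point, so that a single static model fitted to the whole series would average over the two structures. This is a minimal, hypothetical illustration of the setting, not the authors' method; the regime parameters, variable names, and change point are assumptions chosen for the example.

```python
# Illustrative sketch (not the paper's implementation): a 3-variable time series
# whose cross-dependencies switch with a hidden regime variable.
import numpy as np

rng = np.random.default_rng(0)

T = 300          # length of the series
switch = 150     # hidden regime change point (unknown to the learner)

x = np.zeros((T, 3))  # three observed variables
for t in range(1, T):
    noise = rng.normal(0.0, 0.1, size=3)
    if t < switch:
        # Regime 0: x2 depends on x0 at the previous time slice
        x[t, 0] = 0.9 * x[t - 1, 0] + noise[0]
        x[t, 1] = 0.9 * x[t - 1, 1] + noise[1]
        x[t, 2] = 0.8 * x[t - 1, 0] + noise[2]
    else:
        # Regime 1: x2 now depends on x1 instead; a model learnt over the
        # whole series would blur these two dependency structures together
        x[t, 0] = 0.9 * x[t - 1, 0] + noise[0]
        x[t, 1] = 0.9 * x[t - 1, 1] + noise[1]
        x[t, 2] = 0.8 * x[t - 1, 1] + noise[2]

# Crude diagnostic: lagged correlations of x2 with x0 and x1 differ by regime,
# which is the signal a hidden-node DBN can exploit to separate the structures.
for name, lo, hi in [("regime 0", 1, switch), ("regime 1", switch, T)]:
    c0 = np.corrcoef(x[lo:hi, 2], x[lo - 1:hi - 1, 0])[0, 1]
    c1 = np.corrcoef(x[lo:hi, 2], x[lo - 1:hi - 1, 1])[0, 1]
    print(f"{name}: corr(x2_t, x0_t-1)={c0:.2f}, corr(x2_t, x1_t-1)={c1:.2f}")
```

Running the sketch shows a strong lagged correlation between x2 and x0 in the first regime and between x2 and x1 in the second, the kind of structural change the paper's hidden-node dynamic Bayesian network is designed to capture.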

Cite

Citation style: APA
Tucker, A., & Liu, X. (2003). Learning dynamic Bayesian networks from multivariate time series with changing dependencies. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2810, 100–110. https://doi.org/10.1007/978-3-540-45231-7_10
