The Dirichlet Belief Network (DirBN) has recently been proposed as a promising approach to learning interpretable deep latent representations for objects. In this work, we leverage its interpretable modelling architecture and propose a deep dynamic probabilistic framework - the Recurrent Dirichlet Belief Network (Recurrent-DBN) - to study interpretable hidden structures in dynamic relational data. The proposed Recurrent-DBN has the following merits: (1) it infers interpretable and organised hierarchical latent structures for objects within and across time steps; (2) it enables recurrent long-term temporal dependence modelling, which outperforms the first-order Markov descriptions used in most dynamic probabilistic frameworks; (3) its computational cost scales with the number of positive links only. In addition, we develop a new inference strategy, which first upward-and-backward propagates latent counts and then downward-and-forward samples variables, to enable efficient Gibbs sampling for the Recurrent-DBN. We apply the Recurrent-DBN to dynamic relational data problems. Extensive experimental results on real-world data validate the advantages of the Recurrent-DBN over state-of-the-art models in interpretable latent structure discovery and improved link prediction performance.
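To make the two-pass inference schedule concrete, the following is a minimal, hypothetical NumPy sketch of the general pattern, not the authors' actual algorithm: it uses a simplified first-order Dirichlet chain (rather than the full Recurrent-DBN's long-term dependence), a Chinese Restaurant Table (CRT) draw for the backward count propagation, and toy data. All variable names and the data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def crt(n, r):
    # Chinese Restaurant Table draw: propagates a count of n
    # observations upward through a Dirichlet/gamma layer with
    # concentration r (a standard augmentation in deep count models).
    if n == 0:
        return 0
    return int((rng.random(n) < r / (r + np.arange(n))).sum())

T, K, alpha = 5, 3, 10.0
# Toy observed per-topic counts at each time step (hypothetical data).
x = rng.integers(0, 20, size=(T, K))

# Time-indexed topic distributions theta[t], initialised uniformly.
theta = np.full((T, K), 1.0 / K)

for sweep in range(50):
    # Backward pass: propagate latent counts from step t to t-1 via
    # CRT augmentation, so earlier distributions see later evidence.
    m = np.zeros((T, K))
    for t in range(T - 1, 0, -1):
        for k in range(K):
            m[t - 1, k] = crt(int(x[t, k] + m[t, k]),
                              alpha * theta[t - 1, k])
    # Forward pass: resample each theta[t] from its Dirichlet
    # posterior given the propagated counts and its predecessor.
    for t in range(T):
        prior = alpha * (theta[t - 1] if t > 0 else np.full(K, 1.0 / K))
        theta[t] = rng.dirichlet(prior + x[t] + m[t])
```

The backward-then-forward ordering mirrors the abstract's "upward-and-backward propagate latent counts, then downward-and-forward sample variables" schedule: counts flow against the direction of the generative process before any variable is resampled along it.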
Li, Y., Fan, X., Chen, L., Li, B., Yu, Z., & Sisson, S. A. (2020). Recurrent dirichlet belief networks for interpretable dynamic relational data modelling. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2021-January, pp. 2470–2476). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2020/342