Contrastive training for models of information cascades

Abstract

This paper proposes a model of information cascades as directed spanning trees (DSTs) over observed documents. In addition, we propose a contrastive training procedure that exploits the partial temporal ordering of node infections in lieu of labeled training links. This combination of model and unsupervised training makes it possible to improve on models that use infection times alone and to exploit arbitrary features of the nodes and of the text content of messages in information cascades. With only basic node and time-lag features similar to previous models, the DST model trained without supervision achieves performance comparable to strong baselines on a blog network inference task. Unsupervised training with additional content features achieves significantly better results, reaching half the accuracy of a fully supervised model.
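The abstract's core idea can be illustrated with a minimal sketch. The paper itself trains over directed spanning trees; the toy below instead factors the contrastive objective per node, which is an assumption for illustration: for each infected node, the "positive" set of candidate parents contains only nodes infected strictly earlier, and the loss contrasts that set against all candidate parents. The cascade data, feature, and function names are hypothetical.

```python
import math

# Hypothetical toy cascade: (node, infection_time) pairs.
cascade = [("a", 0.0), ("b", 1.0), ("c", 2.0)]

def edge_score(w, parent, child):
    """Score a candidate parent->child edge with a single toy time-lag feature."""
    lag = child[1] - parent[1]
    return w * -abs(lag)  # this toy feature prefers small time lags

def contrastive_loss(w, cascade):
    """Per-node contrastive objective: sum over non-root nodes of
    -log(Z_consistent / Z_all), where Z_consistent sums over candidate
    parents infected strictly earlier than the child (the partial
    temporal ordering), and Z_all sums over all other nodes."""
    loss = 0.0
    for i, child in enumerate(cascade):
        if i == 0:
            continue  # treat the earliest node as the cascade root
        all_parents = [p for p in cascade if p is not child]
        earlier = [p for p in all_parents if p[1] < child[1]]
        z_all = sum(math.exp(edge_score(w, p, child)) for p in all_parents)
        z_cons = sum(math.exp(edge_score(w, p, child)) for p in earlier)
        loss += -math.log(z_cons / z_all)
    return loss
```

Because the temporally consistent set is a subset of all candidates, each term is non-negative, and minimizing the loss pushes probability mass toward parents that respect the observed infection order, without any labeled links.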

Citation (APA)

Xu, S., & Smith, D. A. (2018). Contrastive training for models of information cascades. In 32nd AAAI Conference on Artificial Intelligence, AAAI 2018 (pp. 483–490). AAAI Press. https://doi.org/10.1609/aaai.v32i1.11270
