Learning manifold representation from multimodal data for event detection in Flickr-like social media

Abstract

In this work, a three-stage social event detection model is devised to discover events in Flickr data. Because the features possessed by the data are typically heterogeneous, in the first stage a multimodal fusion model (M2F), which exploits a soft-voting strategy and a reinforcing model, is devised to learn fused features. In the second stage, a Laplacian non-negative matrix factorization (LNMF) model is exploited to extract a compact manifold representation. In particular, a Laplacian regularization term constructed on the multimodal features is introduced to preserve the geometric structure of the data. Finally, clustering algorithms can be applied seamlessly to detect event clusters. Extensive experiments conducted on a real-world dataset reveal that the M2F-LNMF-based approaches outperform the baselines.
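The LNMF stage described above can be illustrated with a minimal sketch of Laplacian-regularized NMF using the standard multiplicative updates from graph-regularized NMF. The function name, parameters, and the choice of affinity matrix are illustrative assumptions, not the authors' implementation: the paper's model builds its Laplacian term on the fused multimodal features.

```python
import numpy as np

def lnmf(X, A, k, lam=0.1, n_iter=200, eps=1e-9, seed=0):
    """Sketch of Laplacian-regularized NMF (GNMF-style updates).

    Minimizes ||X - WH||_F^2 + lam * tr(H L H^T), L = D - A.
    X : (m, n) non-negative feature matrix (n samples as columns).
    A : (n, n) symmetric non-negative affinity between samples
        (assumed here; e.g. built from k-NN on the fused features).
    Returns W (m, k) and H (k, n); columns of H give the compact
    manifold representation to be clustered downstream.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k))
    H = rng.random((k, n))
    D = np.diag(A.sum(axis=1))  # degree matrix of the affinity graph
    for _ in range(n_iter):
        # Multiplicative updates keep W and H non-negative.
        W *= (X @ H.T) / (W @ H @ H.T + eps)
        H *= (W.T @ X + lam * H @ A) / (W.T @ W @ H + lam * H @ D + eps)
    return W, H
```

After factorization, an off-the-shelf clustering algorithm (e.g. k-means on the columns of `H`) can be applied to obtain the event clusters, matching the third stage of the pipeline.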

Citation (APA)
Yang, Z., Li, Q., Liu, W., & Ma, Y. (2016). Learning manifold representation from multimodal data for event detection in flickr-like social media. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9645, pp. 160–167). Springer Verlag. https://doi.org/10.1007/978-3-319-32055-7_14
