Cross-modal search on social networking systems by exploring Wikipedia concepts

Abstract

The increasing popularity of social networking systems (SNSs) has created large quantities of data in multiple modalities, such as text and image. Retrieval of this data, however, is typically constrained to a single modality. Moreover, text on SNSs is usually short and noisy, and remains active only for a short period. These characteristics conflict with the assumptions of traditional text search techniques and render them ineffective on SNSs. To alleviate these problems and bridge the gap between searches over different modalities, we propose a new algorithm that supports cross-modal search over social documents, i.e., text and images on SNSs. By exploiting Wikipedia concepts, text and images are transformed into a common set of concepts, on which searches are conducted. A new ranking algorithm is designed to rank social documents by their informativeness and semantic relevance to a query. We evaluate our ranking algorithm on both Twitter and Facebook datasets. The results confirm the effectiveness of our approach.
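The concept-based retrieval idea in the abstract can be sketched in a few lines. The snippet below is a hypothetical illustration, not the paper's actual algorithm: it assumes documents (whether text or image) have already been mapped to sets of Wikipedia concepts, and ranks them by concept overlap with the query, weighted by a simple IDF-style "informativeness" score; all names and the scoring formula are assumptions.

```python
# Hypothetical sketch of concept-based cross-modal ranking. Documents of
# any modality are assumed to already be represented as sets of Wikipedia
# concepts; ranking scores each document by its IDF-weighted concept
# overlap with the query. Illustrative only, not the paper's method.
import math
from collections import Counter

def concept_idf(docs):
    """Informativeness of each concept: rarer concepts score higher."""
    n = len(docs)
    df = Counter(c for concepts in docs.values() for c in set(concepts))
    return {c: math.log(1 + n / df[c]) for c in df}

def rank(query_concepts, docs):
    """Return document ids sorted by weighted concept overlap with the query."""
    idf = concept_idf(docs)
    scores = {
        doc_id: sum(idf.get(c, 0.0) for c in set(concepts) & set(query_concepts))
        for doc_id, concepts in docs.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

# Toy corpus: two tweets (text) and an image, all mapped to concepts.
docs = {
    "tweet_1": {"Basketball", "NBA", "Los_Angeles"},
    "image_1": {"Basketball", "Sports_photography"},
    "tweet_2": {"Cooking", "Recipe"},
}
print(rank({"Basketball", "NBA"}, docs))  # ['tweet_1', 'image_1', 'tweet_2']
```

Because text and images share the same concept representation, a text query can retrieve images and vice versa, which is the essence of the cross-modal search described above.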

APA

Wang, W., Yang, X., & Jiang, S. (2016). Cross-modal search on social networking systems by exploring Wikipedia concepts. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10075 LNCS, pp. 381–393). Springer Verlag. https://doi.org/10.1007/978-3-319-49304-6_41
