Combinational Collaborative Filtering, Considering Personalization

  • Chang, E. Y.
Abstract

For the purpose of multimodal fusion, collaborative filtering can be regarded as a process of finding relevant information or patterns using techniques that involve collaboration among multiple views or data sources. In this chapter,\(^\dagger\) we present a collaborative filtering method, combinational collaborative filtering (CCF), which performs recommendations by considering multiple types of co-occurrences from different information sources. CCF differs from the approaches presented in Chaps. 6 and 7 by constructing a latent layer between the recommended objects and the multimodal descriptions of those objects. We use community recommendation throughout this chapter as an example to illustrate critical design points. We first describe a community by two modalities: a collection of documents and a collection of users. CCF fuses these two modalities through a latent layer. We show how the latent layer is constructed, how multiple modalities are fused, and how the learning algorithm can be made both effective and efficient in handling massive amounts of data. CCF can be used to perform virtually any multimedia-data recommendation task, such as recommending labels to images (annotation), recommending images to images (clustering), and recommending images to users (personalized search).
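The latent-layer idea described above can be sketched with a toy joint factorization: two co-occurrence matrices that share rows (communities) are tied together through one shared latent matrix. This is only a minimal illustration of modality fusion through a latent layer, not the chapter's actual CCF model; the function name, the alternating least-squares solver, and all parameters here are illustrative assumptions.

```python
import numpy as np

def fuse_modalities(U, W, k=2, iters=50, reg=0.05, seed=0):
    """Illustrative latent-layer fusion (NOT the book's CCF algorithm).

    U: community-by-user co-occurrence matrix.
    W: community-by-word co-occurrence matrix.
    Both are approximated through one shared latent layer Z:
        U ~ Z @ A   and   W ~ Z @ B
    so Z fuses the two modalities. Solved by alternating ridge
    regressions (alternating least squares).
    """
    rng = np.random.default_rng(seed)
    n = U.shape[0]
    Z = rng.normal(scale=0.1, size=(n, k))
    I = reg * np.eye(k)
    for _ in range(iters):
        # With Z fixed, each modality's loadings are a ridge solve.
        A = np.linalg.solve(Z.T @ Z + I, Z.T @ U)
        B = np.linalg.solve(Z.T @ Z + I, Z.T @ W)
        # With A, B fixed, Z sees the stacked observations of BOTH
        # modalities -- this step is where the fusion happens.
        C = np.hstack([A, B])   # k x (num_users + num_words)
        X = np.hstack([U, W])   # n x (num_users + num_words)
        Z = np.linalg.solve(C @ C.T + I, C @ X.T).T
    return Z, A, B
```

Once fitted, a row of `Z` is a community's fused latent representation: scoring `Z @ A` ranks communities for a user, while nearest neighbors among the rows of `Z` cluster communities by both their members and their documents at once.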

Citation (APA)

Chang, E. Y. (2011). Combinational Collaborative Filtering, Considering Personalization. In Foundations of Large-Scale Multimedia Information Management and Retrieval (pp. 171–190). Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-20429-6_8
