Regularized semi-supervised latent Dirichlet allocation for visual concept learning

Abstract

Topic models are a popular tool for visual concept learning. Current topic models are either unsupervised or fully supervised. Although large numbers of labeled images can significantly improve the performance of topic models, they are costly to acquire, while billions of unlabeled images are freely available on the Internet. In this paper, to take advantage of both limited labeled training images and abundant unlabeled images, we propose a novel technique called regularized Semi-supervised Latent Dirichlet Allocation (r-SSLDA) for learning visual concept classifiers. Rather than introducing a new topic model, we seek an efficient way to learn topic models in a semi-supervised manner: r-SSLDA incorporates both the semi-supervised properties of the data and the supervised topic model simultaneously within a regularization framework. Experiments on Caltech 101 and Caltech 256 show that r-SSLDA outperforms unsupervised LDA and achieves performance competitive with fully supervised LDA, while sharply reducing the number of labeled images required. © 2011 Springer-Verlag Berlin Heidelberg.
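The abstract does not specify the exact form of the regularizer. For illustration only, one standard way such a "regularization framework" over labeled and unlabeled images is often instantiated is a graph-based (manifold) regularized objective on topic representations; the symbols below (\theta_i, \mathcal{L}, \mathcal{U}, W_{ij}, \lambda_A, \lambda_I) are assumptions for this sketch, not taken from the paper:

\[
\min_{f}\; \sum_{i \in \mathcal{L}} \ell\bigl(y_i, f(\theta_i)\bigr)
\;+\; \lambda_A \,\lVert f \rVert^{2}
\;+\; \lambda_I \sum_{i,j \in \mathcal{L} \cup \mathcal{U}} W_{ij}\,\bigl(f(\theta_i) - f(\theta_j)\bigr)^{2}
\]

Here \theta_i denotes the topic representation of image i, \mathcal{L} the small labeled set with labels y_i, \mathcal{U} the large unlabeled set, and W_{ij} a similarity weight between images. The first term fits the classifier to the few labeled images, while the graph term encourages similar images, labeled or not, to receive similar predictions, which is one way unlabeled data can regularize a supervised topic-model classifier.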

Cite

APA

Zhuang, L., She, L., Huang, J., Luo, J., & Yu, N. (2011). Regularized semi-supervised latent Dirichlet allocation for visual concept learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6523 LNCS, pp. 403–412). https://doi.org/10.1007/978-3-642-17832-0_38
