A boosted cross-domain categorization framework is presented that exploits labeled data from other visual domains as auxiliary knowledge to enhance the original learning system. Source domain data drawn from a different distribution are adapted to the target domain at both the feature representation level and the classification level. The framework works in conjunction with a learned domain-adaptive dictionary pair, so that both the representations of the source domain data and their distribution are optimized to match the target domain. By iteratively updating the weak classifiers, the categorization system allocates more credit to "similar" source domain samples while discarding "dissimilar" ones. Using a set of Web images and selected categories from the HMDB51 dataset as source domain data, the framework is evaluated on image classification and human action recognition tasks on the Caltech-101 and UCF YouTube datasets, respectively, achieving promising results.
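The abstract's iterative reweighting of source samples can be illustrated with a minimal boosting loop. The sketch below assumes a TrAdaBoost-style update rule, where source samples that a weak learner misclassifies are down-weighted (eventually "abandoned") and misclassified target samples are up-weighted; the function names, weak learner choice, and update constants are illustrative assumptions, not the paper's exact formulation, and the dictionary-pair adaptation step is omitted.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def boosted_cross_domain_fit(X_src, y_src, X_tgt, y_tgt, n_rounds=20):
    """Illustrative boosting loop that re-weights source-domain samples.

    Source samples that keep agreeing with the weak learners retain weight
    ("similar" samples get more credit); persistently misclassified source
    samples are down-weighted and effectively abandoned. The update rule is
    a TrAdaBoost-style assumption, not the paper's exact method.
    """
    n_src = len(y_src)
    X = np.vstack([X_src, X_tgt])
    y = np.concatenate([y_src, y_tgt])
    w = np.ones(len(y)) / len(y)  # uniform initial sample weights

    # Fixed down-weighting factor for misclassified source samples.
    beta_src = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n_src) / n_rounds))
    learners, betas = [], []

    for _ in range(n_rounds):
        clf = DecisionTreeClassifier(max_depth=1)  # weak classifier
        clf.fit(X, y, sample_weight=w / w.sum())
        miss = (clf.predict(X) != y).astype(float)

        # Weighted error measured on the target domain only.
        eps = np.dot(w[n_src:], miss[n_src:]) / w[n_src:].sum()
        eps = np.clip(eps, 1e-10, 0.499)
        beta_tgt = eps / (1.0 - eps)

        # Down-weight misclassified source samples; up-weight
        # misclassified target samples.
        w[:n_src] *= beta_src ** miss[:n_src]
        w[n_src:] *= beta_tgt ** -miss[n_src:]

        learners.append(clf)
        betas.append(beta_tgt)

    return learners, betas
```

In this sketch the feature matrices would hold the adapted source and target representations; in the paper these come from the learned domain-adaptive dictionary pair, which is not modeled here.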
Citation
Zhu, F., Shao, L., & Tang, J. (2014). Boosted cross-domain categorization. In BMVC 2014 - Proceedings of the British Machine Vision Conference 2014. British Machine Vision Association, BMVA. https://doi.org/10.5244/c.28.5