Tri-level combination for image representation

Abstract

The context surrounding an object can provide discriminative information beyond the object itself, but this information has not been fully explored. In this paper, we propose Tri-level Combination for Image Representation (TriCoIR), which addresses the problem at three levels: object intrinsic, strongly-related context, and weakly-related context. The object-intrinsic level excludes external disturbances and focuses on the object itself. Strongly-related context is cropped from the input image with a looser bounding box so that the surrounding context is included. Weakly-related context is recovered from the image regions outside the object and captures global context. First, the strongly- and weakly-related contexts are constructed from the input image. Second, we apply cascaded transformations to obtain more intrinsic object information, relying on the consistency between the generated global context and the input image in the regions outside the object. Finally, a joint representation is obtained from the features of these three levels. Experiments on two benchmark datasets demonstrate the effectiveness of TriCoIR.
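
For a concrete picture of the pipeline, the sketch below shows one way the three levels could be assembled from an image and an object bounding box. It is illustrative only: the crop scale (loose_scale), the mean-pooling extract_features stand-in, the mean-fill recover_global_context step, and concatenation as the fusion rule are assumptions made here and do not reflect the paper's actual models.

import numpy as np

def extract_features(patch):
    # Placeholder feature extractor (per-channel mean); the paper would use a
    # learned feature extractor here.
    return patch.reshape(-1, patch.shape[-1]).mean(axis=0)

def recover_global_context(image, object_box):
    # Placeholder "weakly-related context": fill the object region with the mean
    # of the pixels outside the object instead of a learned recovery model.
    x1, y1, x2, y2 = object_box
    out = image.astype(np.float32).copy()
    mask = np.ones(image.shape[:2], dtype=bool)
    mask[y1:y2, x1:x2] = False
    out[y1:y2, x1:x2] = out[mask].mean(axis=0)
    return out

def tricoir_features(image, object_box, loose_scale=1.5):
    x1, y1, x2, y2 = object_box
    h, w = image.shape[:2]

    # Level 1: object intrinsic -- tight crop around the object.
    object_crop = image[y1:y2, x1:x2]

    # Level 2: strongly-related context -- the same box enlarged by loose_scale.
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    bw, bh = (x2 - x1) * loose_scale, (y2 - y1) * loose_scale
    lx1, ly1 = max(0, int(cx - bw / 2)), max(0, int(cy - bh / 2))
    lx2, ly2 = min(w, int(cx + bw / 2)), min(h, int(cy + bh / 2))
    loose_crop = image[ly1:ly2, lx1:lx2]

    # Level 3: weakly-related context -- global context recovered from the image
    # outside the object.
    global_context = recover_global_context(image, object_box)

    # Joint representation: combine the features from all three levels.
    return np.concatenate([extract_features(object_crop),
                           extract_features(loose_crop),
                           extract_features(global_context)])

# Example: a random image with a hypothetical object box.
img = np.random.rand(240, 320, 3)
print(tricoir_features(img, (100, 60, 180, 140)).shape)  # (9,)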

Cite (APA)

Li, R., Zhang, C., & Huang, Q. (2016). Tri-level combination for image representation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9916 LNCS, pp. 250–259). Springer Verlag. https://doi.org/10.1007/978-3-319-48890-5_25
