Effective methods for retrieving images that contain a desired object are essential given the huge volume of digital images now available. We propose a semantic higher-level visual representation that improves the traditional part-based bag-of-words image representation in two ways. First, we propose a semantic model that generates semantic visual words and phrases in order to bridge the semantic gap. Second, the approach strengthens the discriminative power of classical visual words by constructing a mid-level descriptor, the Semantic Visual Phrase, from sets of Semantic Visual Words that frequently co-occur in the same local context. © 2011 Springer-Verlag Berlin Heidelberg.
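The phrase-construction idea described above can be illustrated with a minimal sketch: given visual-word IDs assigned to patches grouped by local context, keep word pairs that co-occur in at least a minimum number of contexts. The function name, the pair-based phrase definition, and the support threshold are illustrative assumptions, not the paper's exact procedure.

```python
from collections import Counter
from itertools import combinations

def mine_visual_phrases(local_contexts, min_support=2):
    """Count co-occurring visual-word pairs within each local context and
    keep pairs that appear in at least `min_support` contexts.
    Note: pairs-only and a raw support count are simplifying assumptions."""
    pair_counts = Counter()
    for words in local_contexts:
        # each context contributes each unordered word pair at most once
        for pair in combinations(sorted(set(words)), 2):
            pair_counts[pair] += 1
    return {pair: n for pair, n in pair_counts.items() if n >= min_support}

# toy example: visual-word IDs assigned to patches in three local regions
contexts = [[3, 7, 7, 12], [3, 7, 9], [5, 12]]
phrases = mine_visual_phrases(contexts, min_support=2)
# the pair (3, 7) co-occurs in two contexts, so it survives as a "phrase"
```

In this sketch each surviving pair would become one mid-level descriptor dimension alongside the individual visual words.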
CITATION STYLE
El Sayad, I., Martinet, J., Urruty, T., & Djeraba, C. (2011). A semantic higher-level visual representation for object recognition. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6523 LNCS, pp. 251–261). https://doi.org/10.1007/978-3-642-17832-0_24