Image region annotation based on segmentation and semantic correlation analysis


Abstract

The authors propose an image region annotation framework that explores syntactic and semantic correlations among segmented regions in an image. A texture-enhanced JSEG image segmentation algorithm is first used to improve pixel consistency within each segmented region. Next, each region is represented by a set of image codewords, also known as visual alphabets, each of which characterises certain low-level image features. A visual lexicon, whose vocabulary items are defined as either a single codeword or a co-occurrence of multiple alphabets, is formed and used to model middle-level semantic concepts. The concept classification models are trained with a maximal figure-of-merit (MFoM) algorithm on a collection of training images with multiple correlations, including spatial, syntactic and semantic relationships, between regions and their corresponding concepts. In addition, a region–semantic correlation model constructed with latent semantic analysis (LSA) is used to correct potentially wrong annotations by analysing the relationship between image region positions and labels. When evaluated on the Corel 5K dataset, the proposed framework achieves accurate results on image region concept tagging as well as on whole-image annotation.
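The abstract names two reusable building blocks: quantising region features into a codeword vocabulary, and smoothing a region–label co-occurrence matrix with latent semantic analysis. The following is a minimal numpy sketch of those two steps only; the paper's actual feature extraction, MFoM training and lexicon construction are not reproduced here, and all function names, the codebook size `k` and the LSA rank are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_codebook(features, k, iters=20):
    # Toy k-means codebook: cluster (n, d) region feature vectors
    # into k centroids, each centroid acting as one visual "alphabet".
    centroids = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        dist = np.linalg.norm(features[:, None] - centroids[None], axis=2)
        labels = dist.argmin(axis=1)
        for j in range(k):
            members = features[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids

def quantise(features, centroids):
    # Map each region feature vector to its nearest codeword index.
    dist = np.linalg.norm(features[:, None] - centroids[None], axis=2)
    return dist.argmin(axis=1)

def lsa_smooth(cooc, rank):
    # Latent semantic analysis via truncated SVD of the
    # region-position x label co-occurrence matrix; the low-rank
    # reconstruction exposes correlations used to flag unlikely labels.
    U, s, Vt = np.linalg.svd(cooc, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

# Toy demo with random data standing in for real region features.
feats = rng.normal(size=(60, 8))          # 60 regions, 8-dim features
codebook = build_codebook(feats, k=5)
codes = quantise(feats, codebook)         # codeword index per region
cooc = rng.integers(0, 4, size=(6, 10)).astype(float)  # 6 positions x 10 labels
smoothed = lsa_smooth(cooc, rank=2)       # low-rank correlation estimate
```

In this sketch, comparing `smoothed` against the raw counts in `cooc` is what would let an annotation pipeline down-weight a label that is implausible for a given region position, which is the role the abstract assigns to the LSA correlation model.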

Citation (APA)

Zhang, J., Mu, Y., Feng, S., Li, K., Yuan, Y., & Lee, C. H. (2018). Image region annotation based on segmentation and semantic correlation analysis. IET Image Processing, 12(8), 1331–1337. https://doi.org/10.1049/iet-ipr.2017.0917
