We propose a probabilistic model that captures contextual information in the form of typical spatial relationships between regions of an image. We represent a region's local context as a combination of the identities of neighbouring regions and the geometry of the neighbourhood. We then cluster all neighbourhood configurations that share the same label at the focal region to obtain, for each label, a set of configuration prototypes. We propose an iterative procedure based on belief propagation to infer the labels of regions of a new image given only the observed spatial relationships between the regions and the previously learnt prototypes. We validate our approach on a dataset of hand-segmented and labelled images of buildings. Performance compares favourably with that of a boosted, non-contextual classifier. © 2008 Springer Berlin Heidelberg.
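The two stages described above — clustering neighbourhood configurations per label into prototypes, then iteratively relabelling regions so that each region's neighbourhood best matches a prototype of its label — can be sketched as follows. This is an illustrative stand-in, not the authors' implementation: the label set, the histogram descriptor, the plain k-means step, and the fixed-point relabelling loop (a much-simplified surrogate for belief propagation) are all assumptions made for the sketch.

```python
import random

LABELS = ["wall", "window"]  # hypothetical label set, not from the paper


def dist2(a, b):
    """Squared Euclidean distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))


def kmeans(points, k, iters=20, seed=0):
    """Plain k-means on lists of floats; returns k prototype vectors."""
    rng = random.Random(seed)
    centres = [list(p) for p in rng.sample(points, k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: dist2(p, centres[c]))
            clusters[j].append(p)
        for j, cl in enumerate(clusters):
            if cl:  # keep the old centre if a cluster empties out
                centres[j] = [sum(xs) / len(cl) for xs in zip(*cl)]
    return centres


def neighbour_histogram(region, labels, adjacency):
    """Fraction of each label among the region's neighbours
    (a crude stand-in for the paper's richer configuration descriptor)."""
    nbrs = adjacency[region]
    return [sum(labels[n] == lab for n in nbrs) / len(nbrs) for lab in LABELS]


def learn_prototypes(training, k=2):
    """training: list of (focal_label, histogram) pairs.
    Cluster the histograms of each label into up to k prototypes."""
    by_label = {}
    for lab, h in training:
        by_label.setdefault(lab, []).append(h)
    return {lab: kmeans(hs, min(k, len(hs))) for lab, hs in by_label.items()}


def infer(adjacency, prototypes, iters=5, seed=1):
    """Iteratively relabel each region to the label whose closest prototype
    best explains the region's current neighbour histogram."""
    rng = random.Random(seed)
    labels = {r: rng.choice(LABELS) for r in adjacency}
    for _ in range(iters):
        for r in adjacency:
            h = neighbour_histogram(r, labels, adjacency)
            labels[r] = min(prototypes,
                            key=lambda lab: min(dist2(h, p)
                                                for p in prototypes[lab]))
    return labels
```

Replacing the label histogram with a descriptor that also encodes neighbourhood geometry, and the greedy relabelling with proper message passing, would bring the sketch closer to the method the abstract describes.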
CITATION
Heesch, D., Tan, R., & Petrou, M. (2008). Context first. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5342 LNCS, pp. 97–106). https://doi.org/10.1007/978-3-540-89689-0_14