We present a method for extracting geometric and relational structures from raw intensity data. On the one hand, low-level image processing extracts isolated features. On the other hand, image interpretation relies on sophisticated object descriptions in representation frameworks such as semantic networks. We suggest an intermediate-level description, between low- and high-level vision, produced by grouping image features into increasingly abstract structures. First, we motivate our choice of what should be represented and stress the limitations inherent in the use of sensory data. Second, we describe our current implementation and illustrate it with various examples.
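To make the grouping idea concrete, the following is a minimal illustrative sketch (not the authors' implementation): it takes low-level line segments and groups pairs whose endpoints nearly coincide into junction structures, one example of an intermediate-level description built from isolated features. The `Segment` class, the `find_junctions` function, and the `gap` threshold are all hypothetical names introduced here for illustration.

```python
# Hypothetical sketch: grouping low-level line segments into more abstract
# structures (junctions), as one step toward an intermediate-level description.
from dataclasses import dataclass
from itertools import combinations
import math


@dataclass
class Segment:
    x1: float
    y1: float
    x2: float
    y2: float

    def endpoints(self):
        return [(self.x1, self.y1), (self.x2, self.y2)]


def find_junctions(segments, gap=5.0):
    """Group pairs of segments whose endpoints lie within `gap` pixels of each other."""
    junctions = []
    for i, j in combinations(range(len(segments)), 2):
        for ax, ay in segments[i].endpoints():
            for bx, by in segments[j].endpoints():
                if math.hypot(ax - bx, ay - by) <= gap:
                    # Record the pair of segments and the approximate junction point.
                    junctions.append((i, j, ((ax + bx) / 2, (ay + by) / 2)))
    return junctions


if __name__ == "__main__":
    segs = [Segment(0, 0, 10, 0), Segment(10, 1, 10, 12), Segment(30, 30, 40, 40)]
    # The first two segments nearly meet near (10, 0.5) and form a junction;
    # the third segment remains isolated.
    print(find_junctions(segs))
```

Higher-level groupings (collinear chains, parallel pairs, closed contours) could be built in the same spirit by composing such pairwise relations into progressively more abstract structures.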
CITATION STYLE
Horaud, R., Veillon, F., & Skordas, T. (1990). Finding geometric and relational structures in an image. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 427 LNCS, pp. 374–384). Springer Verlag. https://doi.org/10.1007/BFb0014886