Towards a general vision system based on symbol-relation grammars and Bayesian networks

Citations: 1 · Mendeley readers: 4

Abstract

A novel approach to building a general vision system is presented. The proposed method is based on a visual grammar representation that is transformed into a Bayesian network used for object recognition. A symbol-relation grammar provides a hierarchical description of objects, incorporating spatial relations. The structure of the Bayesian network is obtained automatically from the grammar, and its parameters are learned from examples. The method is illustrated with two examples of face recognition. © 2011 Springer-Verlag Berlin Heidelberg.
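The abstract only outlines the grammar-to-network transformation, so the Python sketch below is an illustration rather than the authors' method. It assumes one plausible reading: each production of the symbol-relation grammar (a hypothetical GRAMMAR of face parts and spatial relations) contributes a node per symbol and per relation, with directed edges from the left-hand-side symbol to its parts and relations, and the conditional probability tables are then estimated by maximum likelihood from labelled examples. The names GRAMMAR, grammar_to_bn_edges, and learn_cpt are invented for this sketch.

    # Minimal sketch (not from the paper): derive a Bayesian network structure
    # from a toy symbol-relation grammar and learn its parameters from examples.
    from collections import defaultdict

    # Hypothetical toy grammar: (lhs symbol, constituent parts, spatial relations)
    GRAMMAR = [
        ("face", ["eyes", "nose", "mouth"],
         ["above(eyes, nose)", "above(nose, mouth)"]),
        ("eyes", ["left_eye", "right_eye"],
         ["beside(left_eye, right_eye)"]),
    ]

    def grammar_to_bn_edges(grammar):
        """Directed edges: each lhs symbol points to its parts and relations."""
        edges = []
        for lhs, parts, relations in grammar:
            for child in parts + relations:
                edges.append((lhs, child))
        return edges

    def learn_cpt(data, child, parent):
        """Maximum-likelihood estimate of P(child | parent) from example dicts."""
        counts = defaultdict(lambda: defaultdict(int))
        for example in data:
            counts[example[parent]][example[child]] += 1
        cpt = {}
        for parent_value, child_counts in counts.items():
            total = sum(child_counts.values())
            cpt[parent_value] = {v: c / total for v, c in child_counts.items()}
        return cpt

    if __name__ == "__main__":
        edges = grammar_to_bn_edges(GRAMMAR)
        print("BN structure (edges):", edges)

        # Toy training examples: presence (1) or absence (0) of each symbol/relation.
        examples = [
            {"face": 1, "eyes": 1, "nose": 1, "mouth": 1,
             "above(eyes, nose)": 1, "above(nose, mouth)": 1,
             "left_eye": 1, "right_eye": 1, "beside(left_eye, right_eye)": 1},
            {"face": 0, "eyes": 0, "nose": 1, "mouth": 0,
             "above(eyes, nose)": 0, "above(nose, mouth)": 0,
             "left_eye": 0, "right_eye": 0, "beside(left_eye, right_eye)": 0},
        ]
        for parent, child in edges:
            print(f"P({child} | {parent}) =", learn_cpt(examples, child, parent))

In this reading, recognition would amount to inference over the presence variables given detected parts and relations; the paper's exact node semantics and learning procedure may differ.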

Citation (APA)

Ruiz, E., Melendez, A., & Sucar, L. E. (2011). Towards a general vision system based on symbol-relation grammars and Bayesian networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6830 LNAI, pp. 291–296). https://doi.org/10.1007/978-3-642-22887-2_32
