Neurocomputational models of visualisation: A preliminary report

Abstract

How can a system with visual input become capable of visualising what is meant by new combinations of known words? For example, most of us can visualise a blue banana with red spots even though such an object has never formed part of our experience. In this paper we discuss a neural system which is capable of simple forms of this kind of visualisation. It is shown that success in this task depends on the activity of a neural module whose firing patterns represent the 'visual awareness' of the system, and on the way that this module interacts with others in the system. This paper presents the first set of results from this ongoing research project.
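
To make the idea concrete, the sketch below illustrates one way such a composition could work; it is not the authors' architecture, only a minimal Python illustration under assumed conventions. Each known word is given a hypothetical sparse binary firing pattern in a single "awareness" module, and a novel phrase such as "blue banana with red spots" is rendered by superimposing the stored patterns of its constituent words.

    import numpy as np

    rng = np.random.default_rng(0)
    DIM = 512  # size of the hypothetical awareness module's state vector

    def random_pattern(active=0.1):
        # Illustrative learned code: a sparse binary firing pattern.
        return (rng.random(DIM) < active).astype(np.uint8)

    # Hypothetical lexicon of known words mapped to learned patterns.
    lexicon = {w: random_pattern() for w in ["banana", "blue", "red", "spots"]}

    def visualise(words):
        """Superimpose the stored patterns for the given words,
        imitating the re-activation of known visual attributes
        by a phrase never experienced as a whole."""
        state = np.zeros(DIM, dtype=np.uint8)
        for w in words:
            state |= lexicon[w]  # combine firing patterns
        return state

    novel = visualise(["blue", "banana", "red", "spots"])
    print("active units in composite pattern:", int(novel.sum()))

The composite pattern here is a crude stand-in for the module's joint firing state; the paper's own mechanism for inter-module interaction is not reproduced.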

Citation (APA)

Aleksander, I., Dunmall, B., & Del Frate, V. (1999). Neurocomputational models of visualisation: A preliminary report. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 1606, pp. 798–805). Springer Verlag. https://doi.org/10.1007/BFb0098238
