Abstract
Objective: Neurocomputational modeling of visual stimuli can help not only to identify the neural substrates of attention but also to test cognitive theories of attention, with applications in visual media, robotics, and other domains. However, while much research has addressed cognitive models of linguistics, studies on cognitive modeling of learning mechanisms for visual stimuli lag behind. Based on the operating principles of cognitive functionalities in human vision processing, this study presents the development of a neurocomputational cognitive model for visual perception with detailed algorithmic descriptions. Methods: Four essential questions of cognition and visual attention are considered and logically combined into one unified neurocomputational model: (i) segregation of special classes of stimuli and attention modulation, (ii) the relation between gaze movements and visual perception, (iii) the mechanism of selective stimulus processing and its encoding in neuronal cells, and (iv) the mechanism of visual perception through autonomous relation proofing. Results and Conclusion: This research models data from neurophysiological studies and provides collective evidence for a distributed representation of visual stimuli in the human brain. The outcome of this study may assist health institutes in diagnosing brain disorders related to perceptual development.
Citation: Rai, A., & Jagadeesh Kannan, R. (2017). Neurocomputational modelling of distributed learning from visual stimuli. Asian Journal of Pharmaceutical and Clinical Research, 10, 225–229. https://doi.org/10.22159/ajpcr.2017.v10s1.19645