Learning in a distributed software architecture for large-scale neural modeling

Abstract

Progress on large-scale simulation of neural models depends in part on the availability of suitable hardware and software architectures. Heterogeneous hardware computing platforms are becoming increasingly popular as substrates for general-purpose simulation. On the other hand, recent work highlights that certain constraints must be imposed on neural and synaptic dynamics in order to take advantage of such systems. In this paper we focus on constraints related to learning in a simple visual system, and on those imposed by a new neural simulator for heterogeneous hardware systems, CogExMachina (Cog). © 2012 ICST Institute for Computer Science, Social Informatics and Telecommunications Engineering.

Citation

Léveillé, J., Ames, H., Chandler, B., Gorchetchnikov, A., Mingolla, E., Patrick, S., & Versace, M. (2012). Learning in a distributed software architecture for large-scale neural modeling. In Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering (Vol. 87 LNICST, pp. 659–666). https://doi.org/10.1007/978-3-642-32615-8_65