Learning to solve small instances of a problem should help in solving large instances. Unfortunately, most neural network architectures do not exhibit this form of scalability. Our Multi-Dimensional Recurrent LSTM Networks, however, show a high degree of scalability, as we demonstrate empirically in the domain of flexible-size board games. This allows them to be trained from scratch up to the level of human beginners, without using domain knowledge. © 2009 Springer Berlin Heidelberg.
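The scalability property rests on the fact that a multi-dimensional recurrent network applies the same cell weights at every board square, with each square's state depending on its input and on the states of already-visited neighbours, so a network trained on small boards can be evaluated unchanged on larger ones. As a minimal sketch of this idea (a plain tanh cell stands in for the paper's LSTM cell, and all names here are illustrative, not from the paper):

```python
import numpy as np

def scan_2d(board, W_in, W_up, W_left, b):
    """Scan a 2D board with a simple multi-dimensional recurrent cell.

    board has shape (n, m, f): one f-dimensional feature vector per
    square (e.g. a one-hot stone colour). The hidden state at (i, j)
    depends on the input there and on the hidden states above and to
    the left. Because the weights are shared across squares, the same
    parameters apply to boards of any size.
    """
    n, m, _ = board.shape
    h_dim = b.shape[0]
    H = np.zeros((n, m, h_dim))
    for i in range(n):
        for j in range(m):
            h_up = H[i - 1, j] if i > 0 else np.zeros(h_dim)
            h_left = H[i, j - 1] if j > 0 else np.zeros(h_dim)
            H[i, j] = np.tanh(
                W_in @ board[i, j] + W_up @ h_up + W_left @ h_left + b
            )
    return H

# The same weights process a 5x5 and a 9x9 board without retraining.
rng = np.random.default_rng(0)
h_dim, f = 4, 3
W_in = rng.standard_normal((h_dim, f))
W_up = rng.standard_normal((h_dim, h_dim))
W_left = rng.standard_normal((h_dim, h_dim))
b = rng.standard_normal(h_dim)

H_small = scan_2d(rng.standard_normal((5, 5, f)), W_in, W_up, W_left, b)
H_big = scan_2d(rng.standard_normal((9, 9, f)), W_in, W_up, W_left, b)
print(H_small.shape, H_big.shape)
```

The full architecture in the paper additionally scans the board from all four corners and uses LSTM cells rather than a plain tanh unit; this sketch only shows why weight sharing makes the parameter count independent of board size.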
CITATION STYLE
Schaul, T., & Schmidhuber, J. (2009). Scalable neural networks for board games. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5768 LNCS, pp. 1005–1014). https://doi.org/10.1007/978-3-642-04274-4_103