Generalisation over Details: The Unsuitability of Supervised Backpropagation Networks for Tetris

  • Lewis, I. J.
  • Beswick, S. L.
Citations: N/A · Mendeley readers: 6

This article is free to access.

Abstract

We demonstrate the unsuitability of Artificial Neural Networks (ANNs) for the game of Tetris and show that their great strength, their ability to generalize, is the ultimate cause. This work describes a variety of attempts to apply the supervised learning approach to Tetris and demonstrates that these approaches resoundingly fail to reach the level of performance of hand-crafted Tetris-solving algorithms. We examine the reasons behind this failure and also present some interesting auxiliary results: training a separate network for each Tetris piece tends to outperform training a single network for all pieces; training with randomly generated rows tends to increase the performance of the networks; and networks trained on smaller board widths and then extended to play on bigger boards showed no evidence of learning. We conclude that ANNs trained via supervised learning are ultimately ill-suited to Tetris.
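To make the setup concrete, the following is a minimal sketch of the kind of approach the abstract describes: one small feedforward network per tetromino, trained by supervised backpropagation to score board states, with targets supplied by a hand-crafted heuristic. All names, feature choices, network sizes, and the heuristic weights here are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

# Illustrative sketch (not the authors' exact setup): a tiny MLP per
# piece, trained by plain backprop (MSE loss) to imitate a hand-crafted
# Tetris evaluation function. Board size, features, and the heuristic
# are assumptions for the sake of a runnable example.

BOARD_W, BOARD_H = 6, 8  # small board, echoing the paper's width experiments

def board_features(board):
    """Column heights plus hole count: one common compact board encoding."""
    heights = np.zeros(BOARD_W)
    holes = 0
    for c in range(BOARD_W):
        col = board[:, c]
        filled = np.nonzero(col)[0]
        if filled.size:
            top = filled[0]
            heights[c] = BOARD_H - top
            holes += np.sum(col[top:] == 0)
    return np.concatenate([heights / BOARD_H, [holes / (BOARD_W * BOARD_H)]])

def heuristic_score(board):
    """Hand-crafted supervision target: penalise height and holes."""
    f = board_features(board)
    return -np.sum(f[:-1]) - 4.0 * f[-1]

class PieceNet:
    """One small network per piece, trained with vanilla backprop."""
    def __init__(self, n_in, n_hidden=8, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.5, n_hidden)
        self.b2 = 0.0
        self.lr = lr

    def forward(self, x):
        self.x = x
        self.h = np.tanh(x @ self.W1 + self.b1)
        return self.h @ self.W2 + self.b2

    def train_step(self, x, target):
        y = self.forward(x)
        err = y - target                       # dL/dy for 0.5 * (y - t)^2
        dh = err * self.W2 * (1 - self.h ** 2)  # backprop through tanh
        self.W2 -= self.lr * err * self.h
        self.b2 -= self.lr * err
        self.W1 -= self.lr * np.outer(self.x, dh)
        self.b1 -= self.lr * dh
        return 0.5 * err ** 2
```

In use, a separate `PieceNet` would be trained for each of the seven tetrominoes on boards with randomly generated rows, and placements chosen by scoring each legal drop; the paper's finding is that even when such networks fit the heuristic well, their generalization smooths away the exact-detail distinctions that good Tetris play requires.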

Citation (APA)

Lewis, I. J., & Beswick, S. L. (2015). Generalisation over Details: The Unsuitability of Supervised Backpropagation Networks for Tetris. Advances in Artificial Neural Systems, 2015, 1–8. https://doi.org/10.1155/2015/157983
