Temporal difference learning in Chinese chess

Abstract

Reinforcement learning has, in general, not been entirely successful at solving complex real-world problems that can be described by nonlinear functions. However, temporal difference learning, a class of reinforcement learning algorithms, has been studied and applied to a variety of prediction problems with promising results. This paper discusses the application of temporal difference learning to training a neural network to play a scaled-down version of Chinese chess. Preliminary results suggest the technique is promising: in test cases involving only a few factors of the game, the network responds favorably, while under greater complexity its performance degrades, though it still generally produces reasonable results. These results indicate that temporal difference learning has the potential to address real-world problems of equal or greater complexity, and continuing research will likely lead to more responsive and accurate systems.
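
The paper itself does not reproduce its training procedure here, but the approach the abstract describes, training a neural-network board evaluator with temporal difference learning, can be sketched in the TD(λ) style popularized by Tesauro's TD-Gammon. The sketch below is illustrative only: the feature-vector size, one-hidden-layer architecture, and hyperparameters (N_FEATURES, N_HIDDEN, ALPHA, LAMBDA) are assumptions, not the authors' actual setup.

```python
import numpy as np

# Minimal TD(lambda) sketch for a board-evaluation network.
# Everything below (feature size, architecture, hyperparameters) is an
# illustrative assumption, not the authors' implementation.

rng = np.random.default_rng(0)

N_FEATURES = 32   # assumed length of the board-feature vector
N_HIDDEN = 16     # assumed hidden-layer width
ALPHA = 0.1       # learning rate
LAMBDA = 0.7      # eligibility-trace decay

# One-hidden-layer network: V(s) = sigmoid(w2 . tanh(W1 @ x))
W1 = rng.normal(scale=0.1, size=(N_HIDDEN, N_FEATURES))
w2 = rng.normal(scale=0.1, size=N_HIDDEN)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def evaluate(x):
    """Return the value estimate V(s) and the hidden activations."""
    h = np.tanh(W1 @ x)
    return sigmoid(w2 @ h), h

def gradients(x, h, v):
    """Gradients of V(s) with respect to w2 and W1 (backprop by hand)."""
    dv = v * (1.0 - v)                           # sigmoid derivative
    g_w2 = dv * h
    g_W1 = np.outer(dv * w2 * (1.0 - h ** 2), x)
    return g_w2, g_W1

def td_lambda_episode(positions, outcome):
    """Run one TD(lambda) pass over a finished game.

    positions: feature vectors x_0 .. x_T for successive board states.
    outcome:   terminal reward z (e.g. 1.0 for a win, 0.0 for a loss).
    Update:    w += ALPHA * (V_{t+1} - V_t) * e_t,
               where e_t = LAMBDA * e_{t-1} + grad_w V_t.
    """
    global W1, w2
    e_w2, e_W1 = np.zeros_like(w2), np.zeros_like(W1)
    x_t = positions[0]
    v_t, h_t = evaluate(x_t)
    for t in range(len(positions)):
        g_w2, g_W1 = gradients(x_t, h_t, v_t)
        e_w2 = LAMBDA * e_w2 + g_w2          # accumulate eligibility traces
        e_W1 = LAMBDA * e_W1 + g_W1
        if t + 1 < len(positions):
            x_t = positions[t + 1]
            v_next, h_t = evaluate(x_t)
        else:
            v_next = outcome                 # terminal target is the game result
        delta = v_next - v_t                 # TD error
        w2 += ALPHA * delta * e_w2
        W1 += ALPHA * delta * e_W1
        v_t = v_next

# Example: one simulated 5-position game ending in a win.
game = [rng.random(N_FEATURES) for _ in range(5)]
td_lambda_episode(game, outcome=1.0)
```

In a full system along the lines the abstract suggests, the positions would be feature encodings of boards actually visited during play, and the trained network's value estimate would guide move selection.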

Citation (APA)

Trinh, T. B., Bashi, A. S., & Deshpande, N. (1998). Temporal difference learning in Chinese chess. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 1416, pp. 612–618). Springer Verlag. https://doi.org/10.1007/3-540-64574-8_447
