A Hybrid Learning Strategy for Real Hardware of Swing-Up Pendulum

Abstract

Bottom-up learning approaches such as neural networks, when used to obtain an optimal controller for a target task on a mechanical system, generally require a huge number of trials, which take considerable time and place stress on the hardware. To avoid these problems, a simulator is often built and the learning method is run on it. However, this raises further questions: how should the simulator be constructed, and how accurately does it reproduce the hardware? In this paper, we consider constructing a simulator directly from the real hardware. The constructed simulator is then used to learn the target task, and the resulting optimal controller is applied back to the real hardware. As an example, we chose the pendulum swing-up task, a typical nonlinear control problem. The simulator is constructed by training a neural network with the back-propagation method, and the optimal controller is obtained by a reinforcement learning method. Once the data have been sampled, both processes run without the real hardware; the load on the hardware is therefore much smaller, and the objective controller can be obtained faster than by learning on the hardware alone. We consider the proposed method a basic learning strategy for obtaining optimal controllers of mechanical systems.
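The two-stage strategy the abstract describes can be sketched in code. The sketch below is illustrative only: the "real hardware" is replaced by an analytic pendulum stand-in, and all dynamics constants, the network size, the learning rates, and the reinforcement-learning settings are assumptions for the demonstration, not values from the paper. Stage 1 samples transitions and fits a small neural network by back-propagation to predict state changes; stage 2 runs tabular Q-learning entirely on that learned simulator.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Stage 0: stand-in for the real hardware (an analytic pendulum here) ---
DT, G, L, M = 0.05, 9.8, 1.0, 1.0   # illustrative constants, not the paper's
def hardware_step(th, om, u):
    """One Euler step; th = angle from the hanging-down position."""
    om = om + DT * (-(G / L) * np.sin(th) + u / (M * L * L))
    th = th + DT * om
    return th, om

# --- Stage 1: sample transitions once, then fit the simulator offline ---
X, Y = [], []
th, om = 0.0, 0.0
for k in range(4000):
    if k % 100 == 0:                          # occasional reset, as on hardware
        th, om = rng.uniform(-np.pi, np.pi), rng.uniform(-1, 1)
    u = rng.uniform(-2, 2)                    # random exploratory torque
    th2, om2 = hardware_step(th, om, u)
    X.append([np.sin(th), np.cos(th), om, u])
    Y.append([th2 - th, om2 - om])            # network predicts state deltas
    th, om = th2, om2
X, Y = np.array(X), np.array(Y)

# one-hidden-layer network trained by plain back-propagation
W1 = rng.normal(0, 0.3, (4, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.3, (32, 2)); b2 = np.zeros(2)
def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

losses, lr = [], 0.05
for epoch in range(400):
    h, pred = forward(X)
    err = pred - Y
    losses.append(float((err ** 2).mean()))
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)          # back-propagate through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

def sim_step(th, om, u):
    """The learned simulator: replaces the hardware during RL."""
    _, d = forward(np.array([np.sin(th), np.cos(th), om, u]))
    return th + d[0], om + d[1]

# --- Stage 2: reinforcement learning entirely on the learned simulator ---
N_TH, N_OM, TORQUES = 21, 21, np.array([-2.0, 0.0, 2.0])
def disc(th, om):
    i = int(np.mod(th, 2 * np.pi) / (2 * np.pi) * N_TH) % N_TH
    j = int(np.clip((om + 8) / 16 * N_OM, 0, N_OM - 1))
    return i, j

Q = np.zeros((N_TH, N_OM, len(TORQUES)))
for ep in range(200):
    th, om = rng.normal(0, 0.1), 0.0
    for t in range(200):
        i, j = disc(th, om)
        a = rng.integers(3) if rng.random() < 0.2 else int(Q[i, j].argmax())
        th2, om2 = sim_step(th, om, TORQUES[a])
        r = -np.cos(th2)                      # reward peaks when upright
        i2, j2 = disc(th2, om2)
        Q[i, j, a] += 0.2 * (r + 0.98 * Q[i2, j2].max() - Q[i, j, a])
        th, om = th2, om2
```

After stage 1, the hardware is no longer touched: the thousands of rollouts that Q-learning needs are absorbed entirely by the learned model, which is the source of the reduced hardware load the abstract claims. The greedy policy `Q[i, j].argmax()` would then be applied back to the real system.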

Citation (APA)

Nakamura, S., Saegusa, R., & Hashimoto, S. (2007). A Hybrid Learning Strategy for Real Hardware of Swing-Up Pendulum. Journal of Advanced Computational Intelligence and Intelligent Informatics, 11(8), 972–978. https://doi.org/10.20965/jaciii.2007.p0972
