Reinforcement learning does not require an explicit robot model because it learns directly from data, but it faces temporal and spatial constraints when transferred to real-world environments. In this research, we trained the Furuta pendulum balancing problem, which is difficult to model, in a virtual environment (Unity) and transferred the result to the real world. The goal of the balancing problem is to keep the pendulum's end effector in a vertical position. We resolved the temporal and spatial constraints by performing reinforcement learning in the virtual environment. Furthermore, we designed a novel reward function that enables faster and more stable learning than two existing reward functions. We validated each reward function by applying it to soft actor-critic (SAC) and proximal policy optimization (PPO). The experimental results show that the cosine reward function trains faster and more stably. Finally, the SAC model trained with the cosine reward function in the virtual environment serves as an optimized controller. Additionally, we evaluated the robustness of this model by transferring it to the real environment.
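The abstract does not reproduce the exact reward definition, but a cosine-shaped reward for the balance task could take the form of the minimal Python sketch below. The function name cosine_reward and the choice of the pendulum angle from vertical as its input are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cosine_reward(theta: float) -> float:
    """Hypothetical cosine-shaped reward for a pendulum balance task.

    theta: pendulum angle from the upright position, in radians
           (0 when the end effector points straight up).
    The reward peaks at 1 when the pendulum is vertical and decreases
    smoothly as it tilts away, giving the agent a dense, well-shaped
    learning signal.
    """
    return float(np.cos(theta))

# Example values: upright, slightly tilted, hanging straight down
print(cosine_reward(0.0))        # 1.0
print(cosine_reward(np.pi / 6))  # ~0.87
print(cosine_reward(np.pi))      # -1.0
```

A smooth reward of this kind avoids the sparse signal of a binary "balanced / not balanced" reward, which is one plausible reason such shaping can speed up and stabilize training for both SAC and PPO.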
Hong, M. R., Kang, S., Lee, J., Seo, S., Han, S., Koh, J. S., & Kang, D. (2023). Optimizing Reinforcement Learning Control Model in Furuta Pendulum and Transferring it to Real-World. IEEE Access, 11, 95195–95200. https://doi.org/10.1109/ACCESS.2023.3310405