Robotic Peg-in-Hole Assembly Strategy Research Based on Reinforcement Learning Algorithm

Abstract

To improve robotic assembly performance in unstructured environments, a reinforcement learning (RL) algorithm is introduced to realize variable admittance control. In this article, the mechanics of the peg-in-hole assembly task and the admittance model are first analyzed to guide the design of the control strategy and the experimental parameters. Then, the admittance parameter identification process is formulated as a Markov decision process (MDP) and solved with the RL algorithm. Furthermore, a fuzzy reward system is established to evaluate the action–state value, addressing the difficulty of reward design; the fuzzy reward comprises a process reward and a failure punishment. Finally, four sets of experiments are carried out, including assembly experiments based on position control, fuzzy control, and the RL algorithm. The first experiment demonstrates the necessity of compliance control. The advantages of the proposed algorithm are validated by comparison across the experimental results, and the generalization ability of the RL algorithm is tested in the last two experiments. The results indicate that the proposed RL algorithm effectively improves the robot's compliant assembly ability.
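The paper's actual controller and fuzzy rules are not reproduced in the abstract; as a rough sketch under assumed dynamics, a discrete-time admittance law maps the measured contact force to a compliant position correction, and an RL policy would adjust the admittance parameters (M, B, K) at each step. A composite reward in the spirit described above (process reward plus failure punishment) is also sketched. All parameter values, thresholds, and function names below are illustrative assumptions, not the authors' implementation.

```python
def admittance_step(x, v, f_ext, M, B, K, dt):
    """One Euler step of the admittance model M*a + B*v + K*x = f_ext.

    Returns the updated position offset and velocity. In variable
    admittance control, an RL policy would retune (M, B, K) each step.
    """
    a = (f_ext - B * v - K * x) / M
    v = v + a * dt
    x = x + v * dt
    return x, v


def composite_reward(insertion_depth, contact_force, failed,
                     f_max=20.0, depth_goal=0.03):
    """Illustrative composite reward: a process term that favors insertion
    progress with low contact force, plus a fixed failure punishment.
    (The paper's fuzzy inference rules are not reproduced here.)"""
    if failed:
        return -10.0  # assumed failure punishment
    progress = min(insertion_depth / depth_goal, 1.0)
    force_penalty = min(abs(contact_force) / f_max, 1.0)
    return progress - 0.5 * force_penalty


# Example: a constant 5 N contact force drives the compliant offset
# toward the steady state f/K = 5/500 = 0.01 m.
x, v = 0.0, 0.0
for _ in range(100):
    x, v = admittance_step(x, v, f_ext=5.0, M=1.0, B=50.0, K=500.0, dt=0.01)
print(round(x, 4))
```

The steady-state offset f/K shows why compliance helps: a stiff position controller (K very large) admits almost no deviation under contact force, which is what the paper's first experiment reportedly demonstrates.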

Citation (APA)

Li, S., Yuan, X., & Niu, J. (2022). Robotic Peg-in-Hole Assembly Strategy Research Based on Reinforcement Learning Algorithm. Applied Sciences (Switzerland), 12(21). https://doi.org/10.3390/app122111149
