A reinforcement learning enhanced pseudo-inverse approach to self-collision avoidance of redundant robots

Abstract

Introduction: Redundant robots offer greater flexibility than non-redundant ones, but they face an increased risk of self-collision when the end-effector approaches the robot's own links. The redundant degrees of freedom (DoFs) provide an opportunity for collision avoidance; however, because infinitely many inverse kinematics (IK) solutions exist, selecting an appropriate one remains challenging.

Methods: This study proposes a reinforcement learning (RL)-enhanced pseudo-inverse approach to self-collision avoidance in redundant robots. The RL agent is integrated into the redundancy-resolution step of a pseudo-inverse method to select an IK solution that avoids self-collision during task execution. In addition, an improved replay buffer is used to enhance the performance of the RL algorithm.

Results: Simulations and experiments validate the effectiveness of the proposed method in reducing the risk of self-collision in redundant robots.

Conclusion: The RL-enhanced pseudo-inverse approach demonstrates promising results in mitigating self-collision risks in redundant robots, highlighting its potential for improving the safety and performance of robotic systems.
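
The abstract does not spell out the redundancy-resolution formulation, but a common pseudo-inverse scheme computes joint velocities as q_dot = J⁺ x_dot + (I − J⁺J) z, where the null-space term z realizes a secondary objective without disturbing the end-effector task. The sketch below assumes this standard formulation and shows where an RL agent's output could plug in as the null-space vector z; the 3-DoF planar Jacobian, link lengths, and the placeholder value of z are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (not the paper's implementation) of pseudo-inverse
# redundancy resolution with a null-space term:
#     q_dot = J^+ x_dot + (I - J^+ J) z
# Here z is assumed to be supplied by an RL agent to push the arm away
# from self-collision; the toy 3-link planar arm is redundant for a 2D task.
import numpy as np


def planar_jacobian(q, link_lengths=(0.4, 0.3, 0.2)):
    """2x3 position Jacobian of a 3-link planar arm (illustrative model)."""
    l1, l2, l3 = link_lengths
    s1, s12, s123 = np.sin(q[0]), np.sin(q[0] + q[1]), np.sin(q.sum())
    c1, c12, c123 = np.cos(q[0]), np.cos(q[0] + q[1]), np.cos(q.sum())
    return np.array([
        [-l1 * s1 - l2 * s12 - l3 * s123, -l2 * s12 - l3 * s123, -l3 * s123],
        [ l1 * c1 + l2 * c12 + l3 * c123,  l2 * c12 + l3 * c123,  l3 * c123],
    ])


def redundancy_resolution(J, x_dot, z):
    """Joint velocity from the task velocity plus a null-space contribution."""
    J_pinv = np.linalg.pinv(J)                      # Moore-Penrose pseudo-inverse
    null_proj = np.eye(J.shape[1]) - J_pinv @ J     # projector onto null(J)
    return J_pinv @ x_dot + null_proj @ z


if __name__ == "__main__":
    q = np.array([0.3, -0.5, 0.8])       # current joint configuration
    x_dot = np.array([0.05, 0.0])        # desired end-effector velocity
    z = np.array([0.0, 0.2, -0.1])       # placeholder for the RL agent's output
    q_dot = redundancy_resolution(planar_jacobian(q), x_dot, z)
    print(q_dot)
```

Because z is projected onto the Jacobian's null space, whatever self-collision-avoiding motion it induces leaves the commanded end-effector velocity unchanged, which is what makes this integration point attractive for a learned policy.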

Citation (APA)

Hong, T., Li, W., & Huang, K. (2024). A reinforcement learning enhanced pseudo-inverse approach to self-collision avoidance of redundant robots. Frontiers in Neurorobotics, 18. https://doi.org/10.3389/fnbot.2024.1375309
