Neurodynamics Adaptive Reward and Action for Hand-to-Eye Calibration With Deep Reinforcement Learning

Abstract

Calibration performed by a robotic manipulator is crucial in industrial intelligent production, as it ensures precise and accurate measurements. In this paper, we present a new method for addressing the hand-to-eye calibration problem using deep reinforcement learning. The proposed algorithm uses an actor-critic framework and incorporates neurodynamics adaptive reward and action functions, which improve convergence, reduce the dependence on the initial value, and overcome the local-convergence issues of traditional deep reinforcement learning methods. Additionally, we introduce a step-wise mechanism, guided by an attention mechanism and zero stability, to handle the complexity of the calibration task in challenging environments. A number of experiments were conducted to demonstrate the validity of the proposed algorithm. The experimental results show that the proposed algorithm achieves a nearly 100% success rate after the training phase. We also compared it with other widely used methods, such as deep deterministic policy gradient (DDPG) and soft actor-critic (SAC), to further demonstrate its effectiveness.
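
To illustrate the kind of actor-critic update the abstract refers to, the following is a minimal sketch in PyTorch. The environment dimensions, network sizes, and the simple error-weighted reward schedule below are assumptions made for demonstration only; they are not the paper's neurodynamics adaptive reward and action functions, which are defined in the full article.

```python
# Minimal actor-critic sketch for a calibration-style task (illustrative only).
# The state/action dimensions, network sizes, and the adaptive reward shaping
# below are assumptions for demonstration; they are NOT the paper's
# neurodynamics reward/action formulation.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 6, 6  # e.g., a 6-DoF pose error and its correction

actor = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.Tanh(),
                      nn.Linear(64, ACTION_DIM), nn.Tanh())
critic = nn.Sequential(nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.Tanh(),
                       nn.Linear(64, 1))
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def adaptive_reward(calib_error: torch.Tensor, step: int) -> torch.Tensor:
    """Stand-in adaptive reward: penalize the calibration error with a weight
    that tightens as training progresses (a simple schedule, not the paper's
    neurodynamics formulation)."""
    weight = 1.0 + 0.01 * step
    return -weight * calib_error.norm()

def update(state, action, reward, next_state, gamma=0.99):
    # Critic: one-step TD target using the actor's action at the next state.
    with torch.no_grad():
        next_action = actor(next_state)
        target = reward + gamma * critic(torch.cat([next_state, next_action]))
    q = critic(torch.cat([state, action]))
    critic_loss = (q - target).pow(2).mean()
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor: ascend the critic's value of the actor's own action.
    actor_loss = -critic(torch.cat([state, actor(state)])).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
```

In this sketch the reward weight simply grows with the training step, standing in for the adaptive shaping that the paper credits with improved convergence and reduced sensitivity to the initial value.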

Citation (APA)
Zheng, Z., Yu, M., Guo, P., & Zeng, D. (2023). Neurodynamics Adaptive Reward and Action for Hand-to-Eye Calibration With Deep Reinforcement Learning. IEEE Access, 11, 60292–60304. https://doi.org/10.1109/ACCESS.2023.3287098
