Multitarget Flexible Grasping Detection Method for Robots in Unstructured Environments

Abstract

In present-day industrial settings, where robot arms perform tasks in unstructured environments, numerous objects of various shapes may lie scattered in random positions, making it difficult for a robot arm to attain the precise pose needed to grasp an object. To solve this problem, a multistage robotic-arm flexible grasp detection method based on deep learning is proposed. The method first improves the Faster RCNN target detection model, significantly strengthening its ability to detect multiscale grasped objects in unstructured scenes. A Squeeze-and-Excitation module is then introduced to design a multitarget grasping pose generation network, based on a deep convolutional neural network, that generates a variety of graspable poses for the detected objects. Finally, a multiobjective IOU mixed-area attitude evaluation algorithm is constructed to screen out the optimal grasping area of each grasped object and obtain the optimal grasping posture of the robotic arm. Experimental results show that the improved target detection network reaches 96.6% accuracy, the grasping pose generation network reaches 94% grasping-frame accuracy, and the robotic arm can efficiently and accurately perform flexible grasping tasks in real unstructured scenes.
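The Squeeze-and-Excitation module mentioned in the abstract reweights feature channels by their learned importance. The paper's exact integration into the pose generation network is not given here, so the following is only a minimal numpy sketch of the standard SE operation (squeeze by global average pooling, excite through a bottleneck MLP with a sigmoid gate); the function and weight names are illustrative, not from the paper.

```python
import numpy as np

def squeeze_excite(feature_map, w1, w2):
    """Minimal Squeeze-and-Excitation channel attention (sketch).

    feature_map: array of shape (C, H, W)
    w1: bottleneck weights of shape (C // r, C), r = reduction ratio
    w2: expansion weights of shape (C, C // r)
    """
    # Squeeze: global average pooling collapses each channel to one scalar
    z = feature_map.mean(axis=(1, 2))              # shape (C,)
    # Excitation: bottleneck MLP, ReLU then sigmoid gate per channel
    s = np.maximum(w1 @ z, 0.0)                    # shape (C // r,)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ s)))         # shape (C,), values in (0, 1)
    # Rescale: multiply every channel by its gate value
    return feature_map * gate[:, None, None]
```

With zero weights the gate is sigmoid(0) = 0.5 for every channel, so each channel is simply halved; trained weights would instead emphasize channels informative for grasp pose generation.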

Citation (APA)

Fan, Q., Rao, Q., & Huang, H. (2023). Multitarget Flexible Grasping Detection Method for Robots in Unstructured Environments. CMES - Computer Modeling in Engineering and Sciences, 137(2), 1825–1848. https://doi.org/10.32604/cmes.2023.028369
