In present-day industrial settings, where robot arms perform tasks in unstructured environments, numerous objects of various shapes may be scattered in random positions, making it challenging for a robot arm to precisely attain the ideal pose for grasping an object. To address this problem, a multistage flexible grasp detection method for robotic arms based on deep learning is proposed. The method first improves the Faster RCNN target detection model, significantly enhancing its ability to detect multiscale grasped objects in unstructured scenes. A Squeeze-and-Excitation module is then introduced to design a multitarget grasping pose generation network based on a deep convolutional neural network, which generates a variety of graspable poses for the detected objects. Finally, a multiobjective IoU mixed-area pose evaluation algorithm is constructed to screen out the optimal grasping region of the grasped object and obtain the optimal grasping posture for the robotic arm. Experimental results show that the improved target detection network reaches an accuracy of 96.6%, the grasping pose generation network achieves a grasping-frame accuracy of 94%, and the flexible grasping task of the robotic arm in unstructured real-world scenes can be carried out efficiently and accurately.
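The abstract names a Squeeze-and-Excitation (SE) module as the channel-attention component added to the grasping pose generation network, but does not specify where it is inserted or with what hyperparameters. The sketch below shows the standard SE block (squeeze via global average pooling, excitation via a bottleneck MLP); the reduction ratio of 16 and the 256-channel feature map are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn


class SEBlock(nn.Module):
    """Standard Squeeze-and-Excitation block: channel-wise attention
    computed from globally pooled features and applied as per-channel weights."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global spatial average per channel
        self.fc = nn.Sequential(             # excitation: two-layer bottleneck MLP
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.pool(x).view(b, c)           # (B, C) channel descriptor
        w = self.fc(w).view(b, c, 1, 1)       # per-channel weights in [0, 1]
        return x * w                          # recalibrated feature map


# Usage: reweight a hypothetical backbone feature map before pose regression.
feat = torch.randn(2, 256, 40, 40)
out = SEBlock(channels=256)(feat)  # same shape as input
```

In the paper's pipeline this kind of block would recalibrate convolutional features so that channels relevant to graspable regions are emphasized; the exact integration point within the multitarget pose generation network is not described in the abstract.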
CITATION STYLE
Fan, Q., Rao, Q., & Huang, H. (2023). Multitarget Flexible Grasping Detection Method for Robots in Unstructured Environments. CMES - Computer Modeling in Engineering and Sciences, 137(2), 1825–1848. https://doi.org/10.32604/cmes.2023.028369