Human–robot collaborative assembly based on eye-hand and a finite state machine in a virtual environment


Abstract

With the growth of the global economy, demand on manufacturing continues to increase, and human–robot collaborative assembly has accordingly become a research hotspot. This paper addresses the efficiency problems of traditional human–robot collaboration by proposing a collaborative assembly method based on eye–hand data and a finite state machine. The method infers the human's intention from posture and eye-gaze data and uses it to control a robot to grasp an object, move it, and perform co-assembly. The robot's automatic path planning is based on a probabilistic roadmap planner. Virtual reality tests show that the proposed method is more efficient than traditional approaches.
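Since the abstract describes the controller only at a high level, the sketch below illustrates how a finite state machine driven by fused eye-gaze and hand-posture cues might switch a robot between grasp, move, and assembly phases. It is a minimal, assumed illustration: the state names, the IntentEstimate fields, and the transition conditions are hypothetical and are not taken from the paper.

```python
# Illustrative finite-state-machine sketch for intention-driven human-robot
# co-assembly. State names, IntentEstimate fields, and transition conditions
# are hypothetical; the paper's actual design may differ.
from dataclasses import dataclass
from enum import Enum, auto


class RobotState(Enum):
    IDLE = auto()      # wait for the human to indicate a target part
    GRASP = auto()     # close the gripper on the gazed-at part
    MOVE = auto()      # transport the part along a planned path
    ASSEMBLE = auto()  # hold the part while the human completes the assembly


@dataclass
class IntentEstimate:
    """Fused eye-hand observation (hypothetical fields)."""
    gaze_on_part: bool      # gaze fixated on a graspable part
    hand_reaching: bool     # hand posture indicates a reach toward the part
    part_grasped: bool      # gripper reports a stable grasp
    at_assembly_pose: bool  # end-effector reached the assembly location
    assembly_done: bool     # sensors or the human signal completion


def next_state(state: RobotState, intent: IntentEstimate) -> RobotState:
    """One FSM step: map the current state and fused intent to the next state."""
    if state is RobotState.IDLE and intent.gaze_on_part and intent.hand_reaching:
        return RobotState.GRASP
    if state is RobotState.GRASP and intent.part_grasped:
        return RobotState.MOVE          # a PRM path query would be issued here
    if state is RobotState.MOVE and intent.at_assembly_pose:
        return RobotState.ASSEMBLE
    if state is RobotState.ASSEMBLE and intent.assembly_done:
        return RobotState.IDLE
    return state                        # no transition condition met


if __name__ == "__main__":
    # Walk through one grasp-move-assemble cycle with canned observations.
    observations = [
        IntentEstimate(True, True, False, False, False),
        IntentEstimate(True, False, True, False, False),
        IntentEstimate(False, False, True, True, False),
        IntentEstimate(False, False, False, True, True),
    ]
    state = RobotState.IDLE
    for obs in observations:
        state = next_state(state, obs)
        print(state.name)   # GRASP, MOVE, ASSEMBLE, IDLE
```

In such a setup the motion between states would be supplied by the probabilistic roadmap planner mentioned in the abstract, with the FSM deciding only when each planned motion is triggered.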

Citation (APA)

Zhao, X., He, Y., Chen, X., & Liu, Z. (2021). Human–robot collaborative assembly based on eye-hand and a finite state machine in a virtual environment. Applied Sciences (Switzerland), 11(12). https://doi.org/10.3390/app11125754
