Dynamic Virtual Fixture Generation Based on Intra-Operative 3D Image Feedback in Robot-Assisted Minimally Invasive Thoracic Surgery


Abstract

This paper proposes a method for generating dynamic virtual fixtures from real-time 3D image feedback to facilitate human–robot collaboration in medical robotics. Seamless shared control in a dynamic environment, such as a surgical field, remains challenging despite extensive research on collaborative control and planning. To address this problem, our method dynamically creates virtual fixtures that guide the manipulation of a trocar-placing robot arm using a force field computed from point cloud data captured by an RGB-D camera. Additionally, a "view scope" concept selectively determines the region of points used in the computation, thereby reducing computational load. In a phantom experiment on robot-assisted port incision in minimally invasive thoracic surgery, our method substantially improved port-placement accuracy, reducing error and completion time by (Formula presented.) ((Formula presented.)) and (Formula presented.) ((Formula presented.)), respectively. These results suggest that the proposed approach shows promise for improving surgical human–robot collaboration.
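The abstract describes two core ideas: a repulsive force field derived from RGB-D point cloud data, and a "view scope" that restricts the force computation to points near the tool. The sketch below illustrates one plausible realization in plain Python — the spherical scope shape, the inverse-square force law, and the gain parameter are all assumptions for illustration; the paper's actual formulation may differ.

```python
import math

def view_scope_filter(points, tool_pos, radius):
    """Keep only point cloud points within a spherical 'view scope'
    around the tool tip (hypothetical scope shape)."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    return [p for p in points if dist(p, tool_pos) <= radius]

def repulsive_force(points, tool_pos, gain=1.0, eps=1e-6):
    """Sum an inverse-square repulsive contribution from each scoped point,
    pushing the tool away from nearby surfaces (a common potential-field
    choice, assumed here)."""
    force = [0.0, 0.0, 0.0]
    for p in points:
        d = [t - c for t, c in zip(tool_pos, p)]  # vector from point to tool
        r = math.sqrt(sum(x * x for x in d)) + eps
        for i in range(3):
            force[i] += gain * d[i] / r**3  # unit direction d/r scaled by 1/r^2
    return force

# Example: only the nearby point survives the view scope and contributes force.
cloud = [(0.1, 0.0, 0.0), (2.0, 0.0, 0.0)]
scoped = view_scope_filter(cloud, (0.0, 0.0, 0.0), radius=1.0)
f = repulsive_force(scoped, (0.0, 0.0, 0.0))
```

Filtering with the view scope before summing forces is what yields the computational saving the abstract mentions: distant points are excluded from the per-frame force computation entirely.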

Citation (APA)

Shi, Y., Zhu, P., Wang, T., Mai, H., Yeh, X., Yang, L., & Wang, J. (2024). Dynamic Virtual Fixture Generation Based on Intra-Operative 3D Image Feedback in Robot-Assisted Minimally Invasive Thoracic Surgery. Sensors, 24(2). https://doi.org/10.3390/s24020492
