Personalised pose estimation from single-plane moving fluoroscope images using deep convolutional neural networks

Abstract

Measuring joint kinematics is a key requirement for a plethora of biomechanical research and applications. While X-ray-based systems avoid the soft-tissue artefacts that arise in skin-based measurement systems, extracting the object's pose (translation and rotation) from the X-ray images is a time-consuming and expensive task. Based on about 106,000 annotated images of knee implants, collected over the last decade with our moving fluoroscope during activities of daily living, we trained a deep-learning model to automatically estimate the 6D poses of the femoral and tibial implant components. By pretraining a single stage of our architecture using renderings of the implant geometries, our approach offers personalised predictions of the implant poses, even for unseen subjects. Our approach predicted the pose of both implant components to better than about 0.75 mm (in-plane translation), 25 mm (out-of-plane translation), and 2° (all Euler-angle rotations) over 50% of the test samples. When evaluated over 90% of the test samples, which included heavy occlusions and low-contrast images, translation errors remained better than 1.5 mm (in-plane) and 30 mm (out-of-plane), while rotations were predicted to better than 3-4°. Importantly, this approach now allows pose estimation to be performed in a fully automated manner.
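The abstract frames the task as direct 6D pose regression from a single grayscale fluoroscope frame. As a rough illustration only, the sketch below shows one way such a regressor could be set up in PyTorch; the ResNet backbone, head size, and the one-6-DoF-output-per-implant-component layout are assumptions made for this sketch, not the authors' published architecture (which additionally pretrains a stage on renderings of the implant geometries).

import torch
import torch.nn as nn
import torchvision.models as models

class PoseRegressor(nn.Module):
    """Toy 6D pose regressor: maps one single-plane fluoroscope image to
    (tx, ty, tz, rx, ry, rz) per implant component. Illustrative only;
    not the architecture described in the paper."""

    def __init__(self, n_components: int = 2):  # femoral + tibial
        super().__init__()
        self.n_components = n_components
        # Off-the-shelf backbone as a stand-in for the paper's CNN stages.
        backbone = models.resnet18(weights=None)
        # Fluoroscope frames are single-channel, so swap the first conv.
        backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2,
                                   padding=3, bias=False)
        backbone.fc = nn.Identity()  # expose the 512-d feature vector
        self.backbone = backbone
        # One 6-DoF regression output (3 translations, 3 Euler angles)
        # for each implant component.
        self.head = nn.Linear(512, 6 * n_components)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = self.backbone(x)                        # (B, 512)
        return self.head(features).view(-1, self.n_components, 6)

model = PoseRegressor()
frame = torch.randn(1, 1, 256, 256)  # dummy grayscale fluoroscope frame
poses = model(frame)                 # poses[0, 0]: femoral, poses[0, 1]: tibial

In the paper's setting, supervision would come from the roughly 106,000 annotated images, and a pretraining stage on implant-geometry renderings would precede the supervised regression; both steps are omitted from this sketch.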

Citation (APA)

Vogl, F., Schütz, P., Postolka, B., List, R., & Taylor, W. R. (2022). Personalised pose estimation from single-plane moving fluoroscope images using deep convolutional neural networks. PLoS ONE, 17(6), e0270596. https://doi.org/10.1371/journal.pone.0270596
