Self-supervised 3D Human Pose Estimation in Static Video via Neural Rendering


Abstract

Inferring 3D human pose from 2D images is a challenging and long-standing problem in computer vision, with many applications including motion capture, virtual reality, surveillance, and gait analysis for sports and medicine. We present preliminary results for a method that estimates 3D pose from 2D video containing a single person and a static background, without the need for any manual landmark annotations. We achieve this by formulating a simple yet effective self-supervision task: our model must reconstruct a random frame of a video given a frame from another time point and a rendered image of a transformed human shape template. Crucially for optimisation, our ray-casting-based rendering pipeline is fully differentiable, enabling end-to-end training based solely on the reconstruction task.
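The self-supervision task described above can be sketched as a toy reconstruction objective. The sketch below is illustrative only and is not the authors' implementation: `render_template` is a hypothetical stand-in for the differentiable ray-casting renderer (here a soft Gaussian silhouette parameterised by a 2D "pose"), and `reconstruct` is a trivial decoder that modulates the appearance frame by the rendered pose mask. The key property mirrored here is that the reconstruction loss is minimised exactly when the rendered template matches the pose in the target frame.

```python
import math
import random

def render_template(pose, h=8, w=8):
    """Hypothetical stand-in for the differentiable renderer: a soft
    silhouette whose location in the image depends on the pose parameters."""
    cy, cx = pose
    return [[math.exp(-((y - cy) ** 2 + (x - cx) ** 2) / 4.0)
             for x in range(w)] for y in range(h)]

def reconstruct(appearance, mask):
    """Toy decoder: combine appearance (from one frame) with pose
    (from the rendered template) by element-wise modulation."""
    return [[a * m for a, m in zip(ar, mr)]
            for ar, mr in zip(appearance, mask)]

def reconstruction_loss(pred, target):
    """Mean squared error between reconstructed and target frames."""
    n = len(pred) * len(pred[0])
    return sum((p - t) ** 2
               for pr, tr in zip(pred, target)
               for p, t in zip(pr, tr)) / n

# Toy data: an appearance frame (time t) and a target frame (time t')
# whose content depends on the "true" pose of the template.
random.seed(0)
frame_a = [[random.random() for _ in range(8)] for _ in range(8)]
true_pose = (3.0, 5.0)
frame_b = reconstruct(frame_a, render_template(true_pose))

loss_good = reconstruction_loss(
    reconstruct(frame_a, render_template(true_pose)), frame_b)
loss_bad = reconstruction_loss(
    reconstruct(frame_a, render_template((0.0, 0.0))), frame_b)
print(loss_good, loss_bad)  # zero at the correct pose, positive otherwise
```

In the actual method this idea is scaled up: the renderer casts rays against a 3D human shape template, every step is differentiable, and the pose parameters are predicted by a neural network trained end-to-end on the reconstruction loss alone.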

Citation (APA)

Schmidtke, L., Hou, B., Vlontzos, A., & Kainz, B. (2023). Self-supervised 3D Human Pose Estimation in Static Video via Neural Rendering. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13803 LNCS, pp. 704–713). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-25066-8_42
