3D hypothesis clustering for cross-view matching in multi-person motion capture

Abstract

We present a multiview method for markerless motion capture of multiple people. The main challenge in this problem is to determine cross-view correspondences for the 2D joints in the presence of noise. We propose a 3D hypothesis clustering technique to solve this problem. The core idea is to transform joint matching in 2D space into a clustering problem in a 3D hypothesis space. In this way, evidence from photometric appearance, multiview geometry, and bone length can be integrated to solve the clustering problem efficiently and robustly. Each cluster encodes a set of matched 2D joints for the same person across different views, from which the 3D joints can be effectively inferred. We then assemble the inferred 3D joints to form full-body skeletons for all persons in a bottom-up way. Our experiments demonstrate the robustness of our approach even in challenging cases with heavy occlusion, closely interacting people, and few cameras. We have evaluated our method on many datasets, and our results show that it has significantly lower estimation errors than many state-of-the-art methods.
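To make the pipeline concrete, below is a minimal Python sketch of the hypothesis-clustering idea, not the authors' implementation: 2D detections of one joint type are triangulated pairwise across calibrated views into 3D hypotheses, which are then grouped by a simple greedy radius-based clustering, so that each cluster yields a fused 3D joint together with the set of matched 2D detections. The appearance and bone-length cues used in the paper are omitted here, and all function names and the clustering radius are illustrative assumptions.

# Minimal sketch (not the authors' code) of 3D hypothesis clustering:
# triangulate candidate 3D joints from pairs of 2D detections across views,
# then cluster the hypotheses in 3D so each cluster gathers one person's joints.
import itertools
import numpy as np

def triangulate_pair(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices; x1, x2: 2D joint detections (pixels).
    """
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def cluster_hypotheses(hypotheses, radius=0.15):
    """Greedy clustering of 3D hypothesis points within a metric radius (assumed value).

    hypotheses: list of (point_3d, (view_a, det_a), (view_b, det_b)).
    Returns clusters, each a list of hypotheses assumed to belong to one person.
    """
    clusters = []
    for h in hypotheses:
        for c in clusters:
            center = np.mean([p for p, *_ in c], axis=0)
            if np.linalg.norm(h[0] - center) < radius:
                c.append(h)
                break
        else:
            clusters.append([h])
    return clusters

def match_joints_across_views(detections, projections, radius=0.15):
    """detections[v]: list of 2D positions of one joint type in view v."""
    hypotheses = []
    for va, vb in itertools.combinations(range(len(detections)), 2):
        for ia, xa in enumerate(detections[va]):
            for ib, xb in enumerate(detections[vb]):
                X = triangulate_pair(projections[va], projections[vb], xa, xb)
                hypotheses.append((X, (va, ia), (vb, ib)))
    clusters = cluster_hypotheses(hypotheses, radius)
    # Each cluster yields a fused 3D joint plus the matched (view, detection) pairs.
    return [(np.mean([p for p, *_ in c], axis=0),
             {pair for _, *pairs in c for pair in pairs}) for c in clusters]

In the actual method, cluster assignment also integrates photometric appearance and bone-length consistency; in this sketch, proximity in 3D space alone stands in for that evidence, and the per-person skeletons would subsequently be assembled from the clustered joints in a bottom-up fashion.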

Citation (APA)

Li, M., Zhou, Z., & Liu, X. (2020). 3D hypothesis clustering for cross-view matching in multi-person motion capture. Computational Visual Media, 6(2), 147–156. https://doi.org/10.1007/s41095-020-0171-y
