
PiGraphs: Learning interaction snapshots from observations


Abstract

We learn a probabilistic model connecting human poses and arrangements of object geometry from real-world observations of interactions collected with commodity RGB-D sensors. This model is encoded as a set of prototypical interaction graphs (PiGraphs), a human-centric representation capturing physical contact and visual attention linkages between 3D geometry and human body parts. We use this encoding of the joint probability distribution over pose and geometry during everyday interactions to generate interaction snapshots, which are static depictions of human poses and relevant objects during human-object interactions. We demonstrate that our model enables a novel human-centric understanding of 3D content and allows for jointly generating 3D scenes and interaction poses given terse high-level specifications, natural language, or reconstructed real-world scene constraints.
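The PiGraph representation described in the abstract can be pictured as a graph linking human body parts to object parts, with edges recording contact or visual attention and probabilities estimated from observed interactions. The following is a minimal, hypothetical sketch of that idea only; the class and method names (PiGraph, add_observation, link_prob) are illustrative assumptions, not the authors' actual data structures or code.

```python
# Illustrative sketch of the PiGraph idea: nodes for body parts and object
# parts, edges for contact / gaze linkages, with counts aggregated over
# observed interactions. All names here are hypothetical.
from collections import defaultdict
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Node:
    kind: str   # "joint" (e.g. "hips", "right hand") or "object" (e.g. "chair seat")
    label: str


@dataclass
class PiGraph:
    """Aggregates contact / gaze links seen across recorded interactions."""
    num_observations: int = 0
    link_counts: dict = field(default_factory=lambda: defaultdict(int))

    def add_observation(self, links):
        """links: iterable of (joint_node, object_node, link_type) tuples
        extracted from one observed interaction."""
        self.num_observations += 1
        for joint, obj, link_type in links:
            self.link_counts[(joint, obj, link_type)] += 1

    def link_prob(self, joint, obj, link_type):
        """Empirical probability that this linkage is active in the interaction."""
        if self.num_observations == 0:
            return 0.0
        return self.link_counts[(joint, obj, link_type)] / self.num_observations


# Toy example: a "sitting at a desk" prototype built from two observed frames.
sit = PiGraph()
hips = Node("joint", "hips")
hand = Node("joint", "right hand")
head = Node("joint", "head")
seat = Node("object", "chair seat")
desk = Node("object", "desk surface")
monitor = Node("object", "monitor")

sit.add_observation([(hips, seat, "contact"), (hand, desk, "contact"), (head, monitor, "gaze")])
sit.add_observation([(hips, seat, "contact"), (head, monitor, "gaze")])

print(sit.link_prob(hips, seat, "contact"))   # 1.0
print(sit.link_prob(hand, desk, "contact"))   # 0.5
```

In this toy version, high-probability links (hips-seat contact, head-monitor gaze) would characterize the prototype and could constrain which poses and object arrangements are plausible when generating a snapshot; the actual model in the paper is a learned joint probability distribution over pose and geometry, not simple link frequencies.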

Cite

APA

Savva, M., Chang, A. X., Hanrahan, P., Fisher, M., & Nießner, M. (2016). PiGraphs: Learning interaction snapshots from observations. ACM Transactions on Graphics, 35. Association for Computing Machinery. https://doi.org/10.1145/2897824.2925867
