A WiFi Vision-Based 3D Human Mesh Reconstruction

Abstract

In this work, we present Wi-Mesh, a WiFi vision-based 3D human mesh construction system. Our system leverages advances in WiFi sensing to visualize the shape and deformations of the human body for 3D mesh construction. In particular, it estimates the two-dimensional angle of arrival (2D AoA) of WiFi signal reflections to enable WiFi devices to "see" the physical environment as humans do. It then extracts only the human body from the imaged environment and leverages deep learning models to digitize the extracted body into a 3D mesh representation. Experimental evaluation under various indoor environments shows that Wi-Mesh achieves an average vertex location error of 2.58 cm and a joint position error of 2.24 cm.
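To make the 2D AoA step more concrete, below is a minimal, illustrative sketch of estimating azimuth and elevation from WiFi channel state information (CSI) using the classic 2D MUSIC algorithm over a rectangular antenna array. The abstract does not state which estimator Wi-Mesh actually uses, so the algorithm choice, array geometry, carrier frequency, element spacing, and all variable names here are assumptions for illustration only.

```python
# Hypothetical 2D-AoA (azimuth/elevation) estimation via 2D MUSIC.
# Assumed setup: 4x4 uniform rectangular array, half-wavelength spacing,
# 5.18 GHz WiFi carrier. Not the paper's actual implementation.
import numpy as np

C = 3e8                 # speed of light (m/s)
FREQ = 5.18e9           # assumed WiFi carrier frequency (Hz)
WAVELEN = C / FREQ
D = WAVELEN / 2         # assumed half-wavelength element spacing
NX, NY = 4, 4           # assumed 4x4 rectangular array

def steering_vector(azimuth, elevation):
    """Array response for a plane wave arriving from (azimuth, elevation), in radians."""
    kx = 2 * np.pi * D / WAVELEN * np.cos(elevation) * np.cos(azimuth)
    ky = 2 * np.pi * D / WAVELEN * np.cos(elevation) * np.sin(azimuth)
    ax = np.exp(1j * kx * np.arange(NX))
    ay = np.exp(1j * ky * np.arange(NY))
    return np.kron(ax, ay)          # response of the full NX*NY array

def music_2d_aoa(csi_snapshots, num_paths=1, grid=90):
    """csi_snapshots: (NX*NY, T) complex CSI samples across the array."""
    R = csi_snapshots @ csi_snapshots.conj().T / csi_snapshots.shape[1]
    eigvals, eigvecs = np.linalg.eigh(R)       # eigenvalues in ascending order
    noise_subspace = eigvecs[:, :-num_paths]   # discard the signal subspace
    az_grid = np.linspace(-np.pi / 2, np.pi / 2, grid)
    el_grid = np.linspace(0, np.pi / 2, grid)
    spectrum = np.zeros((grid, grid))
    for i, az in enumerate(az_grid):
        for j, el in enumerate(el_grid):
            a = steering_vector(az, el)
            denom = a.conj() @ noise_subspace @ noise_subspace.conj().T @ a
            spectrum[i, j] = 1.0 / np.abs(denom)   # MUSIC pseudospectrum
    i, j = np.unravel_index(np.argmax(spectrum), spectrum.shape)
    return np.degrees(az_grid[i]), np.degrees(el_grid[j])

if __name__ == "__main__":
    # Synthetic check: one reflection from azimuth 30 deg, elevation 20 deg.
    rng = np.random.default_rng(0)
    a_true = steering_vector(np.radians(30), np.radians(20))
    snapshots = np.outer(a_true, rng.standard_normal(200) + 1j * rng.standard_normal(200))
    snapshots += 0.1 * (rng.standard_normal(snapshots.shape) + 1j * rng.standard_normal(snapshots.shape))
    print(music_2d_aoa(snapshots))  # prints approximately (30, 20)
```

In practice, the per-direction power would be accumulated into an azimuth-elevation "image" of the environment, from which the human body region is segmented and passed to the mesh regression network; those downstream steps are not shown here.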

Citation (APA)

Wang, Y., Ren, Y., Chen, Y., & Yang, J. (2022). A WiFi vision-based 3D human mesh reconstruction. In Proceedings of the Annual International Conference on Mobile Computing and Networking (MobiCom) (pp. 814–816). Association for Computing Machinery. https://doi.org/10.1145/3495243.3558247
