In this work, we present Wi-Mesh, a WiFi vision-based 3D human mesh construction system. Our system leverages advances in WiFi sensing to visualize the shape and deformations of the human body for 3D mesh construction. In particular, it estimates the two-dimensional angle of arrival (2D AoA) of WiFi signal reflections to enable WiFi devices to "see" the physical environment as humans do. It then extracts only the images of the human body from the physical environment and leverages deep learning models to digitize the extracted human body into a 3D mesh representation. Experimental evaluation in various indoor environments shows that Wi-Mesh achieves an average vertex location error of 2.58 cm and a joint position error of 2.24 cm.
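As a rough illustration of the 2D AoA step, the sketch below computes a 2D MUSIC-style pseudo-spectrum over azimuth and elevation from CSI snapshots collected by a uniform rectangular antenna array. The array geometry, half-wavelength spacing, angle conventions, grid resolution, and the function names (steering_vector, music_2d_spectrum) are illustrative assumptions, not the estimator or array setup used by Wi-Mesh.

    import numpy as np

    def steering_vector(az, el, m, n, d=0.5):
        """Steering vector of an M x N uniform rectangular array.

        az/el are azimuth/elevation in radians (elevation measured from the
        array broadside; this convention is an assumption), d is element
        spacing in wavelengths.
        """
        mx, ny = np.meshgrid(np.arange(m), np.arange(n), indexing="ij")
        phase = 2j * np.pi * d * (mx * np.sin(el) * np.cos(az)
                                  + ny * np.sin(el) * np.sin(az))
        return np.exp(phase).reshape(-1, 1)

    def music_2d_spectrum(csi_snapshots, m, n, num_sources=1, grid=90):
        """2D MUSIC pseudo-spectrum over an azimuth/elevation grid.

        csi_snapshots has shape (m*n, T): one column per CSI measurement.
        """
        # Sample covariance and eigendecomposition (ascending eigenvalues).
        R = csi_snapshots @ csi_snapshots.conj().T / csi_snapshots.shape[1]
        eigval, eigvec = np.linalg.eigh(R)
        # Noise subspace: eigenvectors of the smallest eigenvalues.
        En = eigvec[:, : m * n - num_sources]
        azs = np.linspace(-np.pi / 2, np.pi / 2, grid)
        els = np.linspace(0, np.pi / 2, grid)
        spectrum = np.empty((grid, grid))
        for i, az in enumerate(azs):
            for j, el in enumerate(els):
                a = steering_vector(az, el, m, n)
                spectrum[i, j] = 1.0 / np.abs(
                    a.conj().T @ En @ En.conj().T @ a).item()
        return azs, els, spectrum

Peaks in the returned spectrum give coarse azimuth/elevation estimates of dominant reflections; in Wi-Mesh, such 2D AoA estimates are the basis for forming the body "images" that are then passed to the deep learning model for mesh digitization.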
Wang, Y., Ren, Y., Chen, Y., & Yang, J. (2022). A WiFi vision-based 3D human mesh reconstruction. In Proceedings of the Annual International Conference on Mobile Computing and Networking (MobiCom '22) (pp. 814–816). Association for Computing Machinery. https://doi.org/10.1145/3495243.3558247