Perceiving humans in the context of Intelligent Transportation Systems (ITS) often relies on multiple cameras or expensive LiDAR sensors. In this work, we present a new cost-effective vision-based method that perceives humans' locations in 3D and their body orientation from a single image. We address the ill-posed nature of monocular 3D tasks by proposing a neural network architecture that predicts confidence intervals rather than point estimates. Our neural network estimates human 3D body locations and orientations together with a measure of uncertainty. Our proposed solution (i) is privacy-safe, (ii) works with any fixed or moving camera, and (iii) does not rely on ground plane estimation. We demonstrate the performance of our method on three applications: locating humans in 3D, detecting social interactions, and verifying compliance with the safety measures introduced during the COVID-19 outbreak. We show that it is possible to rethink the concept of 'social distancing' as a form of social interaction rather than a simple location-based rule. We publicly share the source code in support of open science.
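To illustrate the idea of predicting confidence intervals rather than point estimates, the following is a minimal sketch, assuming the network parameterizes each distance prediction as a Laplace distribution (location `mu`, spread `b`) trained with a negative log-likelihood loss; the function names and the choice of distribution here are illustrative assumptions, not the authors' exact architecture.

```python
import numpy as np

def laplace_nll(mu, log_b, target):
    """Negative log-likelihood of a Laplace(mu, b) prediction.
    Minimizing it jointly fits the estimate mu and the spread b,
    so large errors on hard samples are absorbed by a larger b."""
    b = np.exp(log_b)
    return np.mean(np.abs(target - mu) / b + log_b)

def confidence_interval(mu, b, level=0.95):
    """Symmetric interval around mu that contains `level` of the
    Laplace probability mass: half-width = b * ln(1 / (1 - level))."""
    half = b * np.log(1.0 / (1.0 - level))
    return mu - half, mu + half

# Example: a predicted distance of 10 m with spread b = 0.5 m
lo, hi = confidence_interval(10.0, 0.5)
```

At inference time the interval width serves as a per-detection uncertainty estimate, which is what makes a single-image (ill-posed) 3D localization output actionable for downstream tasks such as distance-based interaction checks.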
Bertoni, L., Kreiss, S., & Alahi, A. (2022). Perceiving Humans: From Monocular 3D Localization to Social Distancing. IEEE Transactions on Intelligent Transportation Systems, 23(7), 7401–7418. https://doi.org/10.1109/TITS.2021.3069376