In this work, we investigate the coordination of human-machine interactions from a bird’s-eye view using a single panoramic color camera. Our approach replaces conventional hardware sensors, such as light barriers and switches, with location-aware virtual regions. We employ recent pose estimation methods to detect human and robot joint configurations. By fusing the resulting 2D human and robot poses with prior scene knowledge, we lift these perceptions into a metric 3D space. In this way, our system can trigger environmental reactions in response to geometric events among humans, robots, and virtual regions. We demonstrate the versatility and robustness of our system in three use cases.
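To make the idea concrete, below is a minimal sketch of the core geometric step the abstract describes: back-projecting a detected 2D foot keypoint onto a known ground plane (the prior scene knowledge) and testing the resulting metric 3D point against a virtual region that stands in for a physical light barrier. All names, calibration values, and the axis-aligned region test are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Calibrated pinhole intrinsics (example values; a real system would use
# the calibration of the panoramic camera).
K = np.array([[900.0,   0.0, 640.0],
              [  0.0, 900.0, 360.0],
              [  0.0,   0.0,   1.0]])
K_inv = np.linalg.inv(K)

# Ground plane in camera coordinates, n . X = d (prior scene knowledge);
# here the camera Y axis points down and the floor lies 1.5 m below it.
plane_n = np.array([0.0, -1.0, 0.0])
plane_d = -1.5

def lift_to_ground(uv):
    """Back-project a pixel onto the ground plane via ray-plane intersection."""
    ray = K_inv @ np.array([uv[0], uv[1], 1.0])
    t = plane_d / (plane_n @ ray)   # scale at which the viewing ray meets the floor
    return t * ray                  # metric 3D point in camera coordinates

# Virtual region on the floor (X/Z extent in metres), standing in for a
# physical light barrier.
REGION_X = (0.0, 2.0)
REGION_Z = (8.0, 11.0)

def in_region(p):
    return (REGION_X[0] <= p[0] <= REGION_X[1]
            and REGION_Z[0] <= p[2] <= REGION_Z[1])

def on_frame(foot_keypoints_2d, trigger):
    """Fire the reaction once any detected foot keypoint enters the region."""
    for uv in foot_keypoints_2d:
        if in_region(lift_to_ground(uv)):
            trigger()
            break

# Example: slow the robot when a person steps into the zone.
on_frame([(700.0, 500.0)], trigger=lambda: print("human in zone: slow robot"))
```

In a deployment, `on_frame` would run per camera frame on keypoints produced by a pose estimator, with the trigger debounced; the ray-plane intersection is the step that turns pixel coordinates into metres and makes the virtual region geometrically meaningful.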