The efficient computation of viewpoints while considering various system and process constraints is a common challenge that any robot vision system is confronted with when trying to execute a vision task. Although fundamental research has provided solid and sound solutions for tackling this problem, a holistic framework that poses its formal description, considers the heterogeneity of robot vision systems, and offers an integrated solution remains unaddressed. Hence, this publication outlines the generation of viewpoints as a geometrical problem and introduces a generalized theoretical framework based on Feature-Based Constrained Spaces (C-spaces) as the backbone for solving it. A C-space can be understood as the topological space that a viewpoint constraint spans, where the sensor can be positioned for acquiring a feature while fulfilling the constraint. The present study demonstrates that many viewpoint constraints can be efficiently formulated as C-spaces, providing geometric, deterministic, and closed solutions. The introduced C-spaces are characterized based on generic domain and viewpoint constraint models to ease the transferability of the present framework to different applications and robot vision systems. The effectiveness and efficiency of the concepts introduced are verified on a simulation-based scenario and validated on a real robot vision system comprising two different sensors.
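The core idea can be illustrated with a minimal sketch (not the paper's implementation): each viewpoint constraint is modeled as a predicate over candidate sensor positions, a discretized C-space is the subset of positions satisfying that predicate, and valid viewpoints lie in the intersection of all C-spaces. The specific constraints below (a working-distance band and a half-space above the feature) and all parameter values are illustrative assumptions.

```python
# Hedged sketch of C-spaces as sets of constraint-satisfying viewpoints.
# Constraint forms and numeric values are assumptions for illustration.
import math
from itertools import product

def c_space(constraint, candidates):
    """Discretized C-space: candidate viewpoints fulfilling one constraint."""
    return {p for p in candidates if constraint(p)}

# Feature located at the origin; candidate viewpoints on a coarse 3-D grid.
feature = (0.0, 0.0, 0.0)
grid = list(product(range(-5, 6), repeat=3))

# Constraint 1 (assumed): sensor working distance between 2 and 4 units.
def within_working_distance(p, lo=2.0, hi=4.0):
    return lo <= math.dist(p, feature) <= hi

# Constraint 2 (assumed): sensor must stay above the part (z > 0).
def above_feature(p):
    return p[2] > 0

# Individual C-spaces and their intersection: viewpoints that fulfill
# all constraints simultaneously.
cs_distance = c_space(within_working_distance, grid)
cs_halfspace = c_space(above_feature, grid)
valid_viewpoints = cs_distance & cs_halfspace
```

In this toy form, each constraint yields its own set, and set intersection plays the role of combining constraints; the framework in the paper instead derives geometric, closed-form descriptions of these spaces rather than sampling a grid.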
Citation:
Magaña, A., Dirr, J., Bauer, P., & Reinhart, G. (2023). Viewpoint Generation Using Feature-Based Constrained Spaces for Robot Vision Systems. Robotics, 12(4). https://doi.org/10.3390/robotics12040108