In mobile robotics, visual tracking is an extremely important sub-problem. One solution to the problems arising from partial and total occlusion is the use of multiple robots. In this work, we propose three-dimensional target tracking based on constrained multi-robot visual data fusion under partial and total occlusion. To validate our approach, we first implemented non-cooperative visual tracking, where only the data from a single robot is used. Then, cooperative visual tracking was tested, where the data from a team of robots is fused using a particle filter. To evaluate both approaches, a visual tracking environment with partial and total occlusions was created, in which the tracking was performed by a team of robots. The results of the experiment show that the non-cooperative approach has a lower computational cost than the cooperative approach, but its inferred trajectory was impaired by the occlusions; this did not occur in the cooperative approach, thanks to the data fusion.
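The cooperative approach described above fuses position observations from several robots with a particle filter, so that robots with an unoccluded view compensate for teammates whose view is blocked. The sketch below is a minimal, hypothetical illustration of that idea (the function name, noise parameters, and random-walk motion model are assumptions, not the authors' implementation): each cycle predicts the particles forward, weights them by the likelihood of every available robot observation, and resamples.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_fusion(particles, weights, observations,
                           motion_std=0.05, obs_std=0.2):
    """One predict-update-resample cycle fusing 3-D target observations.

    particles:    (N, 3) hypothesised target positions
    weights:      (N,)   normalised particle weights
    observations: list of (3,) position estimates, one per robot that
                  currently sees the target (occluded robots contribute none)
    """
    # Predict: propagate particles with a simple random-walk motion model
    # (an assumption standing in for the paper's actual motion model).
    particles = particles + rng.normal(0.0, motion_std, particles.shape)

    # Update: multiply in a Gaussian likelihood for each robot's observation,
    # so every unoccluded robot jointly sharpens the estimate.
    for z in observations:
        d2 = np.sum((particles - z) ** 2, axis=1)
        weights = weights * np.exp(-0.5 * d2 / obs_std ** 2)
    weights = weights / np.sum(weights)

    # Systematic resampling to avoid weight degeneracy.
    positions = (np.arange(len(weights)) + rng.random()) / len(weights)
    idx = np.searchsorted(np.cumsum(weights), positions)
    particles = particles[idx]
    weights = np.full(len(weights), 1.0 / len(weights))

    # Point estimate: weighted mean of the particle cloud.
    estimate = np.average(particles, axis=0, weights=weights)
    return particles, weights, estimate
```

Under total occlusion for one robot, its observation is simply omitted from `observations`, and the remaining robots' measurements keep the filter anchored to the target.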
CITATION STYLE
Amorim, T. G. S., Souto, L. A., Nascimento, T. P. Do, & Saska, M. (2021). Multi-Robot Sensor Fusion Target Tracking with Observation Constraints. IEEE Access, 9, 52557–52568. https://doi.org/10.1109/ACCESS.2021.3070180