Person-following robots are increasingly deployed in real-world applications and require robust, accurate person identification for tracking. Recent works use re-identification metrics to identify the target person; however, these metrics generalize poorly and are vulnerable to impostors in the nonlinear, multi-modal real world. This work learns a domain-generic person re-identification metric to resolve these real-world challenges and to identify the target person under appearance changes as they move across different indoor and outdoor environments, or domains. The generic metric exploits a novel attention mechanism to learn deep cross-representations that address pose, viewpoint, and illumination variations, while jointly tackling impostors and the style variations the target person randomly undergoes across indoor and outdoor domains. As a result, it attains higher recognition accuracy for target-person identification in the complex multi-modal open-set world, achieving 80.73% and 64.44% Rank-1 identification on the multi-modal closed-set PRID and VIPeR domains, respectively.
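The Rank-1 figures quoted above measure the fraction of query images whose single nearest gallery match (under the learned metric) has the correct identity. A minimal sketch of how Rank-1 accuracy is computed from a query–gallery distance matrix follows; the function and variable names are illustrative, not from the paper:

```python
import numpy as np

def rank1_accuracy(dist, query_ids, gallery_ids):
    """Fraction of queries whose nearest gallery entry shares their identity.

    dist: (num_queries, num_gallery) matrix of metric distances.
    """
    nearest = np.argmin(dist, axis=1)  # index of closest gallery entry per query
    return float(np.mean(gallery_ids[nearest] == query_ids))

# Toy example: 3 queries against 4 gallery entries.
dist = np.array([
    [0.1, 0.9, 0.8, 0.7],   # query 0 is closest to gallery 0
    [0.6, 0.2, 0.9, 0.8],   # query 1 is closest to gallery 1
    [0.9, 0.8, 0.7, 0.3],   # query 2 is closest to gallery 3
])
query_ids = np.array([10, 20, 30])
gallery_ids = np.array([10, 20, 99, 30])

print(rank1_accuracy(dist, query_ids, gallery_ids))  # all three matched: 1.0
```

A reported score such as 80.73% Rank-1 on PRID means this quantity, evaluated over the full query set of that benchmark.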
CITATION STYLE
Syed, M. A., Ou, Y., Li, T., & Jiang, G. (2023). Lightweight Multimodal Domain Generic Person Reidentification Metric for Person-Following Robots. Sensors, 23(2). https://doi.org/10.3390/s23020813