Lightweight Multimodal Domain Generic Person Reidentification Metric for Person-Following Robots


Abstract

Recently, person-following robots have been increasingly used in many real-world applications, and they require robust and accurate person identification for tracking. Recent works have proposed re-identification metrics to identify the target person; however, these metrics generalize poorly and are vulnerable to impostors in the nonlinear, multi-modal real world. This work learns a domain-generic person re-identification metric to resolve these real-world challenges and to identify the target person as their appearance changes across different indoor and outdoor environments, or domains. Our generic metric exploits a novel attention mechanism to learn deep cross-representations that address pose, viewpoint, and illumination variations while jointly tackling impostors and the style variations the target person randomly undergoes across indoor and outdoor domains. As a result, our generic metric attains higher recognition accuracy for target-person identification in the complex multi-modal open-set world, and achieves 80.73% and 64.44% Rank-1 identification on the multi-modal close-set PRID and VIPeR domains, respectively.
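The core pipeline the abstract describes — attention-weighted feature embeddings compared with a distance metric, with Rank-1 accuracy meaning the closest gallery match is the correct identity — can be illustrated with a minimal sketch. This is not the paper's actual model: the `channel_attention` function below is a hypothetical stand-in for the learned attention mechanism, and cosine distance stands in for the learned metric.

```python
import numpy as np

def channel_attention(features):
    """Toy channel attention: weight each feature dimension by the
    softmax of its mean absolute activation over all vectors.
    (Hypothetical stand-in for the paper's learned attention.)"""
    mean_act = np.abs(features).mean(axis=0)
    weights = np.exp(mean_act) / np.exp(mean_act).sum()
    return features * weights

def reid_distances(query, gallery):
    """Cosine distance between an attention-weighted query vector and
    each gallery vector; the smallest distance is the Rank-1 match."""
    feats = channel_attention(np.vstack([query, gallery]))
    q, g = feats[0], feats[1:]
    sims = (g @ q) / (np.linalg.norm(g, axis=1) * np.linalg.norm(q) + 1e-12)
    return 1.0 - sims

rng = np.random.default_rng(0)
gallery = rng.normal(size=(5, 8))                # five gallery identities
query = gallery[2] + 0.05 * rng.normal(size=8)   # noisy view of identity 2
dists = reid_distances(query, gallery)
print(int(np.argmin(dists)))                     # index of the Rank-1 match
```

In a person-following robot, the gallery would hold embeddings of the enrolled target (and distractors), and an open-set threshold on the best distance would reject impostors rather than always returning the nearest identity.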

Cite

CITATION STYLE

APA

Syed, M. A., Ou, Y., Li, T., & Jiang, G. (2023). Lightweight Multimodal Domain Generic Person Reidentification Metric for Person-Following Robots. Sensors, 23(2). https://doi.org/10.3390/s23020813
