ThermalGAN: multimodal color-to-thermal image translation for person re-identification in multispectral dataset


Abstract

We propose the ThermalGAN framework for cross-modality color-thermal person re-identification (ReID). We use a stack of generative adversarial networks (GANs) to translate a single color probe image into a multimodal thermal probe set. We use thermal histograms and feature descriptors as a thermal signature. For extensive training of our GAN model, we collected the large-scale multispectral ThermalWorld dataset. In total, the dataset includes 20,216 color-thermal image pairs, 516 person IDs, and ground-truth pixel-level object annotations. We have made the dataset freely available (http://www.zefirus.org/ThermalGAN/). We evaluate our framework on the ThermalWorld dataset and show that it delivers robust matching that competes with and surpasses the state of the art in cross-modality color-thermal ReID.
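
The abstract does not spell out the matching step, so the sketch below is only a loose illustration of how histogram-based thermal-signature matching of this kind could look: it computes a normalized intensity histogram as a signature and ranks gallery identities by Bhattacharyya distance against a multimodal probe set. The function names, the bin count, and the assumption that thermal images are pre-scaled to [0, 1] are ours, not the authors'.

    import numpy as np

    def thermal_signature(thermal_img, bins=64):
        # Hypothetical signature: a normalized intensity histogram of a
        # thermal image, assumed pre-scaled to the [0, 1] range.
        hist, _ = np.histogram(thermal_img, bins=bins, range=(0.0, 1.0))
        hist = hist.astype(np.float64)
        return hist / max(hist.sum(), 1e-12)

    def signature_distance(sig_a, sig_b):
        # Bhattacharyya distance between two normalized histograms.
        bc = np.sum(np.sqrt(sig_a * sig_b))  # Bhattacharyya coefficient
        return -np.log(max(bc, 1e-12))

    def rank_gallery(probe_set, gallery):
        # Rank gallery identities by the best (minimum) distance between
        # any synthesized thermal probe and the gallery image.
        scores = {}
        for person_id, gallery_img in gallery.items():
            g_sig = thermal_signature(gallery_img)
            scores[person_id] = min(
                signature_distance(thermal_signature(p), g_sig)
                for p in probe_set
            )
        return sorted(scores.items(), key=lambda kv: kv[1])

    # Usage: probe_set would hold the multimodal thermal images synthesized
    # by the GAN stack from one color probe; gallery maps person IDs to
    # real thermal captures.
    # ranking = rank_gallery(probe_set, gallery)

Taking the minimum distance over the probe set reflects the multimodal idea in the abstract: any one of the synthesized thermal conditions may be the one that matches the gallery capture.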

Citation (APA)

Kniaz, V. V., Knyaz, V. A., Hladůvka, J., Kropatsch, W. G., & Mizginov, V. (2019). ThermalGAN: multimodal color-to-thermal image translation for person re-identification in multispectral dataset. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11134 LNCS, pp. 606–624). Springer Verlag. https://doi.org/10.1007/978-3-030-11024-6_46
