CycleGAN-Based Image Translation for Near-Infrared Camera-Trap Image Recognition

Abstract

Because it is invisible, near-infrared (NIR) flash is widely used to photograph wild animals at night. Although animals can be captured on camera without being disturbed, the resulting gray NIR images lack color and texture information and are therefore difficult to analyze, for both humans and machines. In this paper, we propose using CycleGAN (a cycle-consistent generative adversarial network) to translate NIR images into the incandescent domain for visual quality enhancement. Example translations show that both color and texture are recovered well by the proposed CycleGAN model. The recognition performance of an SSD-based detector on the translated incandescent images is also significantly better than on the original NIR images. For wildebeest and zebra, for example, recognition accuracy improves by 16% and 8%, respectively.
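The paper does not ship an implementation, but the translation step it describes is a CycleGAN generator applied at inference time. The sketch below is a minimal PyTorch version, assuming the common ResNet-based generator of the original CycleGAN work (Zhu et al., 2017); the layer widths, 256x256 input size, and the random tensor standing in for an NIR frame are illustrative assumptions, not the authors' exact configuration.

# Minimal sketch of CycleGAN-style NIR -> incandescent inference in PyTorch.
# Architecture follows the standard CycleGAN ResNet generator; the paper
# itself publishes no code, so all hyperparameters here are assumptions.
import torch
import torch.nn as nn

class ResnetBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.block = nn.Sequential(
            nn.ReflectionPad2d(1),
            nn.Conv2d(dim, dim, kernel_size=3),
            nn.InstanceNorm2d(dim),
            nn.ReLU(inplace=True),
            nn.ReflectionPad2d(1),
            nn.Conv2d(dim, dim, kernel_size=3),
            nn.InstanceNorm2d(dim),
        )

    def forward(self, x):
        # Residual connection around two conv layers.
        return x + self.block(x)

class Generator(nn.Module):
    """NIR -> incandescent generator: downsample, ResNet blocks, upsample."""
    def __init__(self, in_ch=3, out_ch=3, ngf=64, n_blocks=6):
        super().__init__()
        layers = [
            nn.ReflectionPad2d(3),
            nn.Conv2d(in_ch, ngf, kernel_size=7),
            nn.InstanceNorm2d(ngf),
            nn.ReLU(inplace=True),
        ]
        # Two stride-2 downsampling convolutions (64 -> 128 -> 256 channels).
        for i in range(2):
            mult = 2 ** i
            layers += [
                nn.Conv2d(ngf * mult, ngf * mult * 2, 3, stride=2, padding=1),
                nn.InstanceNorm2d(ngf * mult * 2),
                nn.ReLU(inplace=True),
            ]
        # Residual bottleneck at the lowest resolution.
        layers += [ResnetBlock(ngf * 4) for _ in range(n_blocks)]
        # Two stride-2 transposed convolutions back to full resolution.
        for i in range(2):
            mult = 2 ** (2 - i)
            layers += [
                nn.ConvTranspose2d(ngf * mult, ngf * mult // 2, 3,
                                   stride=2, padding=1, output_padding=1),
                nn.InstanceNorm2d(ngf * mult // 2),
                nn.ReLU(inplace=True),
            ]
        layers += [nn.ReflectionPad2d(3),
                   nn.Conv2d(ngf, out_ch, kernel_size=7),
                   nn.Tanh()]  # outputs in [-1, 1], the usual GAN image range
        self.model = nn.Sequential(*layers)

    def forward(self, x):
        return self.model(x)

if __name__ == "__main__":
    g = Generator()  # in practice, load trained NIR->incandescent weights here
    nir = torch.rand(1, 3, 256, 256) * 2 - 1  # stand-in for a normalized NIR frame
    with torch.no_grad():
        fake_incandescent = g(nir)            # translated image, same shape
    print(fake_incandescent.shape)            # torch.Size([1, 3, 256, 256])

In the paper's pipeline, the translated incandescent-domain image would then be fed to the SSD-based detector instead of the raw NIR frame; that detector is a standard SSD and is not reproduced here.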

Citation (APA)

Gao, R., Zheng, S., He, J., & Shen, L. (2020). CycleGAN-Based Image Translation for Near-Infrared Camera-Trap Image Recognition. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12068 LNCS, pp. 453–464). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-59830-3_39
