The performance of object detection methods is adversely affected by low-quality images caused by fog. The main reasons are as follows: (i) object detection methods struggle to recognize and locate objects because only weakly discriminative features can be extracted from low-quality images; (ii) existing methods struggle to adapt to variable fog densities. The transmission map, an important component of the atmospheric scattering model that encodes both depth and fog density information, is the key to addressing these two problems. In this paper, we propose a novel network guided by the transmission map, termed TGNet, which mines depth information to infer the existence and locations of objects and mines fog density information to adapt to various fog densities. Experiments conducted on two real-world foggy object detection datasets (i.e., RTTS and FoggyDriving) demonstrate that our TGNet outperforms state-of-the-art methods. Additionally, our TGNet provides consistent improvements across various detection paradigms and backbones.
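For context, the transmission map the abstract refers to comes from the standard atmospheric scattering model, where a foggy image I(x) is formed as I(x) = J(x)·t(x) + A·(1 − t(x)), with clear scene J(x), global atmospheric light A, and transmission t(x) = exp(−β·d(x)) depending on scene depth d(x) and the scattering coefficient (fog density) β. The sketch below illustrates this model only; the function names are hypothetical and this is not the paper's TGNet code.

```python
import numpy as np

def transmission_map(depth, beta):
    """Transmission t(x) = exp(-beta * d(x)) from the standard
    atmospheric scattering model. Deeper pixels and denser fog
    (larger beta) yield lower transmission, i.e. heavier haze."""
    return np.exp(-beta * depth)

def foggy_image(clear, depth, beta, airlight=1.0):
    """Synthesize fog via I(x) = J(x) * t(x) + A * (1 - t(x)).

    clear:    H x W x 3 clean image in [0, 1]
    depth:    H x W per-pixel scene depth
    airlight: global atmospheric light A (scalar, assumed white)
    """
    t = transmission_map(depth, beta)[..., None]  # broadcast over channels
    return clear * t + airlight * (1.0 - t)
```

Because t(x) jointly encodes depth and fog density, a network that predicts or is guided by the transmission map implicitly receives both cues, which is the motivation the abstract gives for TGNet's design.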
Luo, Z., Xie, J., & Nie, J. (2023). Object Detection in Foggy Images with Transmission Map Guidance. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 14260 LNCS, pp. 150–162). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-44195-0_13