Low-light environments are integral to everyday activities but pose significant challenges for object detection: the low brightness, noise, and insufficient illumination of the acquired images degrade a model's detection performance. In contrast to recent studies, which mainly develop supervised learning models, this paper proposes LIDA-YOLO, an approach for unsupervised domain adaptation of low-illumination object detectors. The model extends YOLOv3 by using normal-illumination images as the source domain and low-illumination images as the target domain, achieving object detection in low-illumination images through an unsupervised learning strategy. Specifically, multi-scale local feature alignment and global feature alignment modules are proposed to align the overall attributes of the images, thereby reducing feature biases such as background, scene, and target layout. On the ExDark dataset, LIDA-YOLO achieves the highest mAP of 56.65% compared with several current state-of-the-art unsupervised domain adaptation object detection methods: an improvement of 4.04% over I3Net and 6.5% over OSHOT. LIDA-YOLO also improves by 2.7% over the supervised baseline method YOLOv3. Overall, the proposed LIDA-YOLO model requires fewer samples and exhibits stronger generalization than previous works.
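The abstract does not spell out the mechanism of the alignment modules, but unsupervised domain-adaptive detectors of this kind typically train a domain classifier adversarially against the feature extractor via a gradient reversal layer (GRL): the forward pass is the identity, while the backward pass flips the sign of the gradient so that the backbone learns domain-invariant features. The following is a minimal NumPy sketch of that standard mechanism (the function names and the scaling factor `lam` are illustrative assumptions, not identifiers from the paper):

```python
import numpy as np

def grl_forward(features):
    # Forward pass of a gradient reversal layer: the identity.
    # Features flow unchanged into the domain classifier.
    return features

def grl_backward(grad_from_domain_classifier, lam=1.0):
    # Backward pass: reverse the gradient and scale it by lam.
    # The reversed gradient pushes the feature extractor to
    # CONFUSE the domain classifier, encouraging features that
    # look the same for source (normal light) and target (low light).
    return -lam * grad_from_domain_classifier

# Toy demonstration: a gradient computed from the domain-classification
# loss is sign-flipped before it reaches the feature extractor.
grad_in = np.array([0.5, -0.2, 0.1])
grad_out = grl_backward(grad_in, lam=0.1)
print(grad_out)  # [-0.05  0.02 -0.01]
```

In a full detector this reversal would be applied at both the local (multi-scale feature map) and global (image-level) alignment branches, matching the two module types the abstract names.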
CITATION STYLE
Xiao, Y., & Liao, H. (2024). LIDA-YOLO: An unsupervised low-illumination object detection based on domain adaptation. IET Image Processing, 18(5), 1178–1188. https://doi.org/10.1049/ipr2.13017