SCMT: Self-Correction Mean Teacher for Semi-supervised Object Detection


Abstract

Semi-Supervised Object Detection (SSOD) aims to improve detection performance by leveraging large amounts of unlabeled data. Existing works usually adopt a teacher-student framework that enforces the student to learn predictions consistent with the pseudo-labels generated by the teacher. However, the performance of the student model is limited because noise is inherent in the pseudo-labels. In this paper, we investigate the causes and effects of noisy pseudo-labels and propose a simple yet effective approach, denoted Self-Correction Mean Teacher (SCMT), to reduce their adverse effects. Specifically, we dynamically re-weight the unsupervised loss of each student proposal using additional supervision information from the teacher model, assigning smaller loss weights to likely noisy proposals. Extensive experiments on the MS-COCO benchmark show the superiority of the proposed SCMT, which improves the supervised baseline by more than 11% mAP under the 1%, 5% and 10% COCO-standard settings and surpasses state-of-the-art methods by about 1.5% mAP. Even under the challenging COCO-additional setting, SCMT still improves the supervised baseline by 4.9% mAP and significantly outperforms previous methods by 1.2% mAP, achieving new state-of-the-art performance.
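The abstract describes per-proposal re-weighting of the unsupervised loss by a teacher-derived confidence signal. The snippet below is a minimal illustrative sketch of that general idea, not the authors' implementation: the function name `reweighted_unsup_cls_loss`, the use of the teacher's raw score as the weight, and all tensor shapes are assumptions made for illustration.

```python
# Minimal sketch of teacher-guided loss re-weighting (illustrative, not SCMT's code):
# each student proposal's unsupervised classification loss is scaled by a teacher
# confidence score, so likely-noisy proposals contribute less to the gradient.

import torch
import torch.nn.functional as F


def reweighted_unsup_cls_loss(student_logits: torch.Tensor,
                              pseudo_labels: torch.Tensor,
                              teacher_scores: torch.Tensor) -> torch.Tensor:
    """Weighted classification part of the unsupervised loss.

    student_logits: [N, C] class logits predicted by the student for N proposals.
    pseudo_labels:  [N]    class indices taken from the teacher's pseudo-boxes.
    teacher_scores: [N]    assumed teacher confidence per proposal in [0, 1],
                           used directly as the per-proposal loss weight here.
    """
    per_proposal_loss = F.cross_entropy(student_logits, pseudo_labels,
                                        reduction="none")           # [N]
    weights = teacher_scores.detach()        # no gradient flows into the teacher
    return (weights * per_proposal_loss).sum() / weights.sum().clamp(min=1e-6)


if __name__ == "__main__":
    # Toy usage with random tensors standing in for real proposals.
    N, C = 8, 81
    logits = torch.randn(N, C, requires_grad=True)
    labels = torch.randint(0, C, (N,))
    scores = torch.rand(N)                   # assumed teacher confidence values
    loss = reweighted_unsup_cls_loss(logits, labels, scores)
    loss.backward()
    print(float(loss))
```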

Citation (APA)

Xiong, F., Tian, J., Hao, Z., He, Y., & Ren, X. (2022). SCMT: Self-Correction Mean Teacher for Semi-supervised Object Detection. In IJCAI International Joint Conference on Artificial Intelligence (pp. 1488–1494). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2022/207
