Semi-supervised Semantic Segmentation with Mutual Knowledge Distillation

Abstract

Consistency regularization has been widely studied in recent semi-supervised semantic segmentation methods, and promising performance has been achieved. In this work, we propose a new consistency regularization framework, termed mutual knowledge distillation (MKD), combined with data and feature augmentation. We introduce two auxiliary mean-teacher models based on consistency regularization. More specifically, we use the pseudo-labels generated by the mean teacher of one branch to supervise the student network of the other, achieving mutual knowledge distillation between the two branches. In addition to image-level strong and weak augmentation, we also employ feature augmentation, which provides additional sources of knowledge for distilling the student network and thereby significantly increases the diversity of the training samples. Experiments on public benchmarks show that our framework outperforms previous state-of-the-art (SOTA) methods under various semi-supervised settings. Code is available at https://github.com/jianlong-yuan/semi-mmseg.
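The abstract describes a training loop in which each of two branches keeps a mean teacher (an exponential-moving-average copy of its student), and the pseudo-labels produced by one branch's teacher on a weakly augmented view supervise the other branch's student on a strongly augmented view. Below is a minimal, hypothetical PyTorch sketch of that cross-branch step. The tiny network and the names `TinySegNet`, `mkd_step`, and `ema_update`, as well as the confidence threshold, are illustrative assumptions and not the authors' implementation; see the linked repository for the actual code.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinySegNet(nn.Module):
    """Stand-in segmentation network (hypothetical; a real setup would
    use a full segmentation backbone)."""
    def __init__(self, in_ch=3, num_classes=21):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, num_classes, 1),
        )

    def forward(self, x):
        return self.net(x)


@torch.no_grad()
def ema_update(teacher, student, momentum=0.99):
    # Mean teacher: teacher weights are an exponential moving average
    # of the student weights.
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(momentum).add_(s, alpha=1.0 - momentum)


def mkd_step(students, teachers, optimizers, x_weak, x_strong,
             conf_thresh=0.95):
    """One unlabeled-batch step of cross-branch mutual distillation.

    The teacher of branch i pseudo-labels the weakly augmented view;
    those pseudo-labels supervise the *other* branch's student on the
    strongly augmented view of the same images.
    """
    losses = []
    for i in range(2):
        j = 1 - i  # cross-branch supervision
        with torch.no_grad():
            probs = F.softmax(teachers[i](x_weak), dim=1)
            conf, pseudo = probs.max(dim=1)       # (N, H, W)
            mask = (conf >= conf_thresh).float()  # drop low-confidence pixels
        logits = students[j](x_strong)
        pixel_loss = F.cross_entropy(logits, pseudo, reduction="none")
        loss = (pixel_loss * mask).mean()
        optimizers[j].zero_grad()
        loss.backward()
        optimizers[j].step()
        ema_update(teachers[j], students[j])
        losses.append(loss.item())
    return losses


# Toy usage: two branches, teachers initialized as frozen copies of students.
students = [TinySegNet(), TinySegNet()]
teachers = [copy.deepcopy(s) for s in students]
for t in teachers:
    for p in t.parameters():
        p.requires_grad_(False)
optimizers = [torch.optim.SGD(s.parameters(), lr=0.01) for s in students]
x_weak = torch.randn(2, 3, 32, 32)    # weakly augmented view
x_strong = torch.randn(2, 3, 32, 32)  # strongly augmented view of same batch
print(mkd_step(students, teachers, optimizers, x_weak, x_strong,
               conf_thresh=0.0))
```

A complete pipeline would also add the standard supervised cross-entropy loss on the labeled subset and the feature-level augmentation discussed in the abstract; both are omitted here for brevity.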

Cite

APA

Yuan, J., Ge, J., Wang, Z., & Liu, Y. (2023). Semi-supervised Semantic Segmentation with Mutual Knowledge Distillation. In MM 2023 - Proceedings of the 31st ACM International Conference on Multimedia (pp. 5436–5444). Association for Computing Machinery, Inc. https://doi.org/10.1145/3581783.3611906
