Abstract
Despite the great success of recent deep models on many image recognition tasks, directly applying them to recognize low-resolution images may suffer from low accuracy due to the loss of informative details during resolution degradation. However, these images are still recognizable to subjects who are familiar with the corresponding high-resolution ones. Inspired by that, we propose a teacher-student learning approach to facilitate low-resolution image recognition via hybrid order relational knowledge distillation. The approach comprises three streams: the teacher stream is pretrained to recognize high-resolution images with high accuracy, the student stream is learned to identify low-resolution images by mimicking the teacher's behaviors, and an extra assistant stream is introduced as a bridge to help transfer knowledge from the teacher to the student. To extract sufficient knowledge for reducing the loss in accuracy, the learning of the student is supervised with multiple losses, which preserve the similarities in various order relational structures. In this way, the capability of recovering missing details of familiar low-resolution images can be effectively enhanced, leading to a better knowledge transfer. Extensive experiments on metric learning, low-resolution image classification and low-resolution face recognition tasks show the effectiveness of our approach, while using reduced models.
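The relational supervision described above can be illustrated with a minimal sketch. The snippet below is an assumption-based illustration (not the paper's exact formulation): it treats teacher and student embeddings as NumPy arrays and computes a first-order loss (direct embedding mimicry) alongside a second-order loss that matches normalized pairwise distance structures, the kind of relation the distillation aims to preserve. The function name `relational_kd_losses` and the normalization scheme are hypothetical choices for this sketch.

```python
import numpy as np

def relational_kd_losses(teacher_emb, student_emb):
    """Sketch of hybrid-order relational distillation losses.

    First order: the student mimics teacher embeddings directly.
    Second order: the pairwise distance structure among samples
    is preserved (scale-invariant via mean normalization).
    """
    # First-order loss: mean squared error between embeddings.
    l1 = np.mean((teacher_emb - student_emb) ** 2)

    def normalized_pdist(x):
        # Pairwise Euclidean distances, normalized by their mean
        # so the relational structure is compared scale-free.
        d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
        return d / d[d > 0].mean()

    # Second-order loss: match the normalized distance matrices.
    l2 = np.mean((normalized_pdist(teacher_emb)
                  - normalized_pdist(student_emb)) ** 2)
    return l1, l2
```

For example, a student whose embeddings are a uniformly scaled copy of the teacher's incurs a nonzero first-order loss but a zero second-order loss, since the relative distance structure is unchanged; the paper's hybrid scheme combines such order-wise terms into the overall supervision.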
Citation
Ge, S., Zhang, K., Liu, H., Hua, Y., Zhao, S., Jin, X., & Wen, H. (2020). Look one and more: Distilling hybrid order relational knowledge for cross-resolution image recognition. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 10845–10852). AAAI press. https://doi.org/10.1609/aaai.v34i07.6715