Towards Universal Backward-Compatible Representation Learning

Abstract

Conventional model upgrades for visual search systems require offline refreshing of gallery features by feeding gallery images into the new model (dubbed "backfill"), which is time-consuming and expensive, especially in large-scale applications. The task of backward-compatible representation learning [Shen et al., 2020] is therefore introduced to support backfill-free model upgrades, where the new query features are interoperable with the old gallery features. Despite their success, previous works only investigated closed-set training scenarios (i.e., the new training set shares the same classes as the old one) and fall short in more realistic and challenging open-set scenarios. To this end, we first introduce a new problem of universal backward-compatible representation learning, covering all possible data splits in model upgrades. We further propose a simple yet effective method, dubbed Universal Backward-Compatible Training (UniBCT), with a novel structural prototype refinement algorithm, to learn compatible representations across all kinds of model-upgrade benchmarks in a unified manner. Comprehensive experiments on the large-scale face recognition datasets MS1Mv3 and IJB-C fully demonstrate the effectiveness of our method. Source code is available at https://github.com/TencentARC/OpenCompatible.
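To make the backfill-free setting concrete, here is a minimal sketch of cross-model retrieval: gallery features are precomputed once by the old model, while incoming queries are encoded by the new model and compared directly against them. The function and variable names (cosine_search, gallery_feats_old, query_feat_new) are hypothetical illustrations, not part of the paper's released code.

```python
import numpy as np

# Illustrative sketch of backfill-free retrieval. Backward-compatible training
# aims to make new-model query features directly comparable with gallery
# features extracted by the *old* model, so the gallery never has to be
# re-encoded ("backfilled") after a model upgrade.

def cosine_search(query_feat, gallery_feats, top_k=5):
    """Return indices of the top_k gallery features most similar to the query."""
    q = query_feat / np.linalg.norm(query_feat)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sims = g @ q                      # cosine similarity against every gallery item
    return np.argsort(-sims)[:top_k]  # highest-similarity indices first

# gallery_feats_old: features precomputed by the old model (never refreshed)
# query_feat_new:    feature of an incoming query, computed by the new model
gallery_feats_old = np.random.randn(10_000, 512).astype(np.float32)
query_feat_new = np.random.randn(512).astype(np.float32)

# If the new model was trained to be backward-compatible, this cross-model
# comparison is meaningful; otherwise the two feature spaces diverge and the
# whole gallery would have to be re-extracted with the new model.
top_matches = cosine_search(query_feat_new, gallery_feats_old)
print(top_matches)
```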

Cite (APA)
Zhang, B., Ge, Y., Shen, Y., Su, S., Wu, F., Yuan, C., … Shan, Y. (2022). Towards Universal Backward-Compatible Representation Learning. In IJCAI International Joint Conference on Artificial Intelligence (pp. 1615–1621). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2022/225
