Multi-Model Reuse is one of the prominent problems in the Learnware [Zhou, 2016] framework, and its central issue is how to obtain a final prediction from the responses of multiple pre-trained models. Unlike multi-classifier ensembles, the Multi-Model Reuse configuration provides only pre-trained models rather than the whole training sets. This configuration is closer to real applications, where the reliability of each model cannot be evaluated properly. In this paper, to address this lack of reliability evaluation, we exploit the potential consistency of pre-trained models across different modalities. Building on this consistency, we propose a Pre-trained Multi-Model Reuse approach (PM2r) for multi-modal data, which realizes the reusability of multiple models. PM2r combines pre-trained models efficiently without re-training, so no additional storage of training data is required. We describe this more realistic Multi-Model Reuse setting comprehensively and point out the differences between it, classifier ensembles, and late fusion in multi-modal learning. Experiments on synthetic and real-world datasets validate the effectiveness of PM2r compared with state-of-the-art ensemble and multi-modal learning methods under this more realistic setting.
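To make the general idea concrete, the following is a minimal sketch of one way to turn cross-model agreement on unlabeled multi-modal test data into reliability weights and a weighted vote. It is an illustrative assumption for this summary, not the PM2r algorithm from the paper; the agreement-with-majority weighting rule and all names below are hypothetical.

```python
# Illustrative sketch (not the PM2r method): weight each pre-trained,
# per-modality model by how often its predictions agree with the majority
# of the other models on unlabeled test data, then combine predictions
# with a consistency-weighted vote.
import numpy as np

def consistency_weighted_vote(predictions):
    """predictions: (n_models, n_samples) array of class labels,
    one row per pre-trained model applied to its own modality."""
    predictions = np.asarray(predictions)
    n_models, n_samples = predictions.shape

    # Majority label over all models for each sample.
    majority = np.array([np.bincount(predictions[:, i]).argmax()
                         for i in range(n_samples)])

    # Consistency of each model = fraction of samples on which it agrees
    # with the majority; used as its (unnormalized) reliability weight.
    weights = (predictions == majority).mean(axis=1)

    # Final prediction: consistency-weighted vote per sample.
    n_classes = predictions.max() + 1
    votes = np.zeros((n_samples, n_classes))
    for m in range(n_models):
        votes[np.arange(n_samples), predictions[m]] += weights[m]
    return votes.argmax(axis=1), weights

# Example: three pre-trained models (one per modality) on five samples.
preds = [[0, 1, 1, 0, 2],
         [0, 1, 1, 0, 2],
         [2, 1, 0, 0, 1]]   # a less consistent model gets a lower weight
labels, w = consistency_weighted_vote(preds)
print(labels, w)
```

Note that this sketch only needs the models' outputs on test data, which matches the setting described above: no training data or re-training is involved.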
Citation
Yang, Y., Zhan, D.-C., Guo, X.-Y., & Jiang, Y. (2017). Modal consistency based pre-trained Multi-Model Reuse. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI 2017) (pp. 3287–3293). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2017/459