RDPD: Rich data helps poor data via imitation

Abstract

In many situations, we need to build and deploy separate models in related environments with different data qualities. For example, an environment with strong observation equipment (e.g., an intensive care unit) often provides high-quality, multi-modal data acquired from multiple sensory devices and carrying rich feature representations. In contrast, an environment with poor observation equipment (e.g., at home) provides only low-quality, uni-modal data with poor feature representations. To deploy a competitive model in the poor-data environment without requiring direct access to the multi-modal data acquired in the rich-data environment, this paper presents a knowledge distillation (KD) method, RDPD, that enhances a predictive model trained on poor data using knowledge distilled from a high-complexity model trained on rich, private data. We evaluated RDPD on three real-world datasets and showed that its distilled model consistently outperformed all baselines across all datasets. In particular, it improved over a model trained only on low-quality data by 24.56% on PR-AUC and 12.21% on ROC-AUC, and over a state-of-the-art KD model by 5.91% on PR-AUC and 4.44% on ROC-AUC.
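The abstract describes a teacher-student setup: a high-complexity model trained on rich, multi-modal data guides a simpler model that only sees the poor, uni-modal data. The sketch below shows a generic knowledge-distillation loss of the kind this family of methods builds on, assuming a PyTorch setup; the temperature T and mixing weight alpha are illustrative hyperparameters, and this is not the authors' exact RDPD objective, only the standard soft-target imitation idea it extends.

```python
# Minimal sketch of a generic teacher-student distillation loss (PyTorch).
# Illustrative only; not the RDPD objective from the paper.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Combine hard-label cross-entropy with imitation of the teacher's soft targets.

    student_logits: outputs of the model trained on poor (uni-modal) data
    teacher_logits: outputs of the high-complexity model trained on rich data
    labels:         ground-truth class indices
    T, alpha:       hypothetical temperature and mixing weight for illustration
    """
    # Soft-target term: KL divergence between temperature-scaled distributions,
    # rescaled by T^2 as in standard distillation so gradients keep their magnitude.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-label term: ordinary cross-entropy on the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```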

Citation (APA)

Hong, S., Xiao, C., Hoang, T. N., Ma, T., Li, H., & Sun, J. (2019). RDPD: Rich data helps poor data via imitation. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2019-August, pp. 5895–5901). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2019/817
