Dual supervised learning

Abstract

Many supervised learning tasks emerge in dual forms, e.g., English-to-French translation vs. French-to-English translation, speech recognition vs. text-to-speech, and image classification vs. image generation. Two dual tasks have intrinsic connections with each other due to the probabilistic correlation between their models. This connection, however, is not effectively utilized today, since the models of two dual tasks are usually trained separately and independently. In this work, we propose training the models of two dual tasks simultaneously, explicitly exploiting the probabilistic correlation between them to regularize the training process. For ease of reference, we call the proposed approach dual supervised learning. We demonstrate that dual supervised learning can improve the practical performance of both tasks in various applications, including machine translation, image processing, and sentiment analysis.
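The probabilistic correlation the abstract refers to is the ideal joint-probability equality P(x)P(y|x; θ_xy) = P(y)P(x|y; θ_yx) between the primal model (x→y) and the dual model (y→x). A minimal sketch of a regularizer that penalizes violations of this equality in log space is shown below; the function name and the toy probability values are illustrative assumptions, not taken from the paper's code.

```python
import numpy as np

def duality_regularizer(log_px, log_py_given_x, log_py, log_px_given_y):
    """Hypothetical sketch of a dual supervised learning penalty.

    Measures, in log space, how far the two models are from the
    ideal duality P(x) * P(y|x) = P(y) * P(x|y), as a squared gap.
    """
    gap = (log_px + log_py_given_x) - (log_py + log_px_given_y)
    return gap ** 2

# Consistent models: 0.2 * 0.5 == 0.4 * 0.25, so the penalty is ~0.
consistent = duality_regularizer(np.log(0.2), np.log(0.5),
                                 np.log(0.4), np.log(0.25))

# Inconsistent models: 0.2 * 0.5 != 0.4 * 0.1, so the penalty is positive.
inconsistent = duality_regularizer(np.log(0.2), np.log(0.5),
                                   np.log(0.4), np.log(0.1))
```

In training, a term like this (weighted by a hyperparameter) would be added to each task's supervised loss, so that gradient updates push both models toward mutual consistency as well as toward fitting their own labels.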

Citation (APA)
Xia, Y., Qin, T., Chen, W., Bian, J., Yu, N., & Liu, T. Y. (2017). Dual supervised learning. In 34th International Conference on Machine Learning, ICML 2017 (Vol. 8, pp. 5792–5803). International Machine Learning Society (IMLS). https://doi.org/10.1007/978-981-15-8884-6_7
