Abstract
Advances in transfer learning have eased the dependence of traditional supervised machine learning algorithms on annotated training data when building models for new domains. However, many applications require models to transfer or adapt across domains whose label sets differ both in the number of labels and in their connotations. This paper presents a first-of-its-kind transfer learning algorithm for cross-domain classification with multiple source domains and disparate label sets. The algorithm first identifies transferable knowledge across the source domains that is useful for learning the target domain task. This knowledge, in the form of selectively chosen labeled instances from the different domains, is aggregated into an auxiliary training set, which is then used to learn the target domain task. Experimental results validate the efficacy of the proposed algorithm against strong baselines on a real-world social media dataset and the 20 Newsgroups dataset.
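The pipeline sketched in the abstract (select labeled instances from multiple source domains, pool them into an auxiliary training set, and train the target classifier on it) can be illustrated as follows. This is a minimal interpretation, not the authors' implementation: the choice of cosine similarity to the target centroid as the instance-selection criterion, and all names below, are assumptions made for the sketch.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import cosine_similarity

def build_auxiliary_set(source_domains, target_texts, vectorizer, top_k=1):
    """Pool the source instances most similar to the target domain.

    Similarity to the target centroid is an illustrative stand-in for
    the paper's transferable-knowledge identification step.
    """
    centroid = np.asarray(vectorizer.transform(target_texts).mean(axis=0))
    aux_texts, aux_labels = [], []
    for texts, labels in source_domains:
        sims = cosine_similarity(vectorizer.transform(texts), centroid).ravel()
        keep = np.argsort(sims)[::-1][:top_k]  # most target-like instances
        aux_texts.extend(texts[i] for i in keep)
        aux_labels.extend(labels[i] for i in keep)
    return aux_texts, aux_labels

# Two toy source domains with their own (hypothetical) label sets.
source_a = (["cheap flight deals today", "book a hotel room now"],
            ["travel", "travel"])
source_b = (["new gpu benchmark results", "python release notes"],
            ["tech", "tech"])
target_texts = ["latest laptop review", "compiler update announced"]

vectorizer = TfidfVectorizer().fit(source_a[0] + source_b[0] + target_texts)
aux_texts, aux_labels = build_auxiliary_set(
    [source_a, source_b], target_texts, vectorizer, top_k=1)

# Learn the target-domain task from the auxiliary training set.
clf = LogisticRegression().fit(vectorizer.transform(aux_texts), aux_labels)
```

In this sketch each source domain contributes its most target-like instances, so the auxiliary set mixes instances (and label vocabularies) from all sources; the paper's actual selection and label-mapping machinery is more involved.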
Citation
Bhatt, H. S., Sinha, M., & Roy, S. (2016). Cross-domain text classification with multiple domains and disparate label sets. In 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016 - Long Papers (Vol. 3, pp. 1641–1650). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/p16-1155