Although the distributional hypothesis has been applied successfully in many natural language processing tasks, systems using distributional information have been limited to a single domain because the distribution of a word can vary between domains as the word's predominant meaning changes. However, if it were possible to predict how the distribution of a word changes from one domain to another, the predictions could be used to adapt a system trained in one domain to work in another. We propose an unsupervised method to predict the distribution of a word in one domain, given its distribution in another domain. We evaluate our method on two tasks: cross-domain part-of-speech tagging and cross-domain sentiment classification. In both tasks, our method significantly outperforms competitive baselines and returns results that are statistically comparable to current state-of-the-art methods, while requiring no task-specific customisations. © 2014 Association for Computational Linguistics.
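To make the notion of a word's distribution concrete: in distributional approaches a word is typically represented by the distribution of the context words it co-occurs with, and that distribution can differ sharply between domains as the word's predominant usage shifts. The sketch below is purely illustrative and is not the paper's method; the toy corpora, window size, and normalisation are assumptions made for the example.

```python
from collections import Counter

def cooccurrence_distribution(corpus, target, window=2):
    """Estimate the distribution of context words around `target`
    from a list of tokenised sentences (illustrative only)."""
    counts = Counter()
    for sentence in corpus:
        for i, token in enumerate(sentence):
            if token != target:
                continue
            # Collect words within `window` positions of the target occurrence.
            lo, hi = max(0, i - window), min(len(sentence), i + window + 1)
            counts.update(w for j, w in enumerate(sentence[lo:hi], start=lo)
                          if j != i)
    total = sum(counts.values()) or 1
    return {w: c / total for w, c in counts.items()}

# Hypothetical toy corpora from two domains; in practice these would be
# large unlabelled corpora drawn from the source and target domains.
dvd_reviews = [["an", "excellent", "thriller", "with", "an", "excellent", "cast"]]
kitchen_reviews = [["an", "excellent", "blender", "for", "smoothies"]]

print(cooccurrence_distribution(dvd_reviews, "excellent"))
print(cooccurrence_distribution(kitchen_reviews, "excellent"))
```

The same word ("excellent" here) ends up with different context distributions in the two domains; the contribution of the paper is a method that learns to predict the target-domain distribution from the source-domain one, which this sketch does not attempt.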
CITATION STYLE
Bollegala, D., Weir, D., & Carroll, J. (2014). Learning to predict distributions of words across domains. In 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014 - Proceedings of the Conference (Vol. 1, pp. 613–623). Association for Computational Linguistics (ACL). https://doi.org/10.3115/v1/p14-1058