Learning to predict distributions of words across domains


Abstract

Although the distributional hypothesis has been applied successfully in many natural language processing tasks, systems using distributional information have been limited to a single domain because the distribution of a word can vary between domains as the word's predominant meaning changes. However, if it were possible to predict how the distribution of a word changes from one domain to another, the predictions could be used to adapt a system trained in one domain to work in another. We propose an unsupervised method to predict the distribution of a word in one domain, given its distribution in another domain. We evaluate our method on two tasks: cross-domain part-of-speech tagging and cross-domain sentiment classification. In both tasks, our method significantly outperforms competitive baselines and returns results that are statistically comparable to current state-of-the-art methods, while requiring no task-specific customisations. © 2014 Association for Computational Linguistics.
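To make the prediction task concrete, here is a minimal sketch of one way cross-domain distribution prediction could be set up. It is not the paper's actual unsupervised method; it simply illustrates the idea of learning a mapping from source-domain word-distribution vectors to target-domain vectors using words observed in both domains, then applying that mapping to words seen only in the source domain. The variable names, the use of ridge regression, and the synthetic data are all assumptions for illustration.

```python
import numpy as np

# Hypothetical sketch (not the authors' method): learn a linear map M
# from source-domain distribution vectors to target-domain vectors,
# using words that occur in both domains as training pairs.

rng = np.random.default_rng(0)

n_shared, n_features = 50, 20
X_src = rng.random((n_shared, n_features))   # source-domain vectors of shared words
true_map = rng.random((n_features, n_features))
X_tgt = X_src @ true_map                     # their target-domain vectors (synthetic)

# Ridge regression in closed form: M = (X^T X + lam I)^-1 X^T Y
lam = 1e-3
M = np.linalg.solve(X_src.T @ X_src + lam * np.eye(n_features),
                    X_src.T @ X_tgt)

# Predict the target-domain vector of a word seen only in the source domain.
x_new = rng.random(n_features)
y_pred = x_new @ M
```

The predicted vector `y_pred` could then stand in for the missing target-domain distribution when adapting a downstream system such as a tagger or sentiment classifier.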

APA

Bollegala, D., Weir, D., & Carroll, J. (2014). Learning to predict distributions of words across domains. In 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014 - Proceedings of the Conference (Vol. 1, pp. 613–623). Association for Computational Linguistics (ACL). https://doi.org/10.3115/v1/p14-1058
