Abstract
We show that asymmetric models based on Tversky (1977) improve correlations with human similarity judgments and nearest neighbor discovery for both frequent and middle-rank words. In accord with Tversky's finding that asymmetric similarity judgments arise when comparing sparse and rich representations, the improvement on our two tasks can be traced to heavily weighting the feature bias toward the rarer word when comparing high- and mid-frequency words.
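As a rough illustration of the kind of asymmetric measure the abstract refers to, below is a minimal sketch of Tversky's ratio-model similarity applied to weighted feature vectors. The function name, the use of min() for shared feature mass, and the alpha/beta values are illustrative assumptions, not the paper's exact formulation or tuned parameters.

from typing import Dict

def tversky_sim(a: Dict[str, float], b: Dict[str, float],
                alpha: float = 0.2, beta: float = 0.8) -> float:
    # Ratio model: shared / (shared + alpha * a_only + beta * b_only).
    # alpha != beta makes the measure asymmetric; here the distinctive
    # features of the second argument are weighted more heavily.
    shared = sum(min(a[f], b[f]) for f in a.keys() & b.keys())
    a_only = sum(a[f] for f in a.keys() - b.keys())
    b_only = sum(b[f] for f in b.keys() - a.keys())
    denom = shared + alpha * a_only + beta * b_only
    return shared / denom if denom else 0.0

# Example (hypothetical feature weights): a rich and a sparse representation.
rich = {"run": 3.0, "walk": 2.0, "fast": 1.0, "jog": 0.5}
rare = {"run": 1.0, "jog": 0.8}
print(tversky_sim(rich, rare))  # differs from tversky_sim(rare, rich)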
Citation
Gawron, J. M. (2014). Improving sparse word similarity models with asymmetric measures. In 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014 - Proceedings of the Conference (Vol. 2, pp. 296–301). Association for Computational Linguistics (ACL). https://doi.org/10.3115/v1/p14-2049