Abstract
Distributional semantic models have trouble distinguishing strongly contrasting words (such as antonyms) from highly compatible ones (such as synonyms), because both kinds tend to occur in similar contexts in corpora. We introduce the multitask Lexical Contrast Model (mLCM), an extension of the effective Skip-gram method that optimizes semantic vectors on the joint tasks of predicting corpus contexts and making the representations of WordNet synonyms closer than those of matching WordNet antonyms. mLCM outperforms Skip-gram both on general semantic tasks and on synonym/antonym discrimination, even when no direct lexical contrast information about the test words is provided during training. mLCM also shows promising results on the task of learning a compositional negation operator mapping adjectives to their antonyms.
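To make the joint objective concrete, here is a minimal sketch of one plausible form of such a multitask loss, assuming a standard Skip-gram term plus a max-margin lexical contrast term; the notation (the margin δ, the synonym and antonym sets S(w) and A(w)) is ours, and the paper's exact formulation may differ:

J(\theta) = J_{\mathrm{sg}}(\theta) - J_{\mathrm{lc}}(\theta)

J_{\mathrm{sg}}(\theta) = \frac{1}{T} \sum_{t=1}^{T} \sum_{\substack{-c \le j \le c \\ j \ne 0}} \log p(w_{t+j} \mid w_t)

J_{\mathrm{lc}}(\theta) = \frac{1}{T} \sum_{t=1}^{T} \sum_{u \in S(w_t)} \sum_{v \in A(w_t)} \max\!\big(0,\; \delta - \cos(w_t, u) + \cos(w_t, v)\big)

Here S(w) and A(w) are the WordNet synonym and antonym sets of w, and cos(·,·) is cosine similarity between the corresponding vectors. Training maximizes J, so the hinge term contributes gradient only while some antonym of w_t falls within δ (in cosine) of a synonym, pushing synonym vectors together and antonym vectors apart; because the Skip-gram term is trained jointly on shared embeddings, this contrast signal can also propagate to words with no lexical contrast supervision of their own.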
Citation
Pham, N. T., Lazaridou, A., & Baroni, M. (2015). A multitask objective to inject lexical contrast into distributional semantics. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Vol. 2, pp. 21–26). Association for Computational Linguistics. https://doi.org/10.3115/v1/P15-2004