How to train good word embeddings for biomedical NLP


Abstract

The quality of word embeddings depends on the input corpora, model architectures, and hyper-parameter settings. Using the state-of-the-art neural embedding tool word2vec and both intrinsic and extrinsic evaluations, we present a comprehensive study of how the quality of embeddings changes according to these features. Apart from identifying the most influential hyper-parameters, we also observe one that creates contradictory results between intrinsic and extrinsic evaluations. Furthermore, we find that bigger corpora do not necessarily produce better biomedical domain word embeddings. We make our evaluation tools and resources as well as the created state-of-the-art word embeddings available under open licenses from https://github.com/cambridgeltl/BioNLP-2016.
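The embeddings studied in the abstract are trained with word2vec, whose core objective is skip-gram with negative sampling. As a rough illustration of that objective (not the paper's code, and with toy data, dimensions, and hyper-parameter values chosen here purely for demonstration), a minimal pure-Python sketch might look like:

```python
# Minimal skip-gram with negative sampling -- an illustrative sketch of
# the word2vec training objective, not the authors' implementation.
import math
import random

random.seed(0)

# Toy stand-in for a tokenized biomedical corpus (e.g. PubMed abstracts).
corpus = [
    ["protein", "kinase", "phosphorylates", "the", "substrate"],
    ["gene", "expression", "is", "regulated", "by", "the", "promoter"],
    ["the", "kinase", "binds", "the", "receptor"],
]

vocab = sorted({w for sent in corpus for w in sent})
# Hyper-parameters of the kind the paper sweeps (values here are arbitrary).
dim, window, negatives, lr, epochs = 50, 2, 5, 0.05, 30

# Separate input ("center") and output ("context") vectors per word.
W_in = {w: [random.uniform(-0.5, 0.5) / dim for _ in range(dim)] for w in vocab}
W_out = {w: [0.0] * dim for w in vocab}

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-max(-20.0, min(20.0, x))))

def train_pair(center, context, label):
    """One SGD step on a (center, context) pair; label 1 = observed, 0 = negative."""
    v, u = W_in[center], W_out[context]
    score = sigmoid(sum(a * b for a, b in zip(v, u)))
    g = lr * (label - score)  # gradient of the log-likelihood
    for i in range(dim):
        # Right-hand side uses the pre-update values of both vectors.
        v[i], u[i] = v[i] + g * u[i], u[i] + g * v[i]

for _ in range(epochs):
    for sent in corpus:
        for pos, center in enumerate(sent):
            lo, hi = max(0, pos - window), min(len(sent), pos + window + 1)
            for ctx in sent[lo:pos] + sent[pos + 1:hi]:
                train_pair(center, ctx, 1)        # observed pair
                for _ in range(negatives):        # sampled negative pairs
                    train_pair(center, random.choice(vocab), 0)

print(len(W_in["kinase"]))  # embedding dimensionality, 50 here
```

In practice one would use an optimized implementation such as the original word2vec tool the paper evaluates; the sketch only makes explicit the hyper-parameters (dimensionality, window size, number of negative samples) whose influence the study measures.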

Citation (APA)

Chiu, B., Crichton, G., Korhonen, A., & Pyysalo, S. (2016). How to train good word embeddings for biomedical NLP. In BioNLP 2016 - Proceedings of the 15th Workshop on Biomedical Natural Language Processing (pp. 166–174). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w16-2922
