Recent advances in distributed language modeling have led to substantial performance gains on a variety of natural language processing (NLP) tasks. However, it is not well understood how these methods may be augmented by knowledge-based approaches. This paper compares the performance and internal representations of an Enhanced Sequential Inference Model (ESIM) across three experimental conditions that differ in the representation method used: Bidirectional Encoder Representations from Transformers (BERT), Embeddings of Semantic Predications (ESP), or Cui2Vec. The methods were evaluated on the Medical Natural Language Inference (MedNLI) subtask of the MEDIQA 2019 shared task. Because this task relies heavily on semantic understanding, it serves as a suitable evaluation set for comparing these representation methods.
Kearns, W. R., Lau, W., & Thomas, J. A. (2019). UW-BHI at MEDIQA 2019: An analysis of representation methods for medical natural language inference. In BioNLP 2019 - SIGBioMed Workshop on Biomedical Natural Language Processing, Proceedings of the 18th BioNLP Workshop and Shared Task (pp. 500–509). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w19-5054