Learning task-specific representation for novel words in sequence labeling

Abstract

Word representation is a key component of neural-network-based sequence labeling systems. However, representations of unseen or rare words learned on the end task alone are usually too poor to support strong performance, a difficulty commonly referred to as the out-of-vocabulary (OOV) problem. In this work, we address the OOV problem in sequence labeling using only the training data of the task. To this end, we propose a novel method that predicts representations for OOV words from their surface forms (e.g., character sequences) and contexts, and that is specifically designed to avoid the error-propagation problem suffered by existing approaches in the same paradigm. To evaluate its effectiveness, we conducted extensive empirical studies on four part-of-speech (POS) tagging tasks and four named entity recognition (NER) tasks. Experimental results show that the proposed method achieves better or competitive performance on the OOV problem compared with existing state-of-the-art methods.
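The abstract describes predicting representations for OOV words from their surface forms, but does not spell out the model. As a hedged illustration only, and not the authors' method, the sketch below uses fastText-style character n-gram averaging (class and function names are hypothetical) to show how a surface form alone can produce a vector for a word never seen in training, placing morphologically related words near each other.

```python
import numpy as np

def char_ngrams(word, n_min=3, n_max=5):
    """Extract character n-grams from a word padded with boundary markers."""
    w = f"<{word}>"
    return [w[i:i + n]
            for n in range(n_min, n_max + 1)
            for i in range(len(w) - n + 1)]

class SurfaceFormEmbedder:
    """Toy surface-form embedder: an OOV word's vector is the mean of
    its character n-gram vectors (here random but cached, so shared
    n-grams induce similarity between related words)."""

    def __init__(self, dim=50, seed=0):
        self.dim = dim
        self.rng = np.random.default_rng(seed)
        self.ngram_vecs = {}

    def _vec(self, ngram):
        # Lazily assign a fixed random vector to each n-gram.
        if ngram not in self.ngram_vecs:
            self.ngram_vecs[ngram] = (
                self.rng.standard_normal(self.dim) / np.sqrt(self.dim))
        return self.ngram_vecs[ngram]

    def embed(self, word):
        # Average the vectors of all character n-grams of the word.
        return np.mean([self._vec(g) for g in char_ngrams(word)], axis=0)
```

Because "tagging" and "taggings" share most of their n-grams, their vectors overlap heavily, while an unrelated word like "xylophone" shares none; a trained model (as in the paper) would learn these n-gram or character representations jointly with context rather than assigning them randomly.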

Citation (APA)

Peng, M., Zhang, Q., Xing, X., Gui, T., Fu, J., & Huang, X. (2019). Learning task-specific representation for novel words in sequence labeling. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2019-August, pp. 5146–5152). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2019/715
