Domain-aware neural model for sequence labeling using joint learning

Citations: 0 · Mendeley readers: 14
Abstract

Recent work has demonstrated the empirical success of deep learning for sequence labeling, but most prior studies focus on word representations within the target sentence. Global information, such as the domain of the target document, has largely been ignored. In this paper, we propose a joint learning neural network that encapsulates global domain knowledge together with local sentence- and token-level information to enhance the sequence labeling model. Unlike existing approaches, the proposed method uses the domain labeling output as latent evidence to facilitate the tagging model, and this joint embedding is generated by an enhanced highway network. A redesigned CRF layer then bridges the local output labels and the global domain information, so that the two kinds of information iteratively contribute to each other; moreover, the domain knowledge can be learned in either a supervised or an unsupervised setting. Experiments on multiple data sets show that the proposed algorithm outperforms classical and recent state-of-the-art labeling methods.
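The sketch below is a rough illustration of the kind of architecture the abstract describes, not the authors' implementation: a BiLSTM token encoder, a sentence-level domain classifier whose (soft) prediction is re-embedded and fed back to the tagger through a highway-style gate, and a joint training loss over both tasks. All layer sizes, the mean-pooled sentence representation, and the per-token softmax decoder (used here in place of the paper's redesigned CRF layer) are assumptions made for brevity.

```python
# Minimal sketch (assumed structure, not the authors' code) of a
# domain-aware sequence tagger trained jointly with a domain classifier.
import torch
import torch.nn as nn


class DomainAwareTagger(nn.Module):
    def __init__(self, vocab_size, num_tags, num_domains,
                 emb_dim=100, hidden_dim=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim // 2,
                               batch_first=True, bidirectional=True)
        # Domain branch: classify the pooled sentence, then re-embed the
        # soft domain prediction so it can be fed back as latent evidence.
        self.domain_clf = nn.Linear(hidden_dim, num_domains)
        self.domain_embed = nn.Linear(num_domains, hidden_dim, bias=False)
        # Highway-style gate mixing token features with domain evidence.
        self.gate = nn.Linear(2 * hidden_dim, hidden_dim)
        self.transform = nn.Linear(2 * hidden_dim, hidden_dim)
        self.emission = nn.Linear(hidden_dim, num_tags)

    def forward(self, token_ids):
        h, _ = self.encoder(self.embed(token_ids))       # (B, T, H)
        sent = h.mean(dim=1)                             # pooled sentence rep
        domain_logits = self.domain_clf(sent)            # (B, D)
        d = self.domain_embed(torch.softmax(domain_logits, dim=-1))
        d = d.unsqueeze(1).expand_as(h)                  # broadcast over tokens
        joint = torch.cat([h, d], dim=-1)
        g = torch.sigmoid(self.gate(joint))              # carry/transform gate
        fused = g * torch.tanh(self.transform(joint)) + (1 - g) * h
        tag_logits = self.emission(fused)                # (B, T, num_tags)
        return tag_logits, domain_logits


# Joint training: sum a token-level tagging loss and a sentence-level
# domain loss so the two tasks can inform each other.
if __name__ == "__main__":
    model = DomainAwareTagger(vocab_size=5000, num_tags=9, num_domains=4)
    tokens = torch.randint(0, 5000, (2, 12))
    tags = torch.randint(0, 9, (2, 12))
    domains = torch.randint(0, 4, (2,))
    tag_logits, domain_logits = model(tokens)
    loss = (nn.functional.cross_entropy(tag_logits.reshape(-1, 9),
                                        tags.reshape(-1))
            + nn.functional.cross_entropy(domain_logits, domains))
    loss.backward()
```

In an unsupervised setting, the domain cross-entropy term would be dropped or replaced, while the gated feedback path stays the same; the paper's CRF-based decoder would replace the per-token softmax used here.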

Citation (APA)

Huang, H., Yan, Y., & Liu, X. (2019). Domain-aware neural model for sequence labeling using joint learning. In The Web Conference 2019 - Proceedings of the World Wide Web Conference, WWW 2019 (pp. 2837–2843). Association for Computing Machinery, Inc. https://doi.org/10.1145/3308558.3313566
