Relation classification via BiLSTM-CNN

Abstract

In the field of sentence-level relation classification, both recurrent neural networks (RNN) and convolutional neural networks (CNN) have achieved great success. These methods do not rely on NLP preprocessing tools such as named entity recognizers (NER). However, CNN and RNN each have their own advantages and disadvantages for relation classification: a CNN is good at capturing local features, while an RNN is good at capturing temporal features, particularly long-distance dependencies between nominal pairs. This paper proposes a BiLSTM-CNN model that combines the two, and compares it against CNN and RNN models individually. BiLSTM-CNN uses an LSTM to extract a sequence of higher-level phrase representations, which are then fed into a CNN to perform relation classification. We conducted extensive experiments on two datasets: the SemEval-2010 Task 8 dataset (https://docs.google.com/View?docid=dfvxd49s_36c28v9pmw) and the KBP37 dataset (https://github.com/zhangdongxu/kbp37). The results indicate that BiLSTM-CNN has the best performance among models in the literature, particularly for long-span relations, and on the KBP37 dataset it achieves a state-of-the-art F1-score.
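As a rough illustration of the pipeline the abstract describes (word embeddings → BiLSTM → CNN → pooling → softmax), the sketch below traces the tensor shape at each stage. All dimensions here are illustrative assumptions, not the paper's reported hyperparameters; the 19-class output reflects SemEval-2010 Task 8, which has 9 directed relations plus an "Other" class.

```python
# Shape-level sketch of a BiLSTM-CNN relation classifier.
# Hyperparameters below are assumed for illustration only.

def bilstm_cnn_shapes(seq_len, embed_dim=100, hidden=100,
                      window=3, n_filters=150, n_classes=19):
    """Trace the tensor shape produced by each stage of the model."""
    # 1. Word embeddings: one embed_dim vector per token.
    x = (seq_len, embed_dim)
    # 2. BiLSTM: forward and backward hidden states are concatenated,
    #    giving a higher-level representation at every position.
    h = (seq_len, 2 * hidden)
    # 3. Convolution over `window` consecutive positions ('same'-style
    #    padding assumed, so the sequence length is preserved).
    c = (seq_len, n_filters)
    # 4. Max-pooling over time collapses the sequence axis to a
    #    fixed-size sentence vector.
    p = (n_filters,)
    # 5. Softmax layer scores each candidate relation class.
    out = (n_classes,)
    return [x, h, c, p, out]
```

For example, `bilstm_cnn_shapes(10)` yields `[(10, 100), (10, 200), (10, 150), (150,), (19,)]`, showing how the sequence axis survives through the convolution and is only removed by pooling, which is what lets the model classify sentences of any length.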

Citation (APA)

Zhang, L., & Xiang, F. (2018). Relation classification via BiLSTM-CNN. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10943 LNCS, pp. 373–382). Springer Verlag. https://doi.org/10.1007/978-3-319-93803-5_35
