In recent years, deep neural networks have achieved significant success in relation classification and many other natural language processing tasks. However, existing neural networks for relation classification rely heavily on the quality of labelled data and tend to be overconfident in the presence of noise in the input signals, which limits their robustness and generalization. In this paper, we apply adversarial training to relation classification by adding perturbations to the input vectors of a bidirectional long short-term memory (BiLSTM) network rather than to the original input itself. In addition, we propose an attention-based gate module, which not only discerns the important information when learning sentence representations but also adaptively combines sentence-level and lexical-level features. Experiments on the SemEval-2010 Task 8 benchmark dataset show that our model significantly outperforms other state-of-the-art models.
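To make the adversarial-training idea more concrete, the following is a minimal, hypothetical PyTorch sketch, not the authors' implementation: the word embeddings fed to a BiLSTM are perturbed in the direction of the loss gradient (an FGM-style perturbation), and the model is trained on both the clean and the perturbed inputs. All names (BiLSTMClassifier, adversarial_step, epsilon), the mean-pooling of BiLSTM states, and the L2 normalization of the perturbation are illustrative assumptions rather than details taken from the paper.

```python
# Sketch of adversarial training on input embeddings for relation classification.
# Assumptions: FGM-style perturbation, mean-pooled BiLSTM states, 19 classes
# (SemEval-2010 Task 8: 18 directed relations plus "Other").
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim, num_classes):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward_from_embeddings(self, embeds):
        # embeds: (batch, seq_len, embed_dim); mean-pool the BiLSTM outputs
        outputs, _ = self.lstm(embeds)
        return self.classifier(outputs.mean(dim=1))

    def forward(self, token_ids):
        return self.forward_from_embeddings(self.embedding(token_ids))


def adversarial_step(model, criterion, token_ids, labels, epsilon=0.05):
    """Return the combined clean + adversarial loss for one batch."""
    embeds = model.embedding(token_ids)
    clean_loss = criterion(model.forward_from_embeddings(embeds), labels)

    # Gradient of the clean loss w.r.t. the embedded inputs (not the weights)
    grad, = torch.autograd.grad(clean_loss, embeds, retain_graph=True)

    # Perturbation along the gradient direction, L2-normalized per token;
    # it is treated as a constant (no gradient flows through its construction)
    r_adv = epsilon * grad / (grad.norm(dim=-1, keepdim=True) + 1e-12)
    adv_loss = criterion(model.forward_from_embeddings(embeds + r_adv), labels)
    return clean_loss + adv_loss


# Hypothetical usage: one optimization step on a toy batch
model = BiLSTMClassifier(vocab_size=1000, embed_dim=50,
                         hidden_dim=64, num_classes=19)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

token_ids = torch.randint(0, 1000, (8, 20))  # 8 sentences of length 20
labels = torch.randint(0, 19, (8,))
optimizer.zero_grad()
loss = adversarial_step(model, criterion, token_ids, labels)
loss.backward()
optimizer.step()
```

Perturbing the embedded inputs rather than the discrete tokens is what makes this form of adversarial training applicable to text, since gradients are well defined in the continuous embedding space.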
Cao, P., Chen, Y., Liu, K., & Zhao, J. (2019). Adversarial training for relation classification with attention based gate mechanism. In Communications in Computer and Information Science (Vol. 957, pp. 91–102). Springer Verlag. https://doi.org/10.1007/978-981-13-3146-6_8