TransS-driven joint learning architecture for implicit discourse relation recognition


Abstract

Implicit discourse relation recognition is a challenging task due to the lack of connectives as strong linguistic clues. Previous methods primarily encode the two arguments separately or extract specific interaction patterns for the task, and thus do not fully exploit the annotated relation signal. We therefore propose a novel TransS-driven joint learning architecture to address these issues. Specifically, on top of a multi-level encoder, we 1) translate discourse relations in a low-dimensional embedding space (called TransS), which can mine the latent geometric structure of argument-relation instances; 2) further exploit the semantic features of the arguments to assist discourse understanding; and 3) jointly learn 1) and 2) so that they mutually reinforce each other, yielding better argument representations and improving performance on the task. Extensive experiments on the Penn Discourse TreeBank (PDTB) show that our model achieves competitive results against several state-of-the-art systems.
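
The abstract describes TransS as translating discourse relations in a low-dimensional embedding space over argument pairs. Below is a minimal PyTorch-style sketch of such a translation objective, assuming a TransE-like formulation where the projected first argument plus a learned relation vector should land near the projected second argument (arg1 + r ≈ arg2). All names, the projection layer, and the margin-based ranking loss are illustrative assumptions, not the authors' exact model.

```python
# Minimal sketch of a TransE-style translation objective over argument pairs,
# as suggested by the abstract. Encoder, dimensions, and loss are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransSScorer(nn.Module):
    def __init__(self, hidden_dim: int, trans_dim: int, num_relations: int):
        super().__init__()
        # Project encoder outputs into the low-dimensional translation space.
        self.proj = nn.Linear(hidden_dim, trans_dim)
        # One translation vector per discourse relation
        # (e.g. the four PDTB top-level senses).
        self.rel_emb = nn.Embedding(num_relations, trans_dim)

    def score(self, arg1_vec, arg2_vec, rel_ids):
        # Distance of the translated arg1 from arg2: smaller = better fit.
        h1 = self.proj(arg1_vec)
        h2 = self.proj(arg2_vec)
        r = self.rel_emb(rel_ids)
        return torch.norm(h1 + r - h2, p=2, dim=-1)

    def margin_loss(self, arg1_vec, arg2_vec, pos_rel, neg_rel, margin=1.0):
        # Margin-based ranking loss: the gold relation should translate arg1
        # closer to arg2 than a corrupted (negative) relation does.
        pos = self.score(arg1_vec, arg2_vec, pos_rel)
        neg = self.score(arg1_vec, arg2_vec, neg_rel)
        return F.relu(margin + pos - neg).mean()

# Usage with placeholder encoder outputs (batch of 8, hidden size 256):
if __name__ == "__main__":
    scorer = TransSScorer(hidden_dim=256, trans_dim=64, num_relations=4)
    a1, a2 = torch.randn(8, 256), torch.randn(8, 256)
    gold = torch.randint(0, 4, (8,))
    corrupt = (gold + torch.randint(1, 4, (8,))) % 4  # a different relation per example
    loss = scorer.margin_loss(a1, a2, gold, corrupt)
    loss.backward()
```

In the joint setup sketched by the abstract, a loss of this kind would be optimized together with the relation-classification loss computed from the same argument encodings, so that the translation structure and the semantic features reinforce each other.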

Citation (APA)

He, R., Wang, J., Guo, F., & Han, Y. (2020). TransS-driven joint learning architecture for implicit discourse relation recognition. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 139–148). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.acl-main.14
