Neural discourse models proposed so far tend to be highly sophisticated and tuned to specific label sets. They are effective, but unwieldy to deploy or repurpose for different label sets or languages. Here, we propose a robust neural classifier for non-explicit discourse relations in both English and Chinese on the CoNLL 2016 Shared Task datasets. Our model requires only word vectors and a simple feed-forward training procedure, which we have previously shown to work better than more sophisticated neural architectures such as the long short-term memory (LSTM) model. Our Chinese model outperforms the feature-based model and performs competitively against other teams' systems. Our model obtains state-of-the-art results on the English blind test set, which is used as the main criterion in this competition.
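To make the abstract's description concrete, the following is a minimal sketch (not the authors' code) of the kind of model it describes: each discourse argument is summarized by pooling its word vectors, and a small feed-forward network predicts the relation label. All names, dimensions, the pooling choice, and the label count below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

EMB_DIM = 100      # assumed word-vector dimensionality
HIDDEN_DIM = 300   # assumed hidden-layer size
NUM_LABELS = 15    # illustrative number of non-explicit sense labels

# Hypothetical pretrained word vectors: token -> vector
word_vectors = {w: rng.normal(size=EMB_DIM) for w in
                ["the", "market", "fell", "investors", "panicked"]}

def pool(tokens):
    """Average the word vectors of one discourse argument."""
    vecs = [word_vectors[t] for t in tokens if t in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(EMB_DIM)

# Feed-forward parameters, randomly initialized; training would fit them
# with a cross-entropy loss and gradient descent (omitted for brevity).
W1 = rng.normal(scale=0.1, size=(2 * EMB_DIM, HIDDEN_DIM))
b1 = np.zeros(HIDDEN_DIM)
W2 = rng.normal(scale=0.1, size=(HIDDEN_DIM, NUM_LABELS))
b2 = np.zeros(NUM_LABELS)

def predict(arg1_tokens, arg2_tokens):
    """Forward pass: pooled arguments -> hidden layer -> softmax over labels."""
    x = np.concatenate([pool(arg1_tokens), pool(arg2_tokens)])
    h = np.tanh(x @ W1 + b1)
    logits = h @ W2 + b2
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

# Example call on a toy (implicit) argument pair
print(predict(["the", "market", "fell"], ["investors", "panicked"]).argmax())
```

Because the classifier sees only pooled word vectors rather than a hand-engineered feature set, swapping in a different label set or language would, under these assumptions, only require changing the label inventory and the pretrained vectors.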
CITATION STYLE
Rutherford, A. T., & Xue, N. (2016). Robust non-explicit neural discourse parser in English and Chinese. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning: Shared Task, CoNLL 2016 (pp. 55–59). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/k16-2007