Recently, sequence-to-sequence models have achieved impressive performance on a number of semantic parsing tasks. However, they often do not exploit available linguistic resources, even though these, when employed correctly, are likely to increase performance further. Research in neural machine translation has shown that exploiting this information has considerable potential, especially in a multi-encoder setup. We employ a range of semantic and syntactic resources to improve performance on the task of Discourse Representation Structure parsing. We show that (i) linguistic features can be beneficial for neural semantic parsing and (ii) the best method of adding these features is by using multiple encoders.
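The multi-encoder idea mentioned in the abstract can be sketched as follows. This is a minimal illustration only, not the authors' implementation: it assumes simple mean-pooling encoders over NumPy embedding tables, whereas a real system would use trained RNN or Transformer encoders.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(token_ids, emb):
    # Mean-pool token embeddings: a stand-in for a trained encoder.
    return emb[token_ids].mean(axis=0)

d = 8
emb_words = rng.normal(size=(100, d))  # word-level vocabulary (hypothetical sizes)
emb_tags = rng.normal(size=(20, d))    # vocabulary of a linguistic feature, e.g. POS tags

words = [5, 17, 42]  # token ids of the input sentence
tags = [3, 7, 1]     # ids of the corresponding linguistic features

# Multi-encoder setup: each input view gets its own encoder; the decoder
# would then condition on the concatenated per-encoder representations.
context = np.concatenate([encode(words, emb_words), encode(tags, emb_tags)])
print(context.shape)  # (16,) — two encoders of dimension d
```

The key design point is that each linguistic resource is encoded separately rather than concatenated with the words at the input layer, so the decoder can attend to each view independently.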
van Noord, R., Toral, A., & Bos, J. (2019). Linguistic information in neural semantic parsing with multiple encoders. In IWCS 2019 - Proceedings of the 13th International Conference on Computational Semantics - Short Papers (pp. 24–31). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w19-0504