Comparison by Conversion: Reverse-Engineering UCCA from Syntax and Lexical Semantics

Abstract

Building robust natural language understanding systems will require a clear characterization of whether and how various linguistic meaning representations complement each other. To perform a systematic comparative analysis, we evaluate the mapping between meaning representations from different frameworks using two complementary methods: (i) a rule-based converter, and (ii) a supervised delexicalized parser that parses to one framework using only information from the other as features. We apply these methods to convert the STREUSLE corpus (with syntactic and lexical semantic annotations) to UCCA (a graph-structured full-sentence meaning representation). Both methods yield surprisingly accurate target representations, close to fully supervised UCCA parser quality—indicating that UCCA annotations are partially redundant with STREUSLE annotations. Despite this substantial convergence between frameworks, we find several important areas of divergence.
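The abstract does not spell out the conversion rules themselves. As a purely illustrative sketch, the snippet below shows what a rule-based syntax-to-UCCA mapping might look like, using a hypothetical table from Universal Dependencies relations to UCCA edge categories; the paper's actual converter additionally draws on STREUSLE's lexical-semantic supersenses and multiword-expression annotations, which this toy example omits.

    # Illustrative sketch only: a toy rule-based mapping from Universal Dependencies
    # relations to UCCA edge categories. The relation table and fallback below are
    # hypothetical simplifications, not the converter described in the paper.

    # Hypothetical mapping from UD relations to UCCA categories
    # (P = Process, A = Participant, D = Adverbial, E = Elaborator, C = Center).
    UD_TO_UCCA = {
        "root": "P",       # main predicate  -> Process
        "nsubj": "A",      # nominal subject -> Participant
        "obj": "A",        # direct object   -> Participant
        "advmod": "D",     # adverbial mod.  -> Adverbial
        "amod": "E",       # adjectival mod. -> Elaborator
    }

    def convert(tokens):
        """Map each (form, head_index, deprel) triple to a UCCA-style labeled edge.

        Unknown relations fall back to Center ("C") purely for illustration.
        """
        edges = []
        for i, (form, head, deprel) in enumerate(tokens):
            category = UD_TO_UCCA.get(deprel, "C")
            edges.append((head, i, category, form))
        return edges

    # Example: "John quickly read books" (head indices are 0-based; -1 marks the root)
    sentence = [
        ("John", 2, "nsubj"),
        ("quickly", 2, "advmod"),
        ("read", -1, "root"),
        ("books", 2, "obj"),
    ]
    for head, dep, cat, form in convert(sentence):
        print(f"{form}: {cat} (head={head})")

Running this prints one labeled edge per token (e.g. "John: A"), which conveys the flavor of deterministic conversion; the paper's evaluation then compares such rule-derived graphs, and the output of a delexicalized parser, against gold UCCA annotations.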

Citation (APA)

Hershcovich, D., Schneider, N., Dvir, D., Prange, J., de Lhoneux, M., & Abend, O. (2020). Comparison by Conversion: Reverse-Engineering UCCA from Syntax and Lexical Semantics. In COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Conference (pp. 2947–2966). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.coling-main.264
