AMR Parsing is Far from Solved: GrAPES, the Granular AMR Parsing Evaluation Suite

Abstract

We present the Granular AMR Parsing Evaluation Suite (GrAPES), a challenge set for Abstract Meaning Representation (AMR) parsing with accompanying evaluation metrics. AMR parsers now obtain high scores on the standard AMR evaluation metric Smatch, close to or even above reported inter-annotator agreement. But that does not mean that AMR parsing is solved; in fact, human evaluation in previous work indicates that current parsers still quite frequently make errors on node labels or graph structure that substantially distort sentence meaning. Here, we provide an evaluation suite that tests AMR parsers on a range of phenomena of practical, technical, and linguistic interest. Our 36 categories range from seen and unseen labels, to structural generalization, to coreference. GrAPES reveals in depth the abilities and shortcomings of current AMR parsers.
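For readers unfamiliar with the Smatch metric mentioned above: Smatch scores a predicted AMR graph against a gold graph by matching (source, relation, target) triples under the best possible mapping between the two graphs' variables, and reporting F1 over matched triples. The sketch below is a hedged illustration of that F1 computation only, not code from GrAPES or the official smatch package; the variable mapping is assumed to be given, whereas real Smatch searches for it with hill climbing.

```python
def triple_f1(predicted, gold, var_map):
    """Smatch-style precision/recall/F1 over AMR triples.

    predicted, gold: sets of (source, relation, target) triples.
    var_map: assumed pre-computed mapping from predicted variables to gold
             variables (real Smatch searches for the best such mapping).
    """
    rename = lambda t: (var_map.get(t[0], t[0]), t[1], var_map.get(t[2], t[2]))
    mapped = {rename(t) for t in predicted}
    matched = len(mapped & gold)
    precision = matched / len(predicted) if predicted else 0.0
    recall = matched / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1


if __name__ == "__main__":
    # Gold graph for "The boy wants to sleep":
    # (w / want-01 :ARG0 (b / boy) :ARG1 (s / sleep-01 :ARG0 b))
    gold = {
        ("w", "instance", "want-01"), ("b", "instance", "boy"),
        ("s", "instance", "sleep-01"), ("w", "ARG0", "b"),
        ("w", "ARG1", "s"), ("s", "ARG0", "b"),
    }
    # A hypothetical parse that drops the reentrant :ARG0 edge on sleep-01
    predicted = {
        ("x", "instance", "want-01"), ("y", "instance", "boy"),
        ("z", "instance", "sleep-01"), ("x", "ARG0", "y"), ("x", "ARG1", "z"),
    }
    print(triple_f1(predicted, gold, {"x": "w", "y": "b", "z": "s"}))
    # -> (1.0, 0.833..., 0.909...): the parse loses the coreference edge yet
    #    still scores high, the kind of gap GrAPES is designed to expose.
```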

Citation (APA)
Groschwitz, J., Cohen, S. B., Donatelli, L., & Fowlie, M. (2023). AMR Parsing is Far from Solved: GrAPES, the Granular AMR Parsing Evaluation Suite. In EMNLP 2023 - 2023 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 10728–10752). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.emnlp-main.662
