SemEval-2016 task 8: Meaning representation parsing

52 citations · 97 Mendeley readers

Abstract

In this report we summarize the results of the SemEval 2016 Task 8: Meaning Representation Parsing. Participants were asked to generate Abstract Meaning Representation (AMR) (Banarescu et al., 2013) graphs for a set of English sentences in the news and discussion forum domains. Eleven sites submitted valid systems. The availability of state-of-the-art baseline systems was a key factor in lowering the bar to entry; many submissions relied on CAMR (Wang et al., 2015b; Wang et al., 2015a) as a baseline system and added extensions to it to improve scores. The evaluation set was quite difficult to parse, particularly due to creative approaches to word representation in the web forum portion. The top scoring systems scored 0.62 F1 according to the Smatch (Cai and Knight, 2013) evaluation heuristic. We show some sample sentences along with a comparison of system parses and perform quantitative ablative studies.
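The Smatch metric mentioned above scores a predicted AMR by treating both graphs as sets of relation triples and computing an F1 over matched triples. The sketch below illustrates that F1 computation on a toy example; it is a simplification that assumes the variable alignment between the two graphs is already fixed, whereas real Smatch hill-climbs over one-to-one variable mappings to find the alignment that maximizes the match. The function name and the example triples are illustrative, not from the paper.

```python
# Toy illustration of Smatch-style F1 over AMR relation triples.
# Assumption: variable names are already aligned between gold and
# predicted graphs (real Smatch searches over alignments).

def smatch_f1(gold_triples, pred_triples):
    """Precision/recall/F1 over matched (source, relation, target) triples."""
    matched = len(set(gold_triples) & set(pred_triples))
    precision = matched / len(pred_triples) if pred_triples else 0.0
    recall = matched / len(gold_triples) if gold_triples else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Triples for "The boy wants to go": instance triples plus role edges.
gold = [("w", "instance", "want-01"), ("b", "instance", "boy"),
        ("g", "instance", "go-01"), ("w", "ARG0", "b"),
        ("w", "ARG1", "g"), ("g", "ARG0", "b")]
pred = gold[:5]  # hypothetical parse missing the reentrant ARG0 edge

print(round(smatch_f1(gold, pred), 3))  # → 0.909
```

With five of six gold triples recovered and no spurious ones, precision is 1.0 and recall 5/6, giving F1 ≈ 0.91; the 0.62 F1 reported for the top systems reflects this same triple-level matching aggregated over the whole evaluation set.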

Citation (APA)

May, J. (2016). SemEval-2016 task 8: Meaning representation parsing. In SemEval 2016 - 10th International Workshop on Semantic Evaluation, Proceedings (pp. 1063–1073). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/s16-1166
