We build a reference for the task of Open Information Extraction over five documents. We tentatively resolve a number of issues that arise during annotation, including coreference and granularity, and we take steps toward addressing inference, a significant problem. We seek to better pinpoint the requirements of the task, and produce annotation guidelines specifying what is correct to extract and what is not. In turn, we use this reference to score existing Open IE systems. We address the non-trivial problem of evaluating the extractions produced by the systems against the reference tuples, and share our evaluation script. Among the seven extractors compared, we find MinIE to perform best.
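The abstract does not spell out how predicted extractions are matched against reference tuples, so the sketch below is illustrative only: a minimal token-overlap scorer for comparing a predicted (subject, relation, object) tuple to a gold one. The function name, tokenization, and matching rule are assumptions for illustration, not the paper's evaluation script.

```python
from typing import Tuple

Triple = Tuple[str, str, str]  # (subject, relation, object)


def token_overlap(pred: Triple, gold: Triple) -> Tuple[float, float]:
    """Return (precision, recall) of predicted tokens against gold tokens,
    computed part by part and pooled over the whole tuple.
    Illustrative scheme only; the actual WiRe57 metric may differ."""
    matched = pred_total = gold_total = 0
    for p_part, g_part in zip(pred, gold):
        p_tokens = p_part.lower().split()
        g_tokens = g_part.lower().split()
        pred_total += len(p_tokens)
        gold_total += len(g_tokens)
        # count predicted tokens that also occur in the matching gold part
        g_remaining = list(g_tokens)
        for tok in p_tokens:
            if tok in g_remaining:
                g_remaining.remove(tok)
                matched += 1
    precision = matched / pred_total if pred_total else 0.0
    recall = matched / gold_total if gold_total else 0.0
    return precision, recall


if __name__ == "__main__":
    # hypothetical example tuples
    gold = ("Marie Curie", "was born in", "Warsaw")
    pred = ("Curie", "was born", "in Warsaw")
    p, r = token_overlap(pred, gold)
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    print(f"precision={p:.2f} recall={r:.2f} f1={f1:.2f}")
```

Under such a scheme, per-tuple precision and recall would be aggregated over all matched tuple pairs to yield system-level scores; for the exact protocol, refer to the evaluation script released with the paper.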
Léchelle, W., Gotti, F., & Langlais, P. (2019). WiRe57: A fine-grained benchmark for open information extraction. In LAW 2019 - 13th Linguistic Annotation Workshop, Proceedings of the Workshop (pp. 6–15). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w19-4002