An empirical comparison of unsupervised constituency parsing methods

Abstract

Unsupervised constituency parsing aims to learn a constituency parser from a training corpus without parse tree annotations. While many methods have been proposed to tackle the problem, including statistical and neural methods, their experimental results are often not directly comparable due to discrepancies in datasets, data preprocessing, lexicalization, and evaluation metrics. In this paper, we first examine experimental settings used in previous work and propose to standardize the settings for better comparability between methods. We then empirically compare several existing methods, including decade-old and newly proposed ones, under the standardized settings on English and Japanese, two languages with different branching tendencies. We find that recent models do not show a clear advantage over decade-old models in our experiments. We hope our work can provide new insights into existing methods and facilitate future empirical evaluation of unsupervised constituency parsing.
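The evaluation-metric discrepancies mentioned above typically concern unlabeled bracketing F1, the standard score for comparing an induced parse against a gold tree. A minimal sketch of how it is computed over constituent spans (the function name and the toy spans are illustrative, not taken from the paper):

```python
def bracket_f1(gold_spans, pred_spans):
    """Unlabeled bracketing precision, recall, and F1.

    Each argument is a collection of (start, end) index pairs
    marking the constituents of one sentence's tree.
    """
    gold, pred = set(gold_spans), set(pred_spans)
    matched = len(gold & pred)  # spans present in both trees
    precision = matched / len(pred) if pred else 0.0
    recall = matched / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy example: 3 of 4 predicted brackets match the gold tree.
gold = [(0, 5), (0, 2), (2, 5), (3, 5)]
pred = [(0, 5), (1, 3), (2, 5), (3, 5)]
p, r, f = bracket_f1(gold, pred)  # 0.75, 0.75, 0.75
```

Settings such as whether trivial spans (the whole sentence, single words) are counted, and whether F1 is averaged per sentence or computed corpus-wide, are exactly the kind of choices the paper argues must be standardized for fair comparison.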

Citation

Li, J., Cao, Y., Cai, J., Jiang, Y., & Tu, K. (2020). An empirical comparison of unsupervised constituency parsing methods. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 3278–3283). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.acl-main.300
