Improving sequence-to-sequence constituency parsing

Citations: 11 · Mendeley readers: 25

Abstract

Sequence-to-sequence constituency parsing casts tree-structured prediction as a general sequential problem via top-down tree linearization, which makes it easy to train in parallel on distributed hardware. Despite this success, it relies on a general-purpose probabilistic attention mechanism, which cannot guarantee that the selected context is informative for the specific parsing scenario. Previous work introduced a deterministic attention that selects informative context for sequence-to-sequence parsing, but it operates on the bottom-up linearization, even though top-down linearization has been observed to outperform bottom-up linearization for standard sequence-to-sequence constituency parsing. In this paper, we therefore extend deterministic attention to operate directly on the top-down tree linearization. Extensive experiments show that our parser delivers substantial accuracy improvements over the bottom-up linearization, achieving a 92.3 F-score on Penn English Treebank section 23 and an 85.4 F-score on the Penn Chinese Treebank test set, without reranking or semi-supervised training.
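To make the top-down tree linearization concrete, the sketch below flattens a small constituency tree into the kind of bracketed token sequence a sequence-to-sequence decoder would emit. The nested-tuple tree representation and the `XX` placeholder for word positions are illustrative assumptions in the style of earlier seq2seq parsing work, not necessarily the exact format used in this paper.

```python
# Minimal sketch of top-down tree linearization for seq2seq parsing.
# The tree encoding (nested tuples) and the XX word placeholder are
# assumptions for illustration; the paper's exact tokenization may differ.

def linearize_top_down(tree):
    """Flatten a constituency tree into a bracketed token sequence,
    visiting each node before its children (top-down, left-to-right)."""
    if isinstance(tree, str):
        # A word position: masked with a placeholder token.
        return ["XX"]
    label, children = tree[0], tree[1:]
    return (["(" + label]
            + [tok for child in children for tok in linearize_top_down(child)]
            + [")" + label])

# (S (NP the cat) (VP sleeps))
tree = ("S", ("NP", "the", "cat"), ("VP", "sleeps"))
print(" ".join(linearize_top_down(tree)))
# -> (S (NP XX XX )NP (VP XX )VP )S
```

The decoder then predicts this token sequence left to right, so the tree is recovered by matching each `)LABEL` token to its open bracket.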

Cite (APA)

Liu, L., Zhu, M., & Shi, S. (2018). Improving sequence-to-sequence constituency parsing. In 32nd AAAI Conference on Artificial Intelligence, AAAI 2018 (pp. 4873–4880). AAAI press. https://doi.org/10.1609/aaai.v32i1.11917
