The spinal tree-adjoining grammar (TAG) parsing model of [Carreras 08] achieves the current state-of-the-art constituent parsing accuracy in the standard English Penn Treebank evaluation setting. Unfortunately, the model suffers from low parsing efficiency, since its Eisner-CKY style parsing algorithm requires O(n^4) time for an input of length n. This paper investigates a more practical solution and presents a beam search shift-reduce algorithm for spinal TAG parsing. Since the algorithm runs in O(bn) time, where b is the beam width, it can be expected to improve parsing speed substantially. However, to achieve faster parsing it must prune a large number of candidates in an exponentially large search space, and it therefore often suffers from severe search errors. In fact, our experiments show that a basic beam search shift-reduce parser does not work well for spinal TAG. To alleviate this problem, we extend the proposed shift-reduce algorithm with two techniques: the dynamic programming of [Huang 10a] and supertagging. The extended parsing algorithm is about 8 times faster than the Berkeley parser, a constituent parser well known for its speed, while offering state-of-the-art accuracy. Moreover, we conduct experiments on the Keyaki Treebank for Japanese to show that the good performance of the proposed parser is language-independent.
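
To make the O(bn) claim concrete, the following is a minimal, hypothetical Python sketch of a generic beam-search shift-reduce loop. It is not the authors' implementation: initial_state, expand (a SHIFT/REDUCE successor function), score (a model scoring function), and beam_width are illustrative placeholders.

from typing import Callable, Iterable, List

def beam_parse(
    initial_state: object,
    n_words: int,
    expand: Callable[[object], Iterable[object]],  # hypothetical: SHIFT/REDUCE successors of a state
    score: Callable[[object], float],              # hypothetical: model score of a state
    beam_width: int = 8,
) -> object:
    """Generic beam-search shift-reduce loop.

    The outer loop performs O(n) actions and each step keeps at most
    beam_width states, which is where the O(bn) running time comes from;
    the aggressive pruning is also the source of the search errors
    mentioned in the abstract.
    """
    beam: List[object] = [initial_state]
    for _ in range(2 * n_words - 1):        # a complete derivation uses about 2n actions
        candidates: List[object] = []
        for state in beam:                  # at most b states are expanded per step
            candidates.extend(expand(state))
        if not candidates:                  # no successor states: stop early
            break
        # prune to the b highest-scoring candidates
        beam = sorted(candidates, key=score, reverse=True)[:beam_width]
    return max(beam, key=score)

The dynamic programming and supertagging extensions described in the paper would change how equivalent states are merged and how candidate actions are generated, respectively; they are not reflected in this sketch.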
Hayashi, K., Suzuki, J., & Nagata, M. (2016). Shift-reduce spinal TAG parsing with dynamic programming. Transactions of the Japanese Society for Artificial Intelligence, 31(2). https://doi.org/10.1527/tjsai.J-F83