We study the impact of richer syntactic dependencies on the performance of the structured language model (SLM) along three dimensions: parsing accuracy (LP/LR), perplexity (PPL), and word error rate (WER, N-best re-scoring). We show that our models achieve an improvement in LP/LR, PPL, and/or WER over the reported baseline results using the SLM on the UPenn Treebank and Wall Street Journal (WSJ) corpora, respectively. Analysis of parsing performance shows a correlation between the quality of the parser (as measured by precision/recall) and the language model performance (PPL and WER). A remarkable fact is that the enriched SLM outperforms the baseline 3-gram model in terms of WER by 10% when used in isolation as a second-pass (N-best re-scoring) language model.
Xu, P., Chelba, C., & Jelinek, F. (2002). A study on richer syntactic dependencies for structured language modeling. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 2002-July, pp. 191–198). Association for Computational Linguistics (ACL). https://doi.org/10.3115/1073083.1073116