Do we need bigram alignment models? On the effect of alignment quality on transduction accuracy in G2P

Abstract

We investigate the need for bigram alignment models and the benefit of supervised alignment techniques in grapheme-to-phoneme (G2P) conversion. Moreover, we quantitatively estimate the relationship between alignment quality and overall G2P system performance. We find that, in English, bigram alignment models do perform better than unigram alignment models on the G2P task. We also find that supervised alignment techniques may perform considerably better than their unsupervised counterparts and that few manually aligned training pairs suffice for them to do so. Finally, we estimate that alignment quality has a highly significant impact on overall G2P transcription performance and that this relationship is linear in nature.
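To make the unigram/bigram distinction concrete, the following is a minimal, purely illustrative Python sketch (not the paper's implementation): it scores an already-aligned grapheme-phoneme sequence ("graphones") under a unigram model, where each aligned unit is treated independently, versus a bigram model, where each unit is conditioned on the previous one. The toy corpus, phoneme symbols, and add-one smoothing are assumptions chosen only for illustration.

    # Illustrative sketch: unigram vs. bigram scoring over graphone units.
    from collections import Counter
    from math import log

    # Toy aligned corpus: each word is a list of (grapheme substring,
    # phoneme substring) pairs; "_" marks an empty phoneme.
    aligned_corpus = [
        [("ph", "f"), ("o", "oU"), ("n", "n"), ("e", "_")],   # "phone"
        [("ph", "f"), ("o", "@"), ("t", "t"), ("o", "oU")],   # "photo"
        [("n", "n"), ("o", "oU"), ("t", "t"), ("e", "_")],    # "note"
    ]

    BOS = ("<s>", "<s>")  # sentinel context for the bigram model

    unigram_counts, bigram_counts = Counter(), Counter()
    for word in aligned_corpus:
        prev = BOS
        for unit in word:
            unigram_counts[unit] += 1
            bigram_counts[(prev, unit)] += 1
            prev = unit

    total_units = sum(unigram_counts.values())
    vocab = set(unigram_counts)

    def unigram_logprob(word):
        # Each graphone scored independently, with add-one smoothing.
        return sum(log((unigram_counts[u] + 1) / (total_units + len(vocab)))
                   for u in word)

    def bigram_logprob(word):
        # Each graphone conditioned on the previous graphone.
        lp, prev = 0.0, BOS
        for u in word:
            context_total = sum(c for (p, _), c in bigram_counts.items() if p == prev)
            lp += log((bigram_counts[(prev, u)] + 1) / (context_total + len(vocab)))
            prev = u
        return lp

    test = [("ph", "f"), ("o", "oU"), ("t", "t"), ("o", "oU")]
    print("unigram log-prob:", unigram_logprob(test))
    print("bigram  log-prob:", bigram_logprob(test))

In a full G2P pipeline the alignments themselves would be induced (unsupervised, e.g. by EM, or supervised from manually aligned pairs) rather than given, which is exactly the quality dimension the paper measures.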

Cite (APA)

Eger, S. (2015). Do we need bigram alignment models? On the effect of alignment quality on transduction accuracy in G2P. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (pp. 1175–1185). Association for Computational Linguistics. https://doi.org/10.18653/v1/d15-1139
