Statistical multi-stream modeling of real-time MRI articulatory speech data

Abstract

This paper investigates different statistical modeling frameworks for articulatory speech data obtained using real-time (RT) magnetic resonance imaging (MRI). To quantitatively capture the spatio-temporal shaping process of the human vocal tract during speech production, a multi-dimensional stream of direct image features is extracted automatically from the MRI recordings. The features are closely related, though not identical, to the tract variables commonly defined in articulatory phonology theory. The modeling of the shaping process aims at decomposing the articulatory data streams into primitives by segmentation. A variety of approaches is investigated for carrying out the segmentation task, including vector quantizers, Gaussian Mixture Models, Hidden Markov Models, and a coupled Hidden Markov Model. We evaluate the performance of the different segmentation schemes qualitatively with the help of a well-understood data set that was used in an earlier study of inter-articulatory timing phenomena of American English nasal sounds. © 2010 ISCA.
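To make the segmentation idea concrete, the sketch below shows one of the simpler approaches the abstract names: fitting a Gaussian Mixture Model to frames of a multi-dimensional feature stream and placing segment boundaries where the most likely mixture component changes. This is an illustrative assumption using synthetic data and scikit-learn, not the authors' implementation; the feature dimensionality, component count, and data are placeholders. The HMM and coupled-HMM variants discussed in the paper additionally model temporal dependencies between frames and between streams.

```python
# Minimal sketch (assumptions, not the paper's setup): frame-wise GMM labeling
# of a synthetic multi-dimensional articulatory feature stream, with segment
# boundaries placed where the most likely mixture component changes.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic stand-in for an articulatory feature stream:
# three "regimes", each a cluster in a 4-dimensional feature space.
frames = np.concatenate([
    rng.normal(loc=m, scale=0.3, size=(60, 4))
    for m in (0.0, 1.5, -1.0)
])

# Fit a GMM over all frames and assign each frame to its most likely component.
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
labels = gmm.fit_predict(frames)

# Segment boundaries = frame indices where the component label changes.
boundaries = np.flatnonzero(np.diff(labels) != 0) + 1
segments = np.split(np.arange(len(frames)), boundaries)

for seg in segments:
    print(f"frames {seg[0]:3d}-{seg[-1]:3d} -> component {labels[seg[0]]}")
```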

Cite

APA

Bresch, E., Katsamanis, A., Goldstein, L., & Narayanan, S. (2010). Statistical multi-stream modeling of real-time MRI articulatory speech data. In Proceedings of the 11th Annual Conference of the International Speech Communication Association, INTERSPEECH 2010 (pp. 1584–1587). International Speech Communication Association. https://doi.org/10.21437/interspeech.2010-460
