Speaker-dependent Malay vowel recognition for a child with articulation disorder using multi-layer perceptron

Abstract

This paper investigates the use of a Neural Network in recognizing six Malay vowels produced by a child with an articulation disorder in a speaker-dependent manner. The child was identified as having articulation errors in consonant sounds but not in vowel sounds. The speech sounds were recorded at a sampling rate of 20 kHz with 16-bit resolution. Linear Predictive Coding (LPC) was used to extract 24 speech feature coefficients from segments of 20 ms to 100 ms. The LPC coefficients were converted into cepstral coefficients before being fed into a Multi-layer Perceptron with one hidden layer for training and testing. The Multi-layer Perceptron was able to recognize all of the speech sounds. © 2008 Springer-Verlag.
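
A minimal sketch of the pipeline described in the abstract, assuming a Python environment with NumPy and scikit-learn. The 20 kHz sampling rate, 24 coefficients per segment, and single-hidden-layer MLP come from the abstract; the autocorrelation/Levinson-Durbin LPC estimate, the standard LPC-to-cepstrum recursion, the Hamming window, and the hidden-layer size used here are assumptions, not details taken from the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

FS = 20_000          # sampling rate (Hz), as stated in the abstract
LPC_ORDER = 24       # 24 coefficients per segment, as stated in the abstract
SEG_MS = 20          # segment length in ms (abstract uses 20-100 ms)


def lpc(frame, order):
    """LPC polynomial [1, a1, ..., ap] via autocorrelation + Levinson-Durbin."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0] + 1e-12
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err
        a[1:i + 1] = a[1:i + 1] + k * a[i - 1::-1]
        err *= (1.0 - k * k)
    return a


def lpc_to_cepstrum(a, n_ceps):
    """LPC-cepstral coefficients via the standard recursion."""
    p = len(a) - 1
    alpha = -a[1:]                      # predictor coefficients
    c = np.zeros(n_ceps)
    for m in range(1, n_ceps + 1):
        cm = alpha[m - 1] if m <= p else 0.0
        for k in range(max(1, m - p), m):
            cm += (k / m) * c[k - 1] * alpha[m - k - 1]
        c[m - 1] = cm
    return c


def segment_features(signal, fs=FS, seg_ms=SEG_MS, order=LPC_ORDER):
    """One cepstral feature vector per fixed-length segment of the waveform."""
    seg_len = int(fs * seg_ms / 1000)
    window = np.hamming(seg_len)
    feats = []
    for start in range(0, len(signal) - seg_len + 1, seg_len):
        frame = signal[start:start + seg_len] * window
        feats.append(lpc_to_cepstrum(lpc(frame, order), order))
    return np.array(feats)


# Hypothetical usage with placeholder signals standing in for the child's
# recordings of the six Malay vowels. One hidden layer as in the paper;
# its size (20 units) is an assumption.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X, y = [], []
    for label in range(6):                       # six vowel classes
        for _ in range(10):                      # placeholder recordings
            wav = rng.standard_normal(FS // 10)  # stand-in for real speech
            for vec in segment_features(wav):
                X.append(vec)
                y.append(label)
    clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
    clf.fit(np.array(X), np.array(y))
    print("training accuracy:", clf.score(np.array(X), np.array(y)))
```

With real recordings, the placeholder signals would be replaced by the child's vowel utterances, and recognition would be evaluated on held-out tokens rather than on the training data.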

Citation (APA)

Ting, H. N., & Mark, K. M. (2008). Speaker-dependent Malay vowel recognition for a child with articulation disorder using multi-layer perceptron. In IFMBE Proceedings (Vol. 21 IFMBE, pp. 238–241). Springer Verlag. https://doi.org/10.1007/978-3-540-69139-6_62
