Learning lexicons from spoken utterances based on statistical model selection

Abstract

This paper proposes a method for the unsupervised learning of lexicons from pairs consisting of a spoken utterance and an object representing its meaning, under the condition that no prior linguistic knowledge other than acoustic models of Japanese phonemes is used. The main problems are the word segmentation of spoken utterances and the learning of the phoneme sequences of the words. To obtain a lexicon, a statistical model that represents the joint probability of an utterance and an object is learned based on the minimum description length (MDL) principle. The model consists of three parts: a word list in which each word is represented by a phoneme sequence, a word-bigram model, and a word-meaning model. By alternately learning these parts, phoneme-sequence units that are acoustically, grammatically, and semantically appropriate and that cover all utterances are acquired as words. Experimental results show that the model acquires the phoneme sequences of object words with approximately 83.6% accuracy.
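The MDL-based selection described above can be illustrated with a minimal two-part code-length criterion: the chosen lexicon minimizes the model description length plus the data description length. The sketch below is a simplified illustration, not the authors' exact formulation; the (k/2) log n parameter cost, the function names, and the candidate fields are assumptions introduced here for clarity.

```python
import math

def description_length(num_params: int, neg_log_likelihood: float,
                       num_samples: int) -> float:
    """Two-part MDL score: parameter cost plus data cost.

    Assumes the common (k/2) * log(n) model cost; the data cost is the
    negative log joint likelihood of the utterance-object pairs.
    """
    model_cost = 0.5 * num_params * math.log(num_samples)
    data_cost = neg_log_likelihood
    return model_cost + data_cost

def select_model(candidates):
    """Pick the candidate lexicon with the smallest description length.

    Each candidate is a dict with hypothetical keys:
      'params' - free parameters (word list + word bigram + word-meaning model)
      'nll'    - negative log joint likelihood of the training pairs
      'n'      - number of utterance-object pairs
    """
    return min(candidates,
               key=lambda c: description_length(c["params"], c["nll"], c["n"]))

if __name__ == "__main__":
    # Hypothetical comparison of two candidate lexicons.
    candidates = [
        {"name": "fine-grained lexicon", "params": 1200, "nll": 8500.0, "n": 300},
        {"name": "compact lexicon", "params": 400, "nll": 9100.0, "n": 300},
    ]
    print("Selected:", select_model(candidates)["name"])
```

In this simplified view, a lexicon with many short word units fits the utterances more closely (lower data cost) but pays a higher model cost, so the MDL criterion trades off the two, which mirrors the paper's goal of finding word units that are compact yet cover all utterances.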

Citation (APA)

Taguchi, R., Iwahashi, N., Funakoshi, K., Nakano, M., Nose, T., & Nitta, T. (2010). Learning lexicons from spoken utterances based on statistical model selection. Transactions of the Japanese Society for Artificial Intelligence, 25(4), 549–559. https://doi.org/10.1527/tjsai.25.549
