Unsupervised Lexicon Discovery from Acoustic Input

  • Lee, C.
  • O’Donnell, T. J.
  • Glass, J.

Abstract

We present a model of unsupervised phonological lexicon discovery—the problem of simultaneously learning phoneme-like and word-like units from acoustic input. Our model builds on earlier models of unsupervised phone-like unit discovery from acoustic data (Lee and Glass, 2012), and unsupervised symbolic lexicon discovery using the Adaptor Grammar framework (Johnson et al., 2006), integrating these earlier approaches using a probabilistic model of phonological variation. We show that the model is competitive with state-of-the-art spoken term discovery systems, and present analyses exploring the model’s behavior and the kinds of linguistic structures it learns.
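The abstract describes a layered generative story: word-like units drawn from a lexicon in the spirit of Adaptor Grammars, passed through a probabilistic model of phonological variation, and realized acoustically by phone-like units. The sketch below is an illustrative toy sampler of that layering, not the authors' implementation; the inventory, concentration parameter, noise rates, and Gaussian emission model are all assumptions chosen for readability.

```python
# Illustrative sketch (assumed structure, not the paper's model):
# cached word forms -> phonological-variation channel -> acoustic emissions.
import random

random.seed(0)

PHONES = ["a", "b", "d", "i", "k", "s", "t"]   # toy phone-like inventory (assumed)
ALPHA = 1.0                                     # CRP concentration (assumed)
SUB_PROB, DEL_PROB = 0.05, 0.02                 # channel noise rates (assumed)

lexicon_cache = []                              # previously generated word forms

def sample_word():
    """Chinese-restaurant-style reuse of cached word forms (Adaptor-Grammar flavor)."""
    n = len(lexicon_cache)
    if n and random.random() < n / (n + ALPHA):
        return random.choice(lexicon_cache)     # reuse an existing word form
    form = tuple(random.choice(PHONES) for _ in range(random.randint(2, 4)))
    lexicon_cache.append(form)                  # create and cache a new form
    return form

def phonological_channel(phones):
    """Noisy-channel phonological variation: random substitutions and deletions."""
    out = []
    for p in phones:
        if random.random() < DEL_PROB:
            continue                            # deletion
        if random.random() < SUB_PROB:
            p = random.choice(PHONES)           # substitution
        out.append(p)
    return out

def emit_acoustics(phone):
    """Stand-in for phone-like acoustic emission: one noisy 1-D feature per phone."""
    mean = PHONES.index(phone)                  # each unit gets its own mean
    return mean + random.gauss(0.0, 0.3)

# Generate a toy utterance: words -> surface phone string -> acoustic frames.
utterance_words = [sample_word() for _ in range(3)]
surface = [q for w in utterance_words for q in phonological_channel(w)]
frames = [emit_acoustics(q) for q in surface]
print("words:  ", utterance_words)
print("surface:", surface)
print("frames: ", [round(f, 2) for f in frames])
```

Inference in the paper runs in the opposite direction, jointly recovering the phone-like units, the variation model, and the lexicon from acoustics alone; the sketch only illustrates how the three layers compose.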

Citation (APA)

Lee, C., O’Donnell, T. J., & Glass, J. (2015). Unsupervised Lexicon Discovery from Acoustic Input. Transactions of the Association for Computational Linguistics, 3, 389–403. https://doi.org/10.1162/tacl_a_00146
