Information retrieval and spoken-term detection from audio such as broadcast news, telephone conversations, conference calls, and meetings are of great interest to the academic, government, and business communities. Motivated by the requirement for high-quality indexes, this study explores the effect of using both word and sub-word information to find in-vocabulary and out-of-vocabulary (OOV) query terms. It also explores the trade-off between search accuracy and the speed of audio transcription. We present a novel, vocabulary-independent, hybrid LVCSR approach to audio indexing and search, and we show that using phonetic confusions derived from posterior probabilities estimated by a neural network in the retrieval of OOV queries helps reduce misses. These methods are evaluated on data sets from the 2006 NIST Spoken Term Detection (STD) task.
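To make the idea of confusion-based OOV retrieval concrete, here is a minimal sketch of one common realization: scoring an OOV query's phone sequence against decoded phone strings in the index with a confusion-weighted edit distance. This is an illustration, not the authors' implementation; the phone labels, confusion probabilities, and insertion/deletion cost below are invented for the example, whereas in the paper the confusions are derived from posterior probabilities estimated by a neural network.

```python
# Illustrative sketch (not the paper's implementation): fuzzy phonetic matching of an
# OOV query against phone strings from an audio index, using a confusion-weighted
# edit distance. All numbers and phone labels are assumptions for demonstration.

import math

# Hypothetical phone-confusion probabilities, keyed by (spoken_phone, decoded_phone),
# i.e. an assumed P(decoded | spoken). In the paper these would come from
# neural-network posterior probabilities.
CONFUSION = {
    ("p", "p"): 0.85, ("p", "b"): 0.10, ("p", "t"): 0.05,
    ("iy", "iy"): 0.90, ("iy", "ih"): 0.08, ("iy", "eh"): 0.02,
    ("k", "k"): 0.88, ("k", "g"): 0.09, ("k", "t"): 0.03,
}

INS_DEL_COST = 4.0  # flat insertion/deletion penalty (assumed, not from the paper)


def sub_cost(q_phone: str, d_phone: str) -> float:
    """Substitution cost = negative log confusion probability (floored for unseen pairs)."""
    p = CONFUSION.get((q_phone, d_phone), 1e-3)
    return -math.log(p)


def phonetic_distance(query: list[str], hypothesis: list[str]) -> float:
    """Confusion-weighted edit distance between a query phone string and a decoded
    phone string from the index (standard dynamic-programming alignment)."""
    n, m = len(query), len(hypothesis)
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * INS_DEL_COST
    for j in range(1, m + 1):
        dp[0][j] = j * INS_DEL_COST
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = min(
                dp[i - 1][j - 1] + sub_cost(query[i - 1], hypothesis[j - 1]),
                dp[i - 1][j] + INS_DEL_COST,  # query phone missing from hypothesis
                dp[i][j - 1] + INS_DEL_COST,  # extra phone inserted in hypothesis
            )
    return dp[n][m]


if __name__ == "__main__":
    # An OOV query pronounced /p iy k/ matched against two indexed phone strings:
    # the exact decoding scores 0, while a confusable decoding gets a small penalty.
    query = ["p", "iy", "k"]
    for hyp in (["p", "iy", "k"], ["b", "ih", "k"]):
        print(hyp, round(phonetic_distance(query, hyp), 3))
```

Under this kind of scoring, index entries whose decoded phones are plausible confusions of the query phones receive low distances and can be returned as hits, which is how allowing phonetic confusions reduces misses on OOV queries at the cost of potential false alarms.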
CITATION STYLE
Ramabhadran, B., Sethy, A., Mamou, J., Kingsbury, B., & Chaudhari, U. (2009). Fast decoding for open vocabulary spoken term detection. In NAACL-HLT 2009 - Human Language Technologies: 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Short Papers (pp. 277–280). Association for Computational Linguistics (ACL). https://doi.org/10.3115/1620853.1620930