Exploration of the Impact of Maximum Entropy in Recurrent Neural Network Language Models for Code-Switching Speech

Abstract

This paper presents our latest investigations of jointly trained maximum entropy and recurrent neural network language models for Code-Switching speech. First, we extensively explore the integration of part-of-speech tags and language-identifier information into recurrent neural network language models for Code-Switching. Second, we demonstrate the importance of the maximum entropy model with a range of experimental results. Finally, we propose adapting the recurrent neural network language model to different Code-Switching behaviors and using the adapted models to generate artificial Code-Switching text data.
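The paper itself does not include code, but the setup the abstract describes, an RNN language model whose input is factored into word, part-of-speech, and language-identifier features, jointly trained with a maximum-entropy component that connects input features directly to the output layer, can be illustrated with a short sketch. The PyTorch sketch below is a hypothetical illustration, not the authors' implementation: the class name `CodeSwitchRNNME`, all layer sizes, and the simple hashed-word stand-in for the maximum-entropy n-gram features are assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CodeSwitchRNNME(nn.Module):
    """Hypothetical sketch: an RNN LM over words factored with POS-tag and
    language-ID embeddings, plus a maximum-entropy-style direct connection
    (hashed input-word features added straight to the output logits)."""

    def __init__(self, vocab_size, pos_size, lid_size,
                 emb_dim=100, hidden_dim=200, me_hash_size=10000):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        self.pos_emb = nn.Embedding(pos_size, 20)
        self.lid_emb = nn.Embedding(lid_size, 10)
        self.rnn = nn.RNN(emb_dim + 30, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)
        # Maximum-entropy part: a table of direct weights from hashed
        # input features to next-word logits, trained jointly with the RNN.
        self.me_hash_size = me_hash_size
        self.me = nn.Embedding(me_hash_size, vocab_size)
        nn.init.zeros_(self.me.weight)

    def forward(self, words, pos, lid):
        # words / pos / lid: LongTensors of shape (batch, seq_len)
        x = torch.cat([self.word_emb(words),
                       self.pos_emb(pos),
                       self.lid_emb(lid)], dim=-1)
        h, _ = self.rnn(x)
        logits = self.out(h)
        # Direct connection: hash the current word id and add its row of
        # ME weights to the logits (a crude stand-in for the hashed
        # n-gram features used in RNNME-style models).
        me_idx = (words * 1000003) % self.me_hash_size
        return logits + self.me(me_idx)

# Toy usage with random data; all sizes are made up for the sketch.
model = CodeSwitchRNNME(vocab_size=5000, pos_size=40, lid_size=3)
words = torch.randint(0, 5000, (8, 20))
pos = torch.randint(0, 40, (8, 20))
lid = torch.randint(0, 3, (8, 20))
targets = torch.randint(0, 5000, (8, 20))
logits = model(words, pos, lid)
loss = F.cross_entropy(logits.reshape(-1, 5000), targets.reshape(-1))
loss.backward()
```

The design point the sketch tries to convey is that the maximum-entropy weights bypass the recurrent hidden state entirely, so frequent surface patterns can be captured by cheap direct connections while the RNN models longer-range context, including the POS and language-ID factors the paper investigates.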

Citation (APA)

Vu, N. T., & Schultz, T. (2014). Exploration of the Impact of Maximum Entropy in Recurrent Neural Network Language Models for Code-Switching Speech. In Proceedings of the First Workshop on Computational Approaches to Code Switching (EMNLP 2014) (pp. 34–41). Association for Computational Linguistics. https://doi.org/10.3115/v1/w14-3904
