Unified Model for Code-Switching Speech Recognition and Language Identification Based on Concatenated Tokenizer

7 citations · 19 readers on Mendeley

Abstract

Code-Switching (CS) multilingual Automatic Speech Recognition (ASR) models can transcribe speech containing two or more alternating languages during a conversation. This paper proposes (1) a new method for creating code-switching ASR datasets from purely monolingual data sources, and (2) a novel Concatenated Tokenizer that enables ASR models to generate language ID for each emitted text token while reusing existing monolingual tokenizers. The efficacy of these approaches for building CS ASR models is demonstrated for two language pairs, English-Hindi and English-Spanish, where we achieve new state-of-the-art results on the Miami Bangor CS evaluation corpus. In addition to competitive ASR performance, the proposed Concatenated Tokenizer models are highly effective for spoken language identification, achieving 98%+ accuracy on the out-of-distribution FLEURS dataset.
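The core idea of the Concatenated Tokenizer — joining existing monolingual vocabularies into one token space so that each emitted token ID also identifies its language — can be illustrated with a minimal sketch. The toy vocabularies, the English-Hindi pairing, and the function names below are assumptions for illustration, not the paper's actual implementation:

```python
# Sketch of a concatenated tokenizer: two monolingual vocabularies are
# joined into a single ID space, with the second vocabulary's IDs offset
# so every ID unambiguously maps back to its source language.
# (Toy vocabularies; hypothetical helper names.)

EN_VOCAB = ["<unk>", "hello", "world"]      # stand-in English tokenizer vocab
HI_VOCAB = ["<unk>", "namaste", "duniya"]   # stand-in Hindi tokenizer vocab

OFFSET = len(EN_VOCAB)  # Hindi IDs start right after the English block

def encode(token: str, lang: str) -> int:
    """Map a (token, language) pair into the concatenated ID space."""
    if lang == "en":
        return EN_VOCAB.index(token) if token in EN_VOCAB else 0
    return OFFSET + (HI_VOCAB.index(token) if token in HI_VOCAB else 0)

def decode(token_id: int) -> tuple[str, str]:
    """Recover both the text token and its language ID from a single ID."""
    if token_id < OFFSET:
        return EN_VOCAB[token_id], "en"
    return HI_VOCAB[token_id - OFFSET], "hi"
```

Because decoding an ID yields both the token and its language, an ASR model over this vocabulary produces a per-token language ID for free, which is what makes the same model usable for spoken language identification.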

Citation (APA)

Dhawan, K., Rekesh, D., & Ginsburg, B. (2023). Unified Model for Code-Switching Speech Recognition and Language Identification Based on Concatenated Tokenizer. In CALCS 2023 - Computational Approaches to Linguistic Code-Switching, Proceedings of the Workshop (pp. 74–82). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.calcs-1.7
