JoeyS2T: Minimalistic Speech-to-Text Modeling with JoeyNMT


Abstract

JoeyS2T is a JoeyNMT (Kreutzer et al., 2019) extension for speech-to-text tasks such as automatic speech recognition and end-to-end speech translation. It inherits the core philosophy of JoeyNMT, a minimalist NMT toolkit built on PyTorch, prioritizing simplicity and accessibility. JoeyS2T's workflow is self-contained, ranging from data pre-processing through model training and prediction to evaluation, and is seamlessly integrated into JoeyNMT's compact and simple code base. On top of JoeyNMT's state-of-the-art Transformer-based encoder-decoder architecture, JoeyS2T provides speech-oriented components such as convolutional layers, SpecAugment, CTC loss, and WER evaluation. Despite its simplicity compared to prior implementations, JoeyS2T performs competitively on English speech recognition and English-to-German speech translation benchmarks. The implementation is accompanied by a walk-through tutorial and available at https://github.com/may-/joeys2t.
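Among the speech-oriented components the abstract lists, WER (word error rate) evaluation is the simplest to illustrate: the word-level edit distance between hypothesis and reference, normalized by reference length. The sketch below is a generic, standalone illustration of the metric, not JoeyS2T's actual implementation.

```python
# Hedged sketch: word error rate (WER) for ASR evaluation.
# This is a generic illustration; JoeyS2T's internals may differ.

def wer(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic program over word sequences (insertions, deletions, substitutions).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)
```

For example, a single substituted word in a three-word reference yields a WER of 1/3.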

Citation (APA)

Ohta, M., Kreutzer, J., & Riezler, S. (2022). JoeyS2T: Minimalistic Speech-to-Text Modeling with JoeyNMT. In EMNLP 2022 - 2022 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Demonstrations Session (pp. 50–59). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.emnlp-demos.6
