Better Sign Language Translation with STMC-Transformer


Abstract

Sign Language Translation (SLT) first uses a Sign Language Recognition (SLR) system to extract sign language glosses from videos. A translation system then generates spoken language translations from the sign language glosses. This paper focuses on the translation system and introduces the STMC-Transformer, which improves on the current state of the art by over 5 and 7 BLEU on gloss-to-text and video-to-text translation, respectively, on the PHOENIX-Weather 2014T dataset. On the ASLG-PC12 corpus, we report an increase of over 16 BLEU. We also demonstrate a problem with current methods that rely on gloss supervision: the video-to-text translation of our STMC-Transformer outperforms translation of ground truth (GT) glosses. This contradicts previous claims that GT gloss translation acts as an upper bound for SLT performance and reveals that glosses are an inefficient representation of sign language. For future SLT research, we therefore suggest end-to-end training of the recognition and translation models, or using a different sign language annotation scheme.
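The two-stage pipeline in the abstract can be sketched as follows. This is an illustrative toy only: both stages are stubbed (the function names and the frame/lexicon formats are hypothetical), whereas the paper's actual system uses an STMC network for recognition and a Transformer for gloss-to-text translation.

```python
# Toy sketch of the two-stage SLT pipeline: video -> glosses -> text.
# Both stages are stubs; a real system would use an STMC network for
# stage 1 and a trained Transformer for stage 2.

def recognize_glosses(video_frames):
    """Stage 1 (SLR): map video frames to a gloss sequence (stubbed)."""
    # Hypothetical input format: pretend each frame carries its gloss label.
    return [frame["gloss"] for frame in video_frames]

def translate_glosses(glosses, lexicon):
    """Stage 2: map glosses to a spoken-language sentence (stubbed lookup)."""
    return " ".join(lexicon.get(g, g.lower()) for g in glosses)

def sign_language_translation(video_frames, lexicon):
    """Full pipeline: recognition followed by translation."""
    return translate_glosses(recognize_glosses(video_frames), lexicon)

# Toy example loosely styled after German Sign Language gloss conventions.
frames = [{"gloss": "MORGEN"}, {"gloss": "REGEN"}]
lexicon = {"MORGEN": "morgen", "REGEN": "regnet es"}
print(sign_language_translation(frames, lexicon))  # -> morgen regnet es
```

The paper's key finding is that errors introduced by stage 1 do not bound final quality the way this cascaded view suggests, motivating end-to-end training instead.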

Citation (APA)

Yin, K., & Read, J. (2020). Better Sign Language Translation with STMC-Transformer. In COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Conference (pp. 5975–5989). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.coling-main.525
