SCOPE: Sign Language Contextual Processing with Embedding from LLMs

Citations: 5
Readers: 15 (Mendeley)

Abstract

Sign languages, used by around 70 million Deaf individuals globally, are visual languages that convey visual and contextual information. Current methods in vision-based sign language recognition (SLR) and translation (SLT) struggle with dialogue scenes due to limited dataset diversity and the neglect of contextually relevant information. To address these challenges, we introduce SCOPE (Sign language COntextual Processing with Embedding from LLMs), a novel context-aware vision-based SLR and SLT framework. For SLR, we utilize dialogue contexts through a multi-modal encoder to enhance gloss-level recognition. For subsequent SLT, we further fine-tune a Large Language Model (LLM) by incorporating prior conversational context. We also contribute a new sign language dataset that contains 72 hours of Chinese sign language videos in contextual dialogues across various scenarios. Experimental results demonstrate that our SCOPE framework achieves state-of-the-art performance on multiple datasets, including Phoenix-2014T, CSL-Daily, and our SCOPE dataset. Moreover, surveys conducted with participants from the Deaf community further validate the robustness and effectiveness of our approach in real-world applications.

Citation (APA)

Liu, Y., Zhang, W., Ren, S., Huang, C., Yu, J., & Xu, L. (2025). SCOPE: Sign Language Contextual Processing with Embedding from LLMs. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 39, pp. 5739–5747). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v39i6.32612
