Semantic Communications for Image-Based Sign Language Transmission



Abstract

Semantic information representation in image-based communication often employs feature vectors, which lack interpretability and pose challenges for human comprehension. This paper addresses this challenge by exploring the reconstruction of original images in the context of American Sign Language (ASL) transmission. The conventional method involves decoding feature vectors through neural networks, introducing inefficiencies and complexities. To overcome these challenges, a novel system model for image-based semantic communications is presented, which utilizes a variant of the quadrature amplitude modulation (QAM) scheme, named 24-QAM. This modulation scheme is derived from the original 32-QAM constellation by removing 8 peripheral symbols and is shown to attain superior error performance in ASL applications. Additionally, a semantic encoder based on a convolutional neural network (CNN) that effectively utilizes the ASL alphabet is presented. An original dataset is created by superimposing red-green-blue landmarks and keypoints on the captured images, thereby enhancing the representation of hand posture. Finally, the training, testing, and communication performance of the proposed system is quantified through numerical results that highlight the achievable gains and trigger insightful discussions.
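The abstract describes deriving a 24-QAM constellation from 32-QAM by removing 8 peripheral symbols. The exact removal rule is not given in the abstract; the sketch below assumes a standard 32-QAM cross constellation (a 6×6 odd-integer grid minus its 4 corners) and drops the 8 highest-energy points, which also lowers the average symbol energy:

```python
import numpy as np

def cross_32qam():
    # Standard 32-QAM cross: 6x6 grid of odd-integer coordinates
    # minus the 4 corner points -> 32 symbols
    axis = [-5, -3, -1, 1, 3, 5]
    pts = [complex(x, y) for x in axis for y in axis]
    corners = {complex(x, y) for x in (-5, 5) for y in (-5, 5)}
    return np.array([p for p in pts if p not in corners])

def derive_24qam(const32):
    # Assumed rule: drop the 8 highest-energy (most peripheral) symbols,
    # leaving a 24-symbol constellation with reduced average energy
    energy = np.abs(const32) ** 2
    keep = np.argsort(energy)[:-8]
    return const32[keep]

c32 = cross_32qam()
c24 = derive_24qam(c32)
print(len(c32), len(c24))  # 32 24
```

Under this assumption, the 8 removed points are those at energy 34 (e.g. ±5±3j), so the surviving 24 symbols have strictly lower peak and average energy than 32-QAM, consistent with the claimed error-performance gain.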

Citation (APA)

Kouvakis, V., Trevlakis, S. E., & Boulogeorgos, A. A. A. (2024). Semantic Communications for Image-Based Sign Language Transmission. IEEE Open Journal of the Communications Society, 5, 1088–1100. https://doi.org/10.1109/OJCOMS.2024.3360191
