TransMCS: A hybrid CNN-transformer autoencoder for end-to-end multi-modal medical signals compressive sensing

Abstract

The rapid proliferation of Internet of Medical Things (IoMT) devices has generated unprecedented volumes of multi-modal medical data, presenting significant challenges in efficient signal processing and transmission. Traditional compression approaches either process each modality independently, ignoring valuable cross-modal relationships, or fail to capture the complex temporal and channel dependencies within individual signals. We present TransMCS, a novel hybrid CNN-Transformer architecture for multi-modal medical signal compressive sensing with five key components: (1) modality-specific representation learning through parallel pathways capturing temporal and channel-wise dependencies; (2) modality-specific compression; (3) cross-attention for adaptive modal fusion; (4) modality-specific decompression; and (5) targeted intermediate reconstruction refinement. Extensive validation on UCI-HAR and Ninapro DB7 datasets demonstrates that TransMCS outperforms state-of-the-art methods with up to 8.31% improvement in R² at high compression ratios. Ablation studies confirm the effectiveness of our architectural design choices for multi-modal medical signal compression.
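The five-stage pipeline named in the abstract can be sketched as a data-flow skeleton. The sketch below is a minimal NumPy illustration of the shapes involved, assuming two hypothetical input modalities (e.g. inertial and sEMG channels) and random stand-in weights; the layer widths, function names, and the linear/softmax stand-ins for the CNN and Transformer pathways are all assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative sketch of a TransMCS-style data flow; all shapes, names, and
# random weights are assumptions inferred from the abstract, not the paper.
rng = np.random.default_rng(0)

T, C1, C2 = 128, 9, 12   # time steps; channels for two hypothetical modalities
d, m = 32, 16            # feature width; compressed length (CR = m/T = 1/8)

def encode(x, d):
    """(1) Modality-specific representation: project channels to d features
    (stand-in for the CNN/Transformer pathway capturing dependencies)."""
    W = rng.standard_normal((x.shape[1], d)) / np.sqrt(x.shape[1])
    return np.tanh(x @ W)                      # (T, d)

def compress(h, m):
    """(2) Modality-specific compression along time: (T, d) -> (m, d)."""
    Phi = rng.standard_normal((m, h.shape[0])) / np.sqrt(h.shape[0])
    return Phi @ h

def cross_attention(q_feats, kv_feats):
    """(3) Adaptive modal fusion: one modality's features attend to the other's."""
    scores = q_feats @ kv_feats.T / np.sqrt(q_feats.shape[1])
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    return q_feats + attn @ kv_feats           # residual fusion, (m, d)

def decompress(z, T):
    """(4) Modality-specific decompression back to T steps: (m, d) -> (T, d).
    Stage (5), reconstruction refinement, would operate on this output."""
    Psi = rng.standard_normal((T, z.shape[0])) / np.sqrt(z.shape[0])
    return Psi @ z

x1 = rng.standard_normal((T, C1))              # e.g. accelerometer channels
x2 = rng.standard_normal((T, C2))              # e.g. sEMG channels

z1, z2 = compress(encode(x1, d), m), compress(encode(x2, d), m)
f1, f2 = cross_attention(z1, z2), cross_attention(z2, z1)
r1, r2 = decompress(f1, T), decompress(f2, T)

print(z1.shape, f1.shape, r1.shape)            # (16, 32) (16, 32) (128, 32)
```

The point of the sketch is only the shape discipline: each modality keeps its own encoder/compressor/decompressor, and only the compressed representations are exchanged through cross-attention, which is what lets fusion stay cheap at high compression ratios.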

Cite

APA

Zhang, Y., Xiao, X., & Guo, J. (2025). TransMCS: A hybrid CNN-transformer autoencoder for end-to-end multi-modal medical signals compressive sensing. Theoretical Computer Science, 1051. https://doi.org/10.1016/j.tcs.2025.115409
