Abstract
The rapid proliferation of Internet of Medical Things (IoMT) devices has generated unprecedented volumes of multi-modal medical data, posing significant challenges for efficient signal processing and transmission. Traditional compression approaches either process each modality independently, ignoring valuable cross-modal relationships, or fail to capture the complex temporal and channel dependencies within individual signals. We present TransMCS, a novel hybrid CNN-Transformer architecture for multi-modal medical signal compressive sensing with five key components: (1) modality-specific representation learning through parallel pathways capturing temporal and channel-wise dependencies; (2) modality-specific compression; (3) cross-attention for adaptive modal fusion; (4) modality-specific decompression; and (5) targeted intermediate reconstruction refinement. Extensive validation on the UCI-HAR and Ninapro DB7 datasets demonstrates that TransMCS outperforms state-of-the-art methods, with up to an 8.31% improvement in R² at high compression ratios. Ablation studies confirm the effectiveness of our architectural design choices for multi-modal medical signal compression.
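The cross-attention fusion step (component 3) lets one modality's compressed features attend to another's. The paper's exact formulation is not reproduced here; the following is a minimal NumPy sketch of generic scaled dot-product cross-attention, with hypothetical shapes (8 time steps, 16 feature dimensions) chosen for illustration only:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_feats, context_feats, d_k):
    """Fuse two modalities: queries come from one modality,
    keys/values from the other (scaled dot-product attention)."""
    scores = query_feats @ context_feats.T / np.sqrt(d_k)  # (T_q, T_c)
    weights = softmax(scores, axis=-1)                     # rows sum to 1
    return weights @ context_feats                         # (T_q, d)

# Hypothetical compressed features for two modalities (e.g., EMG and IMU)
rng = np.random.default_rng(0)
emg_feats = rng.normal(size=(8, 16))
imu_feats = rng.normal(size=(8, 16))
fused = cross_attention(emg_feats, imu_feats, d_k=16)  # EMG attends to IMU
```

In a full model the queries, keys, and values would pass through learned linear projections (and typically multiple heads); this sketch omits them to show only the attention-based fusion itself.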
Zhang, Y., Xiao, X., & Guo, J. (2025). TransMCS: A hybrid CNN-transformer autoencoder for end-to-end multi-modal medical signals compressive sensing. Theoretical Computer Science, 1051. https://doi.org/10.1016/j.tcs.2025.115409