Raman spectroscopic deep learning with signal aggregated representations for enhanced cell phenotype and signature identification

Abstract

Feature representation is critical for data learning, particularly for spectroscopic data. Machine learning (ML) and deep learning (DL) models learn Raman spectra for rapid, nondestructive, and label-free cell phenotype identification, facilitating diagnostic, therapeutic, forensic, and microbiological applications. However, these models are challenged by high-dimensional, unordered, and low-sample spectroscopic data. Here, we introduce novel 2D image-like dual signal and component aggregated representations, constructed by restructuring Raman spectra and their principal components, which enable spectroscopic DL for enhanced cell phenotype and signature identification. The new ConvNet models, DSCARNets, significantly outperformed the state-of-the-art (SOTA) ML and DL models on six benchmark datasets, mostly with >2% improvement over the SOTA performance of 85–97% accuracies. DSCARNets also performed well on four additional datasets against SOTA models with extremely high performance (>98%), and on two datasets without a published supervised phenotype classification model. Explainable DSCARNets identified Raman signatures consistent with experimental indications.
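The core idea of the abstract is to restructure a 1D Raman spectrum into a 2D image-like array so that a ConvNet can process it. The sketch below illustrates that restructuring in a minimal way, by zero-padding the intensity vector and folding it into a square grid; the paper's actual dual signal and component aggregation is more involved, and the function name and padding scheme here are illustrative assumptions, not the published method.

```python
import numpy as np

def spectrum_to_image(spectrum, size=32):
    """Fold a 1D Raman intensity vector into a size x size 2D array.

    Hypothetical illustration: zero-pad (or truncate) the spectrum to
    size*size points, then reshape it into a single-channel "image"
    that a ConvNet could consume. The published DSCARNet representation
    additionally aggregates principal components, which is omitted here.
    """
    spectrum = np.asarray(spectrum, dtype=float)
    n = size * size
    padded = np.zeros(n)
    k = min(len(spectrum), n)
    padded[:k] = spectrum[:k]
    return padded.reshape(size, size)

# Example: a synthetic 1,000-point spectrum becomes a 32x32 array
# (the last 24 grid cells remain zero-padded).
img = spectrum_to_image(np.random.rand(1000), size=32)
print(img.shape)  # (32, 32)
```

In practice, the resulting 2D array would be stacked with a second channel derived from principal components before being passed to the ConvNet, per the dual-representation idea described in the abstract.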

Citation (APA)
Lu, S., Huang, Y., Shen, W. X., Cao, Y. L., Cai, M., Chen, Y., … Chen, Y. Z. (2024). Raman spectroscopic deep learning with signal aggregated representations for enhanced cell phenotype and signature identification. PNAS Nexus, 3(8). https://doi.org/10.1093/pnasnexus/pgae268
