A Deep Learning-Based Model That Reduces Speed of Sound Aberrations for Improved in Vivo Photoacoustic Imaging

83 citations · 47 readers (Mendeley)

This article is free to access.

Abstract

Photoacoustic imaging (PAI) has attracted great attention as a medical imaging method. Typically, photoacoustic (PA) images are reconstructed via beamforming, but many factors still hinder beamforming techniques from reconstructing optimal images in terms of image resolution, imaging depth, or processing speed. Here, we demonstrate a novel deep learning PAI method that uses multiple speed of sound (SoS) inputs. With this method, we achieved SoS aberration mitigation, streak artifact removal, and temporal resolution improvement all at once in structural and functional in vivo PA images of healthy human limbs and melanoma patients. The presented method produces high-contrast PA images in vivo with reduced distortion, even in adverse conditions where the medium is heterogeneous and/or the data sampling is sparse. Thus, we believe that this new method can achieve high image quality with fast data acquisition and can contribute to the advance of clinical PAI.
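The abstract does not detail how the multiple SoS inputs are formed, but a common approach is to reconstruct the same channel data several times with delay-and-sum (DAS) beamforming, each time assuming a different speed of sound, and to stack the results as input channels for the network. The sketch below illustrates that idea; the function names, grid layout, and the candidate SoS values (1400/1480/1560 m/s) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def das_beamform(rf, x_elems, z_grid, x_grid, c, fs):
    """Delay-and-sum PA reconstruction assuming one speed of sound c.

    rf:      (n_elems, n_samples) channel data from a linear array
    x_elems: lateral element positions [m]
    fs:      sampling rate [Hz]
    """
    n_elems, n_samples = rf.shape
    img = np.zeros((len(z_grid), len(x_grid)))
    for iz, z in enumerate(z_grid):
        for ix, x in enumerate(x_grid):
            # One-way time of flight from pixel (x, z) to each element
            d = np.sqrt((x_elems - x) ** 2 + z ** 2)
            idx = np.round(d / c * fs).astype(int)
            valid = idx < n_samples  # drop delays past the recording
            img[iz, ix] = rf[np.arange(n_elems)[valid], idx[valid]].sum()
    return img

def multi_sos_stack(rf, x_elems, z_grid, x_grid, fs,
                    sos_list=(1400.0, 1480.0, 1560.0)):
    """Stack DAS images at several candidate SoS values as network input
    channels (shape: len(sos_list) x nz x nx)."""
    return np.stack([das_beamform(rf, x_elems, z_grid, x_grid, c, fs)
                     for c in sos_list])
```

Each channel of the stack is sharpest where its assumed SoS matches the true local value, so a network receiving all channels can, in principle, learn to combine them into one aberration-reduced image.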

Citation (APA)
Jeon, S., Choi, W., Park, B., & Kim, C. (2021). A Deep Learning-Based Model That Reduces Speed of Sound Aberrations for Improved in Vivo Photoacoustic Imaging. IEEE Transactions on Image Processing, 30, 8773–8784. https://doi.org/10.1109/TIP.2021.3120053
