Interleaved text/image deep mining on a large-scale radiology image database

Abstract

Exploiting and effectively learning from very large-scale (>100K patients) medical image databases has remained a major challenge despite noteworthy progress in computer vision. This chapter presents an interleaved text/image deep learning system that extracts and mines the semantic interactions between radiology images and their reports, drawn from a national research hospital’s Picture Archiving and Communication System. The chapter introduces a method that performs unsupervised learning (e.g., latent Dirichlet allocation and feedforward/recurrent neural network language models) on document- and sentence-level text to generate semantic labels, and trains supervised deep ConvNets with categorization and cross-entropy loss functions to map images into these label spaces. Keywords can then be predicted for images in a retrieval manner, and the presence/absence of several frequent disease types can be predicted with probabilities. The resulting large-scale datasets of extracted key images, together with their categories, embedded vector labels, and sentence descriptions, can be harnessed to alleviate deep learning’s “data-hungry” challenge in the medical domain.
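As a rough illustration of the two-stage pipeline the abstract describes, the sketch below derives weak topic labels from report text with latent Dirichlet allocation, then trains a small convolutional network with a cross-entropy loss to map images to those labels. All names and data here (the toy reports, placeholder images, and the tiny network) are hypothetical stand-ins, not the chapter’s actual models or hospital data.

```python
# Minimal sketch, assuming `reports` is a list of radiology report strings
# and `images` a tensor of matched key images (both placeholders here).
import torch
import torch.nn as nn
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# --- Stage 1: unsupervised topic labels from report text ---
reports = ["mild cardiomegaly, no pleural effusion",          # toy examples
           "right lower lobe opacity suspicious for pneumonia",
           "degenerative changes of the lumbar spine"]
counts = CountVectorizer(stop_words="english").fit_transform(reports)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)             # (n_docs, n_topics)
labels = torch.tensor(doc_topics.argmax(axis=1))   # dominant topic per report

# --- Stage 2: supervised ConvNet mapping images to the topic labels ---
images = torch.randn(len(reports), 1, 64, 64)      # placeholder key images
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(16, 2))                # 2 = number of topics
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()                  # categorization loss

for epoch in range(5):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)        # logits vs. weak labels
    loss.backward()
    optimizer.step()
```

The key design point this sketch captures is that the dominant topic of each report stands in for an image-level category, which is what makes standard cross-entropy training of the image model possible without manual annotation.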

Citation (APA)

Shin, H. C., Lu, L., Kim, L., Seff, A., Yao, J., & Summers, R. (2017). Interleaved text/image deep mining on a large-scale radiology image database. In Advances in Computer Vision and Pattern Recognition (pp. 305–321). Springer London. https://doi.org/10.1007/978-3-319-42999-1_17
