A rotation and translation invariant method for 3D organ image classification using deep convolutional neural networks

Abstract

Three-dimensional (3D) medical image classification is useful in applications such as disease diagnosis and content-based medical image retrieval. It is a challenging task for several reasons. First, image intensity values differ vastly depending on the image modality. Second, intensity values within the same image modality may vary depending on the imaging machine, and artifacts may also be introduced in the imaging process. Third, processing 3D data requires high computational power. In recent years, significant research has been conducted in the field of 3D medical image classification. However, most of these methods make assumptions about patient orientation and imaging direction to simplify the problem and/or operate on the full 3D images. As such, they perform poorly when these assumptions are not met. In this paper, we propose a rotation and translation invariant method for classifying 3D organ images. To this end, we extract a representative two-dimensional (2D) slice along the plane of best symmetry from the 3D image. We then use this slice to represent the 3D image and use a 20-layer deep convolutional neural network (DCNN) to perform the classification task. We show experimentally, using multi-modal data, that our method is comparable to existing methods when the assumptions of patient orientation and viewing direction are met. Notably, it shows similarly high accuracy even when these assumptions are violated, where other methods fail. We also explore how this method can be used with other DCNN models as well as conventional classification approaches.
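
As a rough illustration of the pipeline described above (and not the authors' implementation), the sketch below extracts the 2D slice at an estimated plane of symmetry from a 3D volume and classifies it with a small 2D CNN. The symmetry search here is simplified to axis-aligned planes, whereas the paper searches over arbitrary plane orientations to achieve rotation and translation invariance, and the small network merely stands in for the 20-layer DCNN. Function and class names, and the normalized cross-correlation symmetry score, are illustrative assumptions.

```python
# Minimal sketch (assumed names; not the authors' code) of the described pipeline:
# pick the 2D slice at the plane of highest mirror symmetry, then classify it
# with a 2D CNN. The symmetry search is simplified to axis-aligned planes only.
import numpy as np
import torch
import torch.nn as nn


def best_symmetry_slice(volume: np.ndarray) -> np.ndarray:
    """Return the slice at the axis-aligned plane with the best mirror symmetry.

    volume: 3D array of shape (D, H, W). The paper's method considers arbitrary
    plane orientations; restricting to axis-aligned planes is a simplification.
    """
    best_score, best_slice = -np.inf, None
    for axis in range(3):
        vol = np.moveaxis(volume, axis, 0)      # candidate planes are normal to `axis`
        n = vol.shape[0]
        for i in range(1, n - 1):
            k = min(i, n - 1 - i)               # width of the comparable halves
            below = vol[i - k:i][::-1]          # slices below the plane, mirrored
            above = vol[i + 1:i + 1 + k]        # slices above the plane
            # normalized cross-correlation as a symmetry score
            a = below - below.mean()
            b = above - above.mean()
            denom = np.sqrt((a ** 2).sum() * (b ** 2).sum()) + 1e-8
            score = float((a * b).sum() / denom)
            if score > best_score:
                best_score, best_slice = score, vol[i]
    return best_slice


class SmallCNN(nn.Module):
    """Stand-in 2D classifier; the paper uses a 20-layer DCNN."""

    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))


# Usage on a synthetic 64x64x64 volume with 4 hypothetical organ classes.
volume = np.random.rand(64, 64, 64).astype(np.float32)
slice2d = best_symmetry_slice(volume)
x = torch.from_numpy(np.ascontiguousarray(slice2d)).unsqueeze(0).unsqueeze(0)  # (1, 1, H, W)
logits = SmallCNN(num_classes=4)(x)
print(logits.shape)  # torch.Size([1, 4])
```

In practice, the extracted slice would likely need resizing and modality-aware intensity normalization before classification, given the intensity variation across modalities noted in the abstract.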

Citation (APA)

Islam, K. T., Wijewickrema, S., & O’Leary, S. (2019). A rotation and translation invariant method for 3D organ image classification using deep convolutional neural networks. PeerJ Computer Science, 2019(3). https://doi.org/10.7717/peerj-cs.181
