3D-CNNs for deep binary descriptor learning in medical volume data

Abstract

Deep convolutional neural networks achieve impressive results in many computer vision tasks, not least because of their representation learning abilities. The translation of these findings to the medical domain with large volumetric data, e.g. CT scans with typically ≥ 10⁶ voxels, is an important area of research. In particular for medical image registration, a standard analysis task, the supervised learning of expressive regional representations based on local grey-value information is important for defining a similarity metric. By providing discriminant binary features, modern architectures can leverage special operations to compute Hamming distance-based similarity metrics. In this contribution we devise a 3D Convolutional Neural Network (CNN) that can efficiently extract binary descriptors for Hamming distance-based metrics. We adopt the recently introduced Binary Tree Architectures and train a model using paired data with known correspondences. We employ a triplet objective term and extend the hinge loss with additional penalties for non-binary entries. The learned descriptors are shown to outperform state-of-the-art hand-crafted features on challenging COPD 3D-CT datasets and demonstrate their robustness for retrieval tasks under compression factors of ≈ 2000.
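
The abstract describes a triplet objective with a hinge loss, extended by penalties for non-binary descriptor entries, and retrieval via Hamming distances. The sketch below illustrates one plausible form of such a loss in PyTorch; the function names, the margin, the weight lam, and the exact penalty term are illustrative assumptions and are not taken from the paper.

# Hedged sketch (PyTorch assumed): triplet hinge loss on real-valued 3D-CNN
# descriptors plus a quantization penalty that vanishes only for entries in {-1, +1}.
# Names, margin and the weight `lam` are illustrative assumptions, not the paper's values.
import torch
import torch.nn.functional as F

def binary_triplet_loss(anchor, positive, negative, margin=1.0, lam=0.1):
    # anchor/positive/negative: (B, D) descriptors of corresponding and non-corresponding patches
    d_pos = F.pairwise_distance(anchor, positive)      # distance of the matching pair
    d_neg = F.pairwise_distance(anchor, negative)      # distance of the non-matching pair
    triplet = F.relu(d_pos - d_neg + margin).mean()    # hinge: matches should be closer than non-matches
    descriptors = torch.cat([anchor, positive, negative], dim=0)
    binarization = ((descriptors.abs() - 1.0) ** 2).mean()  # penalty for non-binary entries
    return triplet + lam * binarization

def hamming_distance(a, b):
    # At test time the descriptors would be binarized, e.g. by thresholding at zero;
    # the Hamming distance then reduces to counting differing bits.
    return torch.count_nonzero((a > 0) != (b > 0), dim=-1)

With descriptors binarized in this way, the retrieval under the high compression factors mentioned in the abstract reduces to cheap XOR/popcount-style bit comparisons, as sketched in hamming_distance above.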

Citation (APA)

Blendowski, M., & Heinrich, M. P. (2018). 3D-CNNs for deep binary descriptor learning in medical volume data. In Informatik aktuell (Vol. 0, pp. 23–28). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-662-56537-7_19
