Hydranet: Data augmentation for regression neural networks

Abstract

Deep learning techniques are often criticized for depending heavily on large quantities of labeled data. This problem is even more challenging in medical image analysis, where annotator expertise is scarce. We propose a novel data-augmentation method to regularize neural network regressors that learn from a single global label per image. The principle of the method is to create new samples by recombining existing ones. We demonstrate the performance of our algorithm on two tasks: estimating the number of enlarged perivascular spaces in the basal ganglia, and estimating white matter hyperintensity volume. We show that the proposed method improves performance over more basic data augmentation. The proposed method reached an intraclass correlation coefficient (ICC) between ground truth and network predictions of 0.73 on the first task and 0.84 on the second, using only 25 to 30 scans with a single global label per scan for training. With the same number of training scans, more conventional data augmentation methods reached ICCs of only 0.68 on the first task and 0.79 on the second.
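The abstract does not spell out the recombination procedure, but for regression targets that are roughly additive over image regions (lesion counts, lesion volumes), one plausible reading is: swap a region between two scans and mix the labels in the same proportion. The sketch below illustrates this idea with NumPy; the function name, the half-and-half split, and the 50/50 label mixing are all assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def recombine(scan_a, label_a, scan_b, label_b, seed=None):
    """Hypothetical recombination-style augmentation for regression with a
    single global label per image (e.g. a lesion count or volume).

    Replaces one half of scan_a with the corresponding half of scan_b along
    a randomly chosen axis, and mixes the labels 50/50. This assumes the
    label is roughly additive over image regions -- an assumption made for
    this sketch, not a claim about the paper's method.
    """
    rng = np.random.default_rng(seed)
    axis = int(rng.integers(scan_a.ndim))   # random axis to split on
    mid = scan_a.shape[axis] // 2

    # Build a slice selecting the first half along the chosen axis.
    half = [slice(None)] * scan_a.ndim
    half[axis] = slice(0, mid)
    half = tuple(half)

    new_scan = scan_a.copy()
    new_scan[half] = scan_b[half]

    # Each half contributes roughly half of its source scan's label.
    new_label = 0.5 * label_a + 0.5 * label_b
    return new_scan, new_label
```

Applied repeatedly to random pairs of the 25 to 30 labeled scans, a scheme like this would multiply the number of distinct (scan, label) training samples without requiring any new annotations, which is the stated goal of the method.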

Citation (APA)

Dubost, F., Bortsova, G., Adams, H., Ikram, M. A., Niessen, W., Vernooij, M., & de Bruijne, M. (2019). Hydranet: Data augmentation for regression neural networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11767 LNCS, pp. 438–446). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-32251-9_48
