Learning task-specific and shared representations in medical imaging

Abstract

The performance of multi-task learning hinges on the design of feature sharing between tasks, a process that is combinatorial in the network depth and task count. Hand-crafting an architecture based on human intuitions about task relationships is therefore suboptimal. In this paper, we present a probabilistic approach to learning task-specific and shared representations in Convolutional Neural Networks (CNNs) for multi-task learning of semantic tasks. We introduce Stochastic Filter Groups, a mechanism that groups convolutional kernels into task-specific and shared groups to learn an optimal kernel allocation, thereby facilitating optimal shared and task-specific representations. We employ variational inference to learn the posterior distribution over the possible groupings of kernels together with the CNN weights. Experiments on MRI-based prostate radiotherapy organ segmentation and CT synthesis demonstrate that the proposed method learns task allocations that are in line with human-optimised networks while improving performance over competing baselines.
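
To make the core idea concrete, the sketch below shows one plausible way a convolutional layer could softly assign each output filter to a task-specific or shared group via a learned categorical distribution, relaxed with Gumbel-softmax so the assignment is trainable by gradient descent. This is an illustrative assumption-laden sketch of the general mechanism, not the authors' implementation; the class name, three-way parameterisation, and feature-routing scheme are all hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticFilterGroupConv(nn.Module):
    """Sketch of a conv layer whose filters are softly assigned to one of three
    groups (task-1-specific, shared, task-2-specific) through a relaxed
    categorical distribution over group membership (hypothetical design)."""

    def __init__(self, in_ch, out_ch, k=3, tau=1.0):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
        # One 3-way assignment logit vector per output filter (assumed parameterisation).
        self.group_logits = nn.Parameter(torch.zeros(out_ch, 3))
        self.tau = tau  # Gumbel-softmax temperature

    def forward(self, x):
        feats = self.conv(x)  # (B, out_ch, H, W)
        # Sample a relaxed one-hot group assignment for each filter.
        assign = F.gumbel_softmax(self.group_logits, tau=self.tau, hard=False)  # (out_ch, 3)
        w = assign.t().unsqueeze(0).unsqueeze(-1).unsqueeze(-1)  # (1, 3, out_ch, 1, 1)
        # Route features: each task sees its own filters plus the shared group.
        task1 = feats * (w[:, 0] + w[:, 1])
        task2 = feats * (w[:, 1] + w[:, 2])
        return task1, task2

# Example usage (hypothetical shapes): two task-specific feature maps from one layer.
layer = StochasticFilterGroupConv(in_ch=1, out_ch=16)
seg_feats, synth_feats = layer(torch.randn(2, 1, 64, 64))
```

In the paper itself the grouping probabilities are learned with variational inference over both kernel assignments and CNN weights; the sketch above only illustrates the filter-grouping and routing idea in isolation.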

Citation (APA)

Bragman, F. J. S., Tanno, R., Ourselin, S., Alexander, D. C., & Cardoso, M. J. (2019). Learning task-specific and shared representations in medical imaging. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11767 LNCS, pp. 374–383). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-32251-9_41
