LAMP: Large Deep Nets with Automated Model Parallelism for Image Segmentation


Abstract

Deep Learning (DL) models are becoming larger, because increasing model size can offer significant accuracy gains. Data parallelism and model parallelism are two well-known approaches to enable the training of large deep networks; however, data parallelism does not reduce the memory footprint per device. In this work, we introduce Large deep 3D ConvNets with Automated Model Parallelism (LAMP) and investigate the impact of both the input size and the size of deep 3D ConvNets on segmentation accuracy. Through automated model parallelism, it is feasible to train large deep 3D ConvNets with a large input patch, or even the whole image. Extensive experiments demonstrate that, facilitated by automated model parallelism, segmentation accuracy can be improved by increasing both model size and input context size, and that large inputs yield significant inference speedups compared with sliding-window inference over small patches. Code is available at https://monai.io/research/lamp-automated-model-parallelism.
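The core idea behind model parallelism, as opposed to data parallelism, is to place different parts of one network on different devices so that no single device has to hold the whole model. The sketch below illustrates one way an automated partitioner might split a sequential network: assign contiguous layers to devices so that each device carries a roughly equal share of the total cost (e.g. parameter or activation memory). This is a hypothetical illustration only; `partition_layers` and its cost-midpoint heuristic are not the LAMP/MONAI implementation.

```python
def partition_layers(layer_costs, num_devices):
    """Split a sequential model into contiguous per-device groups.

    layer_costs: per-layer cost estimates (e.g. parameter counts or
                 activation memory in bytes), in forward order.
    num_devices: number of devices to spread the layers across.

    Returns a list of `num_devices` lists of layer indices. Each layer
    is assigned to the device whose share of the total cost contains
    the midpoint of that layer's cost interval, which keeps the
    per-device totals approximately balanced.
    """
    total = float(sum(layer_costs))
    groups = [[] for _ in range(num_devices)]
    acc = 0.0
    for i, cost in enumerate(layer_costs):
        mid = acc + cost / 2.0          # midpoint of this layer's interval
        device = min(int(mid / total * num_devices), num_devices - 1)
        groups[device].append(i)
        acc += cost
    return groups


# Example: four equally sized layers split across two devices.
print(partition_layers([4, 4, 4, 4], 2))   # [[0, 1], [2, 3]]
```

In a real pipeline each group would then be moved to its device (e.g. `layer.to(f"cuda:{d}")` in PyTorch), with activations transferred between devices at group boundaries during the forward pass. Note that for very skewed cost profiles this simple heuristic can leave a device empty, so a production partitioner would rebalance.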

Citation (APA)

Zhu, W., Zhao, C., Li, W., Roth, H., Xu, Z., & Xu, D. (2020). LAMP: Large Deep Nets with Automated Model Parallelism for Image Segmentation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12264 LNCS, pp. 374–384). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-59719-1_37
