Deeper and wider neural networks are known to achieve better accuracy, but limited GPU memory makes it difficult to continue this trend of increasing model size. One promising solution is to support swapping between GPU and CPU memory. However, existing work on swapping handles only certain models and does not achieve satisfactory performance. Deep learning computation is commonly expressed as a dataflow graph, which can be analyzed to improve swapping. We propose SwapAdvisor, which performs joint optimization along three dimensions based on a given dataflow graph: operator scheduling, memory allocation, and swap decisions. SwapAdvisor explores the vast search space using a custom-designed genetic algorithm. Evaluations on a variety of large models show that SwapAdvisor can train models up to 12 times the GPU memory limit while achieving 53-99% of the throughput of a hypothetical baseline with infinite GPU memory.
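The abstract describes the search as a custom genetic algorithm over a joint space of operator schedules, memory allocations, and swap decisions. Below is a minimal, hypothetical sketch of that style of search; the candidate encoding, the toy cost model, and all names and constants (NUM_OPS, GPU_MEM, SWAP_COST, etc.) are illustrative assumptions, not SwapAdvisor's actual design, which simulates execution of the full dataflow graph to score candidates.

```python
# Hypothetical genetic-algorithm sketch, NOT SwapAdvisor's implementation.
# A candidate bundles the three jointly optimized dimensions from the
# abstract: scheduling priorities, an allocation order, and swap flags.
import random

random.seed(0)

NUM_OPS = 8                            # operators in a toy graph (assumed)
GPU_MEM = 4                            # GPU memory budget, arbitrary units
MEM_NEED = [1, 2, 1, 3, 2, 1, 2, 1]    # per-operator output size (assumed)
SWAP_COST = 2                          # throughput penalty per swap (assumed)

def random_candidate():
    return {
        "priority": [random.random() for _ in range(NUM_OPS)],  # scheduling
        "alloc":    random.sample(range(NUM_OPS), NUM_OPS),     # allocation
        "swap":     [random.random() < 0.5 for _ in range(NUM_OPS)],
    }

def fitness(cand):
    """Toy cost model: tensors kept resident must fit in GPU_MEM, and each
    swapped tensor costs throughput. A real system would instead simulate
    execution time under the schedule and allocation as well."""
    resident = sum(m for m, s in zip(MEM_NEED, cand["swap"]) if not s)
    overflow = max(0, resident - GPU_MEM)
    return -(overflow * 100 + sum(cand["swap"]) * SWAP_COST)

def crossover(a, b):
    """Mix the dimensions independently between two parents."""
    child = random_candidate()
    for key in ("priority", "swap"):
        child[key] = [random.choice(pair) for pair in zip(a[key], b[key])]
    child["alloc"] = random.choice([a["alloc"], b["alloc"]])[:]
    return child

def mutate(cand, rate=0.1):
    for i in range(NUM_OPS):
        if random.random() < rate:
            cand["swap"][i] = not cand["swap"][i]
            cand["priority"][i] = random.random()
    return cand

# Standard generational loop: keep elites, refill with mutated offspring.
pop = [random_candidate() for _ in range(50)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    elites = pop[:10]
    pop = elites + [mutate(crossover(random.choice(elites),
                                     random.choice(elites)))
                    for _ in range(40)]

best = max(pop, key=fitness)
print("best fitness:", fitness(best), "| swapped tensors:", sum(best["swap"]))
```

The joint encoding is the point of the sketch: because schedule, allocation, and swap flags live in one candidate, crossover and mutation can trade off all three at once rather than optimizing each dimension in isolation.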
CITATION
Huang, C. C., Jin, G., & Li, J. (2020). SwapAdvisor: Pushing deep learning beyond the GPU memory limit via smart swapping. In International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS) (pp. 1341–1355). Association for Computing Machinery. https://doi.org/10.1145/3373376.3378530