Inefficiency of K-FAC for large batch size training

Abstract

Several recent works have claimed record times for ImageNet training. These results are achieved by using large batch sizes during training to leverage parallel resources and reduce wall-clock time per training epoch. However, these solutions often require massive hyper-parameter tuning, an important cost that is frequently ignored. In this work, we perform an extensive analysis of large batch size training for two popular methods: Stochastic Gradient Descent (SGD) and the Kronecker-Factored Approximate Curvature (K-FAC) method. We evaluate the performance of these methods in terms of both wall-clock time and aggregate computational cost, and we study their hyper-parameter sensitivity by performing more than 512 experiments per batch size for each method. We run experiments with multiple models on two datasets, CIFAR-10 and SVHN. The results show that beyond a critical batch size both K-FAC and SGD deviate significantly from ideal strong-scaling behaviour, and that, despite common belief, K-FAC does not exhibit improved large-batch scalability compared to SGD.
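For readers unfamiliar with the two methods compared above, the sketch below illustrates the per-layer preconditioning step that distinguishes K-FAC from plain SGD: the layer gradient is multiplied on both sides by damped inverses of the Kronecker factors of the layer's approximate Fisher block. This is a minimal NumPy illustration under standard K-FAC assumptions for a single fully-connected layer, not the authors' implementation; the function and variable names are ours, and details such as factor update frequency and momentum are omitted.

```python
import numpy as np

def sgd_layer_update(W, grad_W, lr):
    # Plain SGD: step along the raw mini-batch gradient.
    return W - lr * grad_W

def kfac_layer_update(W, grad_W, acts, pre_act_grads, lr, damping=1e-3):
    """Illustrative K-FAC step for one fully-connected layer (names are ours).

    W:             weight matrix, shape (out, in)
    grad_W:        mini-batch gradient of the loss w.r.t. W, shape (out, in)
    acts:          layer inputs for the mini-batch, shape (batch, in)
    pre_act_grads: back-propagated pre-activation gradients, shape (batch, out)
    """
    n = acts.shape[0]
    # Kronecker factors of the layer's approximate Fisher block: F ~= A (x) G.
    A = acts.T @ acts / n                      # input second moment, (in, in)
    G = pre_act_grads.T @ pre_act_grads / n    # gradient second moment, (out, out)
    # Damped inverses (Tikhonov-style regularization, as is standard in K-FAC).
    A_inv = np.linalg.inv(A + damping * np.eye(A.shape[0]))
    G_inv = np.linalg.inv(G + damping * np.eye(G.shape[0]))
    # Preconditioned (approximate natural-gradient) update.
    precond_grad = G_inv @ grad_W @ A_inv
    return W - lr * precond_grad
```

The factor inversions are what make each K-FAC iteration more expensive than an SGD iteration; a common hope is that larger batches amortize this per-iteration cost, which is precisely the scalability claim the abstract's experiments call into question.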

Citation (APA)

Ma, L., Montague, G., Ye, J., Yao, Z., Gholami, A., Keutzer, K., & Mahoney, M. W. (2020). Inefficiency of K-FAC for large batch size training. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 5053–5060). AAAI press. https://doi.org/10.1609/aaai.v34i04.5946
