A Hybrid Tensor-Expert-Data Parallelism Approach to Optimize Mixture-of-Experts Training


Abstract

Mixture-of-Experts (MoE) is a neural network architecture that adds sparsely activated expert blocks to a base model, increasing the number of parameters without impacting computational costs. However, current distributed deep learning frameworks are limited in their ability to train high-quality MoE models with large base models. In this work, we present DeepSpeed-TED, a novel, three-dimensional, hybrid parallel algorithm that combines data, tensor, and expert parallelism to enable the training of MoE models with 4-8× larger base models than the current state-of-the-art. We also describe memory optimizations in the optimizer step and communication optimizations that eliminate unnecessary data movement. We implement our approach in DeepSpeed and achieve speedups of 26% over a baseline (i.e., without our communication optimizations) when training a 40 billion parameter MoE model (a 6.7 billion parameter base model with 16 experts) on 128 V100 GPUs.
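
To make the three-dimensional decomposition concrete, the sketch below shows one way a pool of GPUs could be partitioned into tensor-, expert-, and data-parallel process groups. This is an illustrative assumption rather than DeepSpeed-TED's actual group-formation code: the function name build_3d_groups, the rank ordering, and the example sizes are hypothetical and chosen only for clarity.

# Illustrative sketch (not the DeepSpeed-TED implementation): decomposing a
# pool of GPUs into tensor-, expert-, and data-parallel groups. The group
# sizes and rank layout below are assumptions made for this example.
from itertools import product

def build_3d_groups(world_size: int, tp_size: int, ep_size: int):
    """Return (tensor, expert, data) parallel groups as lists of global ranks.

    Ranks are laid out so that consecutive ranks share a tensor-parallel
    group, tensor-parallel groups are replicated across experts, and the
    remaining dimension forms the data-parallel groups.
    """
    assert world_size % (tp_size * ep_size) == 0, "sizes must divide world_size"
    dp_size = world_size // (tp_size * ep_size)

    # Assumed mapping: global_rank = dp * (ep_size * tp_size) + ep * tp_size + tp
    tensor_groups = [
        [dp * ep_size * tp_size + ep * tp_size + tp for tp in range(tp_size)]
        for dp, ep in product(range(dp_size), range(ep_size))
    ]
    expert_groups = [
        [dp * ep_size * tp_size + ep * tp_size + tp for ep in range(ep_size)]
        for dp, tp in product(range(dp_size), range(tp_size))
    ]
    data_groups = [
        [dp * ep_size * tp_size + ep * tp_size + tp for dp in range(dp_size)]
        for ep, tp in product(range(ep_size), range(tp_size))
    ]
    return tensor_groups, expert_groups, data_groups

if __name__ == "__main__":
    # Example: 16 GPUs split as tensor=2, expert=4, data=2.
    tg, eg, dg = build_3d_groups(world_size=16, tp_size=2, ep_size=4)
    print("tensor-parallel groups:", tg)
    print("expert-parallel groups:", eg)
    print("data-parallel groups:  ", dg)

In an actual distributed launch, each of these rank lists would typically back a torch.distributed.new_group(...) call, so that collectives run only within their own group: all-reduces for tensor parallelism, all-to-alls for expert parallelism, and gradient all-reduces for data parallelism.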

Citation

Singh, S., Ruwase, O., Awan, A. A., Rajbhandari, S., He, Y., & Bhatele, A. (2023). A Hybrid Tensor-Expert-Data Parallelism Approach to Optimize Mixture-of-Experts Training. In Proceedings of the International Conference on Supercomputing (pp. 203–214). Association for Computing Machinery. https://doi.org/10.1145/3577193.3593704
