Training Tips for the Transformer Model

  • Popel, M.
  • Bojar, O.

Abstract

This article describes our experiments in neural machine translation using the recent Tensor2Tensor framework and the Transformer sequence-to-sequence model (Vaswani et al., 2017). We examine some of the critical parameters that affect the final translation quality, memory usage, training stability and training time, concluding each experiment with a set of recommendations for fellow researchers. In addition to confirming the general mantra “more data and larger models”, we address scaling to multiple GPUs and provide practical tips for improved training regarding batch size, learning rate, warmup steps, maximum sentence length and checkpoint averaging. We hope that our observations will allow others to get better results given their particular hardware and data constraints.
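To make two of the tips mentioned in the abstract concrete (learning-rate warmup and checkpoint averaging), here is a minimal Python sketch. The schedule follows the inverse-square-root formula with linear warmup from Vaswani et al. (2017), and the averaging function simply takes the element-wise mean of parameter arrays from several checkpoints. The hyperparameter values shown (model dimension, warmup steps) are illustrative assumptions, not necessarily the values recommended in the article.

```python
import numpy as np

def noam_learning_rate(step, d_model=512, warmup_steps=16000):
    """Inverse-square-root schedule with linear warmup (Vaswani et al., 2017).

    The rate grows linearly for `warmup_steps` updates, then decays
    proportionally to 1/sqrt(step).
    """
    step = max(step, 1)  # avoid division by zero at step 0
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

def average_checkpoints(checkpoints):
    """Average parameters element-wise over several checkpoints.

    `checkpoints` is a list of dicts mapping variable names to numpy arrays;
    the result is a single dict holding the arithmetic mean of each variable.
    """
    averaged = {}
    for name in checkpoints[0]:
        averaged[name] = np.mean([ckpt[name] for ckpt in checkpoints], axis=0)
    return averaged

if __name__ == "__main__":
    # The peak learning rate is reached exactly at the end of warmup.
    for s in (100, 16000, 100000):
        print(f"step {s:>6}: lr = {noam_learning_rate(s):.6f}")
```

In practice the schedule and the number of checkpoints to average are exposed as hyperparameters of the training framework; the functions above only illustrate the underlying arithmetic.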

Citation

Popel, M., & Bojar, O. (2018). Training Tips for the Transformer Model. The Prague Bulletin of Mathematical Linguistics, 110(1), 43–70. https://doi.org/10.2478/pralin-2018-0002
