Efficiency through auto-sizing: Notre Dame NLP’s submission to the WNGT 2019 efficiency task


Abstract

This paper describes the Notre Dame Natural Language Processing Group’s (NDNLP) submission to the WNGT 2019 shared task (Hayashi et al., 2019). We investigated the impact of applying auto-sizing (Murray and Chiang, 2015; Murray et al., 2019) to the Transformer network (Vaswani et al., 2017), with the goal of substantially reducing the number of parameters in the model. Our method eliminated more than 25% of the model’s parameters while suffering a decrease of only 1.1 BLEU.
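Auto-sizing, as described in Murray and Chiang (2015) and Murray et al. (2019), trains a network with a group regularizer (such as the l2,1 or l-infinity,1 norm) over the rows of a weight matrix and applies a proximal gradient step after each parameter update; rows that are driven exactly to zero correspond to hidden units that can be deleted after training. The PyTorch sketch below illustrates the l2,1 (group lasso) variant on a Transformer-style feed-forward block. It is a minimal sketch under stated assumptions, not the authors' implementation: the layer sizes, regularizer strength, loss, and all names here are placeholders.

    # Minimal sketch of auto-sizing with an l2,1 group regularizer over the
    # rows of a feed-forward layer. Illustrative only, not the submission's code.
    import torch

    def prox_l21_rows(weight: torch.Tensor, strength: float) -> None:
        """Proximal step for the l2,1 norm over rows: shrink each row's l2 norm
        by `strength`; rows whose norm falls below `strength` become exactly zero."""
        with torch.no_grad():
            norms = weight.norm(dim=1, keepdim=True)              # per-row l2 norms
            scale = torch.clamp(1.0 - strength / (norms + 1e-12), min=0.0)
            weight.mul_(scale)                                    # zeroed rows = prunable units

    # Toy training loop: an ordinary gradient step on the task loss, followed by
    # the proximal step, which gradually drives whole rows (hidden units) to zero.
    ffn = torch.nn.Sequential(
        torch.nn.Linear(512, 2048),   # rows of this weight matrix are the units being auto-sized
        torch.nn.ReLU(),
        torch.nn.Linear(2048, 512),
    )
    opt = torch.optim.SGD(ffn.parameters(), lr=0.1)
    lam = 1e-3                        # regularizer strength (hypothetical value)

    for _ in range(100):
        x = torch.randn(32, 512)
        loss = ffn(x).pow(2).mean()   # placeholder loss, stands in for the translation loss
        opt.zero_grad()
        loss.backward()
        opt.step()
        prox_l21_rows(ffn[0].weight, lam * opt.param_groups[0]["lr"])

    # Rows that end up entirely zero can be removed, along with the matching
    # bias entries and the corresponding columns of the second linear layer.
    alive = ffn[0].weight.norm(dim=1) > 0
    print(f"surviving units: {int(alive.sum())} / {alive.numel()}")

Deleting the zeroed rows (and the matching inputs of the following layer) is what actually shrinks the parameter count; the paper also experiments with the l-infinity,1 regularizer, whose proximal operator differs from the per-row shrinkage shown here.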

Cite (APA)

Murray, K., DuSell, B., & Chiang, D. (2019). Efficiency through auto-sizing: Notre Dame NLP’s submission to the WNGT 2019 efficiency task. In Proceedings of the 3rd Workshop on Neural Generation and Translation (pp. 297–301). Association for Computational Linguistics. https://doi.org/10.18653/v1/d19-5634
