Benchmarking a transformer-FREE model for ad-hoc retrieval

Citations: 1
Readers (Mendeley): 61

Abstract

Transformer-based “behemoths” have grown in popularity, and in size, shattering multiple NLP benchmarks along the way. However, their real-world usability remains in question. In this work, we empirically assess the feasibility of applying transformer-based models in real-world ad-hoc retrieval applications by comparing them to a “greener and more sustainable” alternative comprising only 620 trainable parameters. We present an analysis of their efficacy and efficiency and show that, under limited computational resources, the lighter model running on the CPU achieves a 3 to 20 times speedup in training and a 7 to 47 times speedup in inference while maintaining comparable retrieval performance. Code to reproduce the efficiency experiments is available at https://github.com/bioinformatics-ua/EACL2021-reproducibility/.
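The speedup figures above come from wall-clock measurements of training and inference time. As a rough illustration only, the sketch below shows a generic timing harness of the kind such efficiency experiments rely on; the `rank` calls and model objects in the usage comment are hypothetical placeholders, not the authors' actual API (the real code is in the linked repository).

```python
import time
import statistics

def benchmark(fn, *args, warmup=3, repeats=10):
    """Wall-clock benchmark of fn(*args): warm up, then report the median of repeated runs."""
    for _ in range(warmup):
        fn(*args)  # warm-up runs to amortize caching and lazy initialization
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        timings.append(time.perf_counter() - start)
    return statistics.median(timings)

# Hypothetical usage: compare a lightweight CPU scorer against a transformer ranker.
# `light_model.rank` and `transformer_ranker.rank` are placeholders, not the paper's API.
# speedup = benchmark(transformer_ranker.rank, queries, docs) / benchmark(light_model.rank, queries, docs)
```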

Citation (APA)
Almeida, T., & Matos, S. (2021). Benchmarking a transformer-FREE model for ad-hoc retrieval. In EACL 2021 - 16th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference (pp. 3343–3353). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.eacl-main.293
