NUIG-DSI’s submission to The GEM Benchmark 2021


Abstract

This paper describes the NUIG-DSI submission to the GEM Benchmark 2021. We participate in the modeling shared task, submitting outputs for four data-to-text generation datasets: DART, WebNLG (en), E2E and CommonGen. Following an approach similar to the one described in the GEM benchmark paper, we use the pre-trained T5-base model for our submission. We continue pre-training this model on additional monolingual data, experimenting with masking strategies that specifically target entities, predicates and concepts, as well as a random masking strategy. We find that random masking performs best in terms of automatic evaluation metrics, although the differences from the other masking strategies are not statistically significant.
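The random masking strategy the abstract refers to can be illustrated with a T5-style span-corruption sketch: random spans of the input are replaced by sentinel tokens, and the target sequence contains the sentinels followed by the tokens they hide. This is a minimal, self-contained sketch of the general technique (sentinel names follow T5's `<extra_id_N>` convention); the function name, span-length parameter, and greedy span selection are illustrative assumptions, not the authors' actual pre-training code.

```python
import random

def random_span_mask(tokens, mask_ratio=0.15, span_len=3, seed=0):
    """T5-style span corruption: replace random spans with sentinel
    tokens and build the matching target sequence.

    Illustrative sketch only -- parameters and span selection are
    assumptions, not the submission's actual pre-training code.
    """
    rng = random.Random(seed)
    n = len(tokens)
    n_mask = max(1, round(n * mask_ratio))

    # Greedily pick token positions to mask, in short spans.
    masked = set()
    while len(masked) < n_mask:
        start = rng.randrange(n)
        length = max(1, min(span_len, n_mask - len(masked)))
        for i in range(start, min(start + length, n)):
            masked.add(i)

    inputs, targets = [], []
    sentinel = 0
    i = 0
    while i < n:
        if i in masked:
            tok = f"<extra_id_{sentinel}>"
            inputs.append(tok)   # sentinel stands in for the span
            targets.append(tok)  # target lists the hidden tokens after it
            while i < n and i in masked:
                targets.append(tokens[i])
                i += 1
            sentinel += 1
        else:
            inputs.append(tokens[i])
            i += 1
    targets.append(f"<extra_id_{sentinel}>")  # final sentinel ends target
    return inputs, targets
```

An entity- or predicate-focused variant would differ only in how the masked positions are chosen (e.g. masking the token spans of annotated entities rather than sampling them at random), leaving the sentinel/target construction unchanged.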

Cite

Pasricha, N., Arcan, M., & Buitelaar, P. (2021). NUIG-DSI’s submission to The GEM Benchmark 2021. In GEM 2021 - 1st Workshop on Natural Language Generation, Evaluation, and Metrics, Proceedings (pp. 148–154). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.gem-1.13
