Exploring Variation of Results from Different Experimental Conditions

Abstract

It might reasonably be expected that running multiple experiments for the same task using the same data and model would yield very similar results. Recent research has, however, shown this not to be the case for many NLP experiments. In this paper, we report extensive coordinated work by two NLP groups to run the training and testing pipeline for three neural text simplification (NTS) models under varying experimental conditions, including different random seeds, run-time environments, and dependency versions, yielding a large number of results for each of the three models using the same data and train/dev/test set splits. From one perspective, these results can be interpreted as shedding light on the reproducibility of evaluation results for the three NTS models, and we present an in-depth analysis of the variation observed for different combinations of experimental conditions. From another perspective, the results raise the question of whether the averaged score should be considered the 'true' result for each model.
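The kind of per-condition variation analysis the abstract describes can be illustrated with a minimal sketch. The snippet below is not the authors' analysis code; the condition names and scores are hypothetical (SARI-style simplification scores are assumed purely for illustration). It simply shows how repeated-run scores grouped by experimental condition might be summarised with mean, standard deviation, coefficient of variation, and range.

```python
# Illustrative sketch only: summarising the spread of evaluation scores from
# repeated runs of the same model under different experimental conditions.
# All condition names and score values below are hypothetical.
from statistics import mean, stdev

scores_by_condition = {
    "seeds_only":    [37.1, 36.8, 37.4, 36.9],  # same environment, different random seeds
    "seeds_env":     [37.0, 36.2, 37.6, 36.5],  # seeds + run-time environment varied
    "seeds_env_dep": [36.9, 35.8, 37.8, 36.3],  # seeds, environment, dependency versions varied
}

for condition, scores in scores_by_condition.items():
    mu = mean(scores)
    sd = stdev(scores)
    cv = 100 * sd / mu  # coefficient of variation, in percent
    print(f"{condition:>13}: mean={mu:.2f}  sd={sd:.2f}  "
          f"CV={cv:.2f}%  range={max(scores) - min(scores):.2f}")
```

Reporting the coefficient of variation alongside the mean makes it easier to compare the stability of scores across conditions, which is the kind of question the paper's closing remark about the "averaged score" raises.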

Cite

APA

Popović, M., Arvan, M., Parde, N., & Belz, A. (2023). Exploring Variation of Results from Different Experimental Conditions. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 2746–2757). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-acl.172
