Too Much Information: Keeping Training Simple for BabyLMs

Abstract

This paper details the work of the University of Groningen for the BabyLM Challenge (Warstadt et al., 2023). We follow the idea that, like babies, language models should be introduced to simpler concepts first and build on that knowledge to understand more complex concepts. We examine this simple-then-complex strategy through a variety of lenses, namely context size, vocabulary, and the overall linguistic complexity of the data. We find that only one of these, context size, is truly beneficial to training a language model. However, this simple change to context size gives us improvements of 2 points on average on (Super)GLUE tasks, 1 point on MSGS tasks, and 12% on average on BLiMP tasks. Our context-limited model outperforms the baseline that was trained on 10× the amount of data.
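The abstract's central finding is that restricting context size during pre-training helps. As a rough illustration of what such a restriction can look like in practice (this is a minimal sketch, not the authors' implementation: the GPT-2 tokenizer, the 128-token window, and the function name chunk_into_short_contexts are illustrative assumptions only), a training corpus can be chunked into short fixed-length windows before being fed to the model:

    # Minimal sketch: chunking a pre-training corpus into short context windows.
    # The tokenizer choice and window length below are assumptions for illustration.
    from transformers import AutoTokenizer

    MAX_CONTEXT = 128  # hypothetical short context; the paper's exact value is not given here
    tokenizer = AutoTokenizer.from_pretrained("gpt2")

    def chunk_into_short_contexts(texts, max_len=MAX_CONTEXT):
        """Tokenize raw text and split it into fixed, short context windows."""
        chunks = []
        for text in texts:
            ids = tokenizer(text, add_special_tokens=False)["input_ids"]
            # Slice the token stream into non-overlapping windows of max_len tokens.
            for start in range(0, len(ids), max_len):
                window = ids[start:start + max_len]
                if window:
                    chunks.append(window)
        return chunks

    if __name__ == "__main__":
        corpus = ["A toy example sentence standing in for BabyLM training text."]
        print(len(chunk_into_short_contexts(corpus)))

The design intuition, as described in the abstract, is that shorter contexts act as a simpler starting point for the model, analogous to how children first learn from short, simple utterances.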

Citation (APA)

Edman, L., & Bylinina, L. (2023). Too Much Information: Keeping Training Simple for BabyLMs. In Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning (CoNLL 2023) (pp. 89–97). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.conll-babylm.8
