Analyzing Transferable Knowledge by Pretraining with Artificial Language

  • Ri, R.
  • Tsuruoka, Y.

Abstract

We conducted a study to determine what kinds of structural knowledge learned by neural network encoders are transferable to the processing of natural language. We designed artificial languages with structural properties that mimic those of natural language, pretrained encoders on the generated data, and examined the encoders' effects on downstream tasks in natural language. Our experimental results demonstrate the importance of statistical dependency, as well as the effectiveness of nesting structure in implicit dependency relations. These results indicate that position-aware context dependence constitutes knowledge transferable across different languages.

Author affiliations: The University of Tokyo; LINE Corporation. This article extends work presented at ACL 2022 (Ri and Tsuruoka 2022). (C) The Association for Natural Language Processing. Licensed under CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/).
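The "nesting structure" the abstract refers to can be illustrated with a small Dyck-language-style generator: opening tokens are matched by corresponding closing tokens in last-in, first-out order, mimicking the nested dependencies of natural-language syntax. This is a hypothetical sketch for intuition only, not the authors' actual artificial-language generator; the functions `nested_pairs` and `is_nested`, and the `vocab_size` offset scheme, are illustrative assumptions.

```python
import random

def nested_pairs(depth, vocab_size=10, rng=None):
    """Generate a fully nested token sequence:
    open_1 open_2 ... open_d close_d ... close_2 close_1,
    where each closing token is its opening token's id + vocab_size.
    (Illustrative sketch, not the paper's exact generator.)"""
    rng = rng or random.Random(0)
    opens = [rng.randrange(vocab_size) for _ in range(depth)]
    closes = [tok + vocab_size for tok in reversed(opens)]
    return opens + closes

def is_nested(seq, vocab_size=10):
    """Check well-nestedness with a stack, as in Dyck-language
    recognition: every closer must match the most recent opener."""
    stack = []
    for tok in seq:
        if tok < vocab_size:
            stack.append(tok)  # opening token
        elif not stack or stack.pop() != tok - vocab_size:
            return False       # mismatched or unbalanced closer
    return not stack           # all openers must be closed
```

An encoder pretrained on sequences like these is exposed to long-distance, position-sensitive dependencies without any natural-language content, which is the kind of isolated structural signal the study probes.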

Cite

APA

Ri, R., & Tsuruoka, Y. (2023). Analyzing Transferable Knowledge by Pretraining with Artificial Language. Journal of Natural Language Processing, 30(2), 664–688. https://doi.org/10.5715/jnlp.30.664
