Abstract
We conducted a study to determine what kind of structural knowledge learned by neural network encoders is transferable to natural language processing. We designed artificial languages with structural properties that mimic those of natural language, pretrained encoders on the data, and examined how the pretrained encoders affect downstream tasks in natural language. Our experimental results demonstrate the importance of statistical dependency, as well as the effectiveness of nesting structure in implicit dependency relations. These results indicate that position-aware context dependence is a form of knowledge transferable across different languages.
† The University of Tokyo. †† LINE Corporation (from April 2023). This article is based on the authors' ACL 2022 paper (Ri and Tsuruoka 2022). (C) The Association for Natural Language Processing. Licensed under CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/).
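To make the methodology concrete, the following is a minimal sketch in Python of one way an artificial language with nested dependencies might be generated. The vocabulary size, probabilities, and the open_i / close_i token naming are illustrative assumptions for this sketch, not the paper's actual configuration.

import random

# Illustrative parameters -- assumptions for this sketch, not the paper's setup.
VOCAB_PAIRS = 500   # number of matching head/dependent token types
MAX_DEPTH = 8       # maximum nesting depth
MAX_LEN = 64        # approximate maximum sentence length
NEST_PROB = 0.4     # probability of nesting material inside a dependency
CONT_PROB = 0.5     # probability of adding another dependency at the same level

def sample_nested_sentence(rng):
    """Sample a sentence whose token pairs nest like brackets, mimicking
    the hierarchical dependency structure of natural language."""
    sentence = []

    def expand(depth):
        while len(sentence) < MAX_LEN - 2:
            pair = rng.randrange(VOCAB_PAIRS)
            sentence.append(f"open_{pair}")    # opening token of a dependency
            if depth < MAX_DEPTH and rng.random() < NEST_PROB:
                expand(depth + 1)              # nested dependencies in between
            sentence.append(f"close_{pair}")   # matching closing token
            if rng.random() >= CONT_PROB:
                break

    expand(0)
    return sentence

rng = random.Random(0)
for _ in range(3):
    print(" ".join(sample_nested_sentence(rng)))

A corpus of such sentences could then stand in for natural text in a standard masked-language-model pretraining loop, after which the encoder is evaluated on natural-language downstream tasks, which is the transfer setup the abstract describes.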
Citation
Ri, R., & Tsuruoka, Y. (2023). Analyzing Transferable Knowledge by Pretraining with Artificial Language. Journal of Natural Language Processing, 30(2), 664–688. https://doi.org/10.5715/jnlp.30.664