Domain Adaptation of Transformers for English Word Segmentation


Abstract

Word segmentation can help improve the results of natural language processing tasks in several problem domains, including social media sentiment analysis, source code summarization, and neural machine translation. Taking the English language as a case study, we fine-tune a Transformer architecture that has been trained through the Pre-trained Distillation (PD) algorithm, and compare it to previous experiments with recurrent neural networks. We organize datasets and resources from multiple application domains under a unified format, and demonstrate that our proposed architecture achieves competitive performance and superior cross-domain generalization compared with previous approaches to word segmentation in Western languages.
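The abstract frames the task as restoring word boundaries in unsegmented English text. The PD-distilled Transformer described in the paper is not reproduced here; the following is only a minimal sketch, assuming the common formulation of segmentation as character-level binary tagging (1 = a word boundary follows this character, 0 = no boundary), with a plain PyTorch Transformer encoder standing in for the paper's model. All class and parameter names below are illustrative, not the authors' code.

import torch
import torch.nn as nn

class CharSegmenter(nn.Module):
    """Toy character-level segmenter: one boundary/no-boundary decision per character."""
    def __init__(self, vocab_size=128, d_model=128, nhead=4, num_layers=2, max_len=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)      # character embeddings (ASCII codes here)
        self.pos = nn.Embedding(max_len, d_model)            # learned positional embeddings
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, 2)                     # per-character boundary logits

    def forward(self, char_ids):
        positions = torch.arange(char_ids.size(1), device=char_ids.device)
        x = self.embed(char_ids) + self.pos(positions)
        return self.head(self.encoder(x))                    # shape: (batch, length, 2)

# Toy usage: predict where spaces belong in "helloworld".
text = "helloworld"
ids = torch.tensor([[ord(c) for c in text]])
model = CharSegmenter()
logits = model(ids)
boundaries = logits.argmax(-1)   # 1 marks a predicted word boundary after that character

In this formulation, fine-tuning for a new domain (tweets, source-code identifiers, etc.) amounts to continuing training of the same tagging head on domain-specific boundary labels, which is the general setup the abstract refers to as domain adaptation.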

Citation (APA)

Rodrigues, R. C., Rocha, A. S., Inuzuka, M. A., & do Nascimento, H. A. D. (2020). Domain Adaptation of Transformers for English Word Segmentation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12319 LNAI, pp. 483–496). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-61377-8_33
