RG PA at SemEval-2021 Task 1: A Contextual Attention-based Model with RoBERTa for Lexical Complexity Prediction


Abstract

In this paper, we propose a contextual attention-based model with two-stage fine-tuning of RoBERTa. First, we fine-tune RoBERTa on the task corpus so that the model acquires prior domain knowledge. We then obtain contextual embeddings of the context words from the token-level embeddings of the fine-tuned model. Finally, we use K-fold cross-validation to obtain K models and ensemble their predictions. Our system attained 2nd place in the final evaluation phase of sub-task 2 with a Pearson correlation of 0.8575.
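The abstract does not spell out the model architecture, but the pipeline it describes (token-level RoBERTa embeddings pooled by an attention layer, feeding a regression head that predicts a complexity score) can be sketched as below. This is a minimal illustration using PyTorch and the Hugging Face `transformers` library; the module names, dimensions, and the single-layer attention design are assumptions, not the authors' exact architecture.

```python
# Sketch: attention pooling over RoBERTa token embeddings for complexity
# regression. Assumes the encoder has already gone through the paper's
# first-stage fine-tuning on the task corpus.
import torch
import torch.nn as nn
from transformers import RobertaModel, RobertaTokenizer

class ContextualAttentionRegressor(nn.Module):
    def __init__(self, model_name="roberta-base"):
        super().__init__()
        self.encoder = RobertaModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.attn = nn.Linear(hidden, 1)   # one attention score per token
        self.head = nn.Linear(hidden, 1)   # regression head -> complexity

    def forward(self, input_ids, attention_mask):
        # Token-level contextual embeddings: (batch, seq_len, hidden)
        token_emb = self.encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
        scores = self.attn(token_emb).squeeze(-1)             # (batch, seq_len)
        scores = scores.masked_fill(attention_mask == 0, -1e9)  # ignore padding
        weights = torch.softmax(scores, dim=-1).unsqueeze(-1)  # (batch, seq_len, 1)
        pooled = (weights * token_emb).sum(dim=1)  # attention-weighted embedding
        return self.head(pooled).squeeze(-1)       # scalar complexity score

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = ContextualAttentionRegressor()
batch = tokenizer(["A simple example sentence."], return_tensors="pt")
score = model(batch["input_ids"], batch["attention_mask"])
```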
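The K-fold ensembling step is likewise only named in the abstract; a hedged sketch of the idea follows, with a scikit-learn `Ridge` regressor and random features standing in for the fine-tuned RoBERTa model and its contextual embeddings. The fold count and seed are illustrative.

```python
# Sketch: train K models via K-fold cross-validation, then average their
# test-set predictions for the final submission.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

def kfold_ensemble(X, y, X_test, k=5):
    kf = KFold(n_splits=k, shuffle=True, random_state=42)
    preds = []
    for train_idx, _ in kf.split(X):
        # Ridge is a stand-in for the fine-tuned RoBERTa regressor above.
        model = Ridge().fit(X[train_idx], y[train_idx])
        preds.append(model.predict(X_test))
    # Ensemble: mean of the K fold models' predictions.
    return np.mean(preds, axis=0)

# Toy usage with random features standing in for contextual embeddings.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 8)), rng.random(100)
final_scores = kfold_ensemble(X, y, rng.normal(size=(10, 8)))
```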

Citation (APA)

Rao, G., Li, M., Hou, X., Jiang, L., Mo, Y., & Shen, J. (2021). RG PA at SemEval-2021 Task 1: A Contextual Attention-based Model with RoBERTa for Lexical Complexity Prediction. In SemEval 2021 - 15th International Workshop on Semantic Evaluation, Proceedings of the Workshop (pp. 623–626). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.semeval-1.79
