Exploiting Labeled and Unlabeled Data via Transformer Fine-tuning for Peer-Review Score Prediction

Abstract

Automatic peer-review aspect score prediction (PASP) of academic papers can be a helpful assistant tool for both reviewers and authors. Most existing work on PASP uses supervised learning techniques; however, the limited amount of peer-review data degrades PASP performance. This paper presents a novel semi-supervised learning (SSL) method that incorporates Transformer fine-tuning into the Γ-model, a variant of the Ladder network, to leverage contextual features from unlabeled data. Because backpropagation simultaneously minimizes the sum of the supervised and unsupervised cost functions, the model can easily be trained in an end-to-end fashion. The proposed method is evaluated on the PeerRead benchmark. The experimental results demonstrate that our model outperforms the supervised and naive semi-supervised learning baselines. Our source code is available online.
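The combined objective described above can be sketched as follows. This is a toy illustration only, not the paper's actual architecture: a scalar linear model stands in for the Transformer, a fixed input perturbation stands in for the Γ-model's noise injection, and all names (`combined_loss`, `lam`, `numeric_grad`) are illustrative assumptions.

```python
# Sketch of a Gamma-model-style combined objective: a supervised cost on
# labeled pairs plus an unsupervised consistency cost that penalizes the gap
# between the model's clean and noise-perturbed predictions on unlabeled
# inputs. Minimizing their sum trains on both data sources at once.

def combined_loss(w, labeled, unlabeled, noise=0.1, lam=1.0):
    # Supervised cost: mean squared error on labeled (x, y) pairs.
    sup = sum((w * x - y) ** 2 for x, y in labeled) / len(labeled)
    # Unsupervised cost: clean vs. noisy prediction consistency on
    # unlabeled inputs (no targets needed).
    unsup = sum((w * (x + noise) - w * x) ** 2
                for x in unlabeled) / len(unlabeled)
    return sup + lam * unsup

def numeric_grad(f, w, eps=1e-6):
    # Finite-difference gradient; stands in for backpropagation here.
    return (f(w + eps) - f(w - eps)) / (2 * eps)

labeled = [(1.0, 2.0), (2.0, 4.0)]   # toy labeled data, y = 2x
unlabeled = [3.0, 4.0, 5.0]          # toy unlabeled inputs
loss = lambda w: combined_loss(w, labeled, unlabeled)

w = 0.0
for _ in range(200):
    w -= 0.05 * numeric_grad(loss, w)  # one end-to-end gradient step
```

Both cost terms depend on the same parameter `w`, so each gradient step jointly fits the labeled targets and smooths the model's response to input noise, which is the core idea of training the Γ-model end-to-end.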

Citation (APA)

Muangkammuen, P., Fukumoto, F., Li, J., & Suzuki, Y. (2022). Exploiting Labeled and Unlabeled Data via Transformer Fine-tuning for Peer-Review Score Prediction. In Findings of the Association for Computational Linguistics: EMNLP 2022 (pp. 2233–2240). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.findings-emnlp.164
