Automatic peer-review aspect score prediction (PASP) of academic papers can be a helpful assistant tool for both reviewers and authors. Most existing work on PASP uses supervised learning techniques. However, the limited amount of peer-review data degrades the performance of PASP. This paper presents a novel semi-supervised learning (SSL) method that incorporates Transformer fine-tuning into the Γ-model, a variant of the Ladder network, to leverage contextual features from unlabeled data. Because backpropagation simultaneously minimizes the sum of the supervised and unsupervised cost functions, the model can be trained in an end-to-end fashion. The proposed method is evaluated on the PeerRead benchmark. The experimental results demonstrate that our model outperforms the supervised and naive semi-supervised learning baselines. Our source code is available online.
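The abstract's combined objective can be illustrated with a toy sketch. This is an assumed form of a Γ-model-style cost, not the authors' actual implementation: a supervised squared-error term over labeled examples plus a weighted unsupervised denoising-consistency term over all examples, summed so that one backpropagation pass minimizes both. The function name `combined_cost` and the weight `lam` are hypothetical.

```python
# Toy sketch of a Gamma-model-style combined objective (assumed form,
# not the authors' code): a supervised cost on labeled data plus a
# denoising consistency cost on all data, minimized jointly.

def combined_cost(pred_noisy, targets, z_clean, z_denoised, lam=1.0):
    """Sum of supervised and unsupervised costs.

    pred_noisy : predictions from the corrupted (noisy) pass, labeled items only
    targets    : gold aspect scores for those labeled items
    z_clean    : latent features from the clean pass (all items)
    z_denoised : denoised latent features recovered from the corrupted pass
    lam        : hypothetical weight on the unsupervised (denoising) cost
    """
    # Supervised cost: mean squared error on labeled examples
    sup = sum((p - t) ** 2 for p, t in zip(pred_noisy, targets)) / max(len(targets), 1)
    # Unsupervised cost: denoised features should reconstruct clean ones
    unsup = sum((c - d) ** 2 for c, d in zip(z_clean, z_denoised)) / max(len(z_clean), 1)
    # Backpropagation would minimize this single scalar end-to-end
    return sup + lam * unsup
```

Since the two terms are summed into one scalar, a single optimizer step updates the shared encoder from both labeled and unlabeled signals, which is what makes the end-to-end training described in the abstract possible.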
CITATION STYLE
Muangkammuen, P., Fukumoto, F., Li, J., & Suzuki, Y. (2022). Exploiting Labeled and Unlabeled Data via Transformer Fine-tuning for Peer-Review Score Prediction. In Findings of the Association for Computational Linguistics: EMNLP 2022 (pp. 2233–2240). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.findings-emnlp.164