This paper describes the winning system of the End-to-end Pipeline phase of the SemEval-2021 NLPContributionGraph task. The system comprises three BERT-based models, which extract sentences, phrases, and triples, respectively. Experiments show that sampling and adversarial training substantially boost performance. In the End-to-end Pipeline phase, our system achieved an average F1 of 0.4703, significantly higher than the second-placed system's average F1 of 0.3828.
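The adversarial training referred to above is, for BERT models, often implemented with a gradient-based embedding perturbation such as the Fast Gradient Method (FGM). The sketch below is a generic, hedged illustration of that idea using NumPy, not the authors' actual code: the function name, the epsilon value, and the toy gradient are all illustrative assumptions.

```python
import numpy as np

def fgm_perturbation(grad, epsilon=1.0):
    """FGM-style perturbation: scale the gradient to a fixed L2 norm.

    In adversarial training for BERT, r = epsilon * g / ||g|| is added
    to the word embeddings before a second forward/backward pass so the
    model also learns from the perturbed input. All names and values
    here are illustrative, not taken from the paper.
    """
    norm = np.linalg.norm(grad)
    if norm == 0 or not np.isfinite(norm):
        # Degenerate gradient: apply no perturbation.
        return np.zeros_like(grad)
    return epsilon * grad / norm

# Toy gradient w.r.t. a 2x2 embedding slice (illustrative only).
g = np.array([[3.0, 4.0], [0.0, 0.0]])
r = fgm_perturbation(g, epsilon=1.0)
# r has L2 norm equal to epsilon (here 1.0).
```

In practice the perturbation is added to the embedding matrix, a second loss is computed on the perturbed input, and the perturbation is removed before the optimizer step.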
Citation:
Zhang, G., Su, Y., He, C., Lin, L., Sun, C., & Shan, L. (2021). ITNLP at SemEval-2021 Task 11: Boosting BERT with Sampling and Adversarial Training for Knowledge Extraction. In SemEval 2021 - 15th International Workshop on Semantic Evaluation, Proceedings of the Workshop (pp. 485–489). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.semeval-1.59