We introduce a method that self-trains (or bootstraps) neural relation and explanation classifiers. Our work extends the supervised approach of Tang and Surdeanu (2022), which jointly trains a relation classifier with an explanation classifier that identifies context words important for the relation at hand, to semi-supervised scenarios. In particular, our approach iteratively converts the explainable models' outputs to rules and applies them to unlabeled text to produce new annotations. Our evaluation on the TACRED dataset shows that our method outperforms the rule-based model we started from by 15 F1 points, outperforms traditional self-training that relies only on the relation classifier by 5 F1 points, and performs comparably to the prompt-based approach of Sainz et al. (2021), without requiring an additional natural language inference component.
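The abstract describes an iterative loop: jointly train the relation and explanation classifiers, convert the explanation outputs into rules, apply those rules to unlabeled text, and fold the resulting annotations back into the training set. The sketch below illustrates that loop under heavy simplification; everything in it (the lexical-rule representation, words_to_rule, apply_rules, and the stand-in for the neural training step) is a hypothetical placeholder for illustration, not the authors' actual implementation.

```python
# Hypothetical sketch of the bootstrapping loop; not the authors' code.
# A "rule" here is just a set of trigger words paired with a relation label.

def words_to_rule(words, relation):
    """Convert salient context words (the explanation) into a lexical rule."""
    return (frozenset(words), relation)

def apply_rules(rules, unlabeled):
    """Annotate unlabeled sentences that contain all trigger words of some rule."""
    annotations = []
    for sentence in unlabeled:
        tokens = set(sentence.split())
        for trigger_words, relation in rules:
            if trigger_words <= tokens:
                annotations.append((sentence, relation))
                break
    return annotations

def bootstrap(labeled, unlabeled, iterations=3):
    """Iteratively grow the labeled set by converting explanations to rules."""
    for _ in range(iterations):
        # 1. Train the joint relation + explanation classifiers (faked here:
        #    we treat the first two words of each example as its "explanation").
        rules = [words_to_rule(sent.split()[:2], rel) for sent, rel in labeled]
        # 2. Apply the rules to unlabeled text to produce new annotations.
        new = apply_rules(rules, unlabeled)
        # 3. Add the new annotations and remove them from the unlabeled pool.
        labeled = labeled + new
        matched = {sent for sent, _ in new}
        unlabeled = [s for s in unlabeled if s not in matched]
    return labeled

seed = [("founded by Jobs", "org:founded_by")]
pool = ["founded by Wozniak too", "lives in Seattle"]
print(bootstrap(seed, pool))
```

In the paper's setting the rules come from the explanation classifier rather than from a fixed word-position heuristic, which is what lets the rule set grow beyond the seed annotations at each iteration.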
Citation:
Tang, Z., & Surdeanu, M. (2023). Bootstrapping neural relation and explanation classifiers. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 2, pp. 48–56). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.acl-short.5