Although distant supervision automatically generates training data for relation extraction, it also introduces false-positive (FP) and false-negative (FN) instances into the generated datasets. While both types of errors degrade final model performance, previous work on distant-supervision denoising focuses mainly on suppressing FP noise and pays less attention to the FN problem. We propose H-FND, a hierarchical false-negative denoising framework for robust distantly supervised relation extraction. H-FND uses a hierarchical policy that first determines whether each non-relation (NA) instance should be kept, discarded, or revised during training; for instances to be revised, the policy then reassigns them appropriate relations, turning them into better training inputs. Experiments on SemEval-2010 and TACRED were conducted with controlled FN ratios, randomly flipping the relations of training and validation instances to negatives to generate FN instances. In this setting, H-FND revises FN instances correctly and maintains high F1 scores even when 50% of the instances have been turned into negatives. Further experiments on NYT10 show that H-FND is applicable in a realistic setting.
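The controlled-FN setting described above can be sketched as follows. This is a minimal illustration, not the authors' code; the instance representation, the `NA` label string, and the function name are assumptions.

```python
import random

def inject_false_negatives(instances, fn_ratio, seed=0):
    """Simulate the controlled-FN setting: randomly flip the relation of a
    fraction (fn_ratio) of positive instances to the non-relation class "NA",
    producing false-negative training/validation noise.

    instances: list of (sentence, relation) pairs; "NA" marks non-relations.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible corruption
    noisy = []
    for sentence, relation in instances:
        if relation != "NA" and rng.random() < fn_ratio:
            noisy.append((sentence, "NA"))  # relation dropped -> false negative
        else:
            noisy.append((sentence, relation))  # left untouched
    return noisy
```

At `fn_ratio=0.5` this mirrors the paper's hardest reported setting, where half of the positive instances are turned into negatives.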
Chen, J. W., Fu, T. J., Lee, C. K., & Ma, W. Y. (2021). H-FND: Hierarchical False-Negative Denoising for Distant Supervision Relation Extraction. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 2579–2593). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-acl.228