Document-level relation extraction (RE) aims to extract relations among entities expressed across multiple sentences, which can be viewed as a multi-label classification problem. In a typical document, most entity pairs do not express any pre-defined relation and are labeled as “none” or “no relation”. For good document-level RE performance, it is crucial to distinguish such none-class instances (entity pairs) from those of pre-defined classes (relations). However, most existing methods estimate the probability of each pre-defined relation independently, without considering the probability of “no relation”. This ignores the context of entity pairs and the label correlations between the none class and the pre-defined classes, leading to sub-optimal predictions. To address this problem, we propose a new multi-label loss that encourages large margins of label confidence scores between each pre-defined class and the none class, which enables capturing label correlations and performing context-dependent thresholding for label prediction. To gain further robustness against the positive-negative imbalance and mislabeled data that can appear in real-world RE datasets, we propose a margin regularization and a margin shifting technique. Experimental results demonstrate that our method significantly outperforms existing multi-label losses for document-level RE and also works well on other multi-label tasks, such as emotion classification, when none-class instances are available for training.
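To make the central idea concrete, the following is a minimal, hypothetical PyTorch sketch of a margin-based loss in this spirit: each pre-defined relation score is ranked against a learned none-class score, and that none-class score then acts as a per-instance (context-dependent) threshold at prediction time. The hinge-style formulation, the default margin value, and the function names are illustrative assumptions, not the paper's exact loss or the authors' released code.

```python
import torch
import torch.nn.functional as F


def none_class_ranking_loss(logits, labels, margin=1.0):
    """Hinge-style sketch: rank each pre-defined relation score against the
    none-class score (column 0 of `logits`).

    logits: (batch, 1 + num_relations) raw scores; column 0 is the none class.
    labels: (batch, num_relations) multi-hot floats; an all-zero row means the
            entity pair expresses no pre-defined relation.
    """
    none_score = logits[:, :1]   # (batch, 1), broadcast against relation scores
    rel_scores = logits[:, 1:]   # (batch, num_relations)

    # Positive relations should score above the none class by at least `margin`;
    # negative relations should score below it by at least `margin`.
    pos_loss = F.relu(margin - (rel_scores - none_score)) * labels
    neg_loss = F.relu(margin - (none_score - rel_scores)) * (1.0 - labels)
    return (pos_loss + neg_loss).sum(dim=1).mean()


def predict(logits):
    """Predict the relations whose scores exceed the instance's none score."""
    return (logits[:, 1:] > logits[:, :1]).long()


# Example usage with random tensors standing in for an RE model's outputs:
# logits = model(entity_pair_features)            # (batch, 1 + num_relations)
# loss = none_class_ranking_loss(logits, labels)  # scalar to backpropagate
```

Because the threshold is the learned none-class score rather than a fixed global cutoff, prediction adapts to each entity pair's context, which is the behavior the proposed loss is designed to encourage.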
CITATION STYLE
Zhou, Y., & Lee, W. S. (2022). None Class Ranking Loss for Document-Level Relation Extraction. In IJCAI International Joint Conference on Artificial Intelligence (pp. 4538–4544). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2022/630