We explore few-shot learning (FSL) for relation classification (RC). Focusing on the realistic scenario of FSL, in which a test instance might not belong to any of the target categories (none-of-the-above, NOTA), we first revisit the recent popular dataset structure for FSL, pointing out its unrealistic data distribution. To remedy this, we propose a novel methodology for deriving more realistic few-shot test data from available datasets for supervised RC, and apply it to the TACRED dataset. This yields a new, challenging benchmark for FSL-RC, on which state-of-the-art models show poor performance. Next, we analyze classification schemes within the popular embedding-based nearest-neighbor approach for FSL, with respect to the constraints they impose on the embedding space. Triggered by this analysis, we propose a novel classification scheme in which the NOTA category is represented by learned vectors, shown empirically to be an appealing option for FSL.
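To make the NOTA-as-learned-vectors idea concrete, the following is a minimal sketch of nearest-neighbor few-shot classification in which learned NOTA vectors compete with per-episode class prototypes. The class name, dimensions, dot-product similarity, and mean-pooled prototypes are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch: nearest-neighbor FSL classification with learned NOTA vectors.
# All names (NOTAProtoClassifier, n_nota, embed_dim) are illustrative assumptions.
import torch
import torch.nn as nn


class NOTAProtoClassifier(nn.Module):
    def __init__(self, embed_dim: int, n_nota: int = 1):
        super().__init__()
        # NOTA is represented by one or more learned vectors that compete
        # with the per-episode class prototypes at classification time.
        self.nota_vectors = nn.Parameter(torch.randn(n_nota, embed_dim))

    def forward(self, support: torch.Tensor, query: torch.Tensor) -> torch.Tensor:
        """
        support: (n_classes, k_shot, embed_dim) embedded support instances
        query:   (n_query, embed_dim) embedded query instances
        returns: (n_query, n_classes + 1) scores; the last column is NOTA
        """
        prototypes = support.mean(dim=1)                 # (n_classes, embed_dim)
        class_scores = query @ prototypes.t()            # (n_query, n_classes)
        # Score each query against every NOTA vector; keep the best NOTA score.
        nota_scores = (query @ self.nota_vectors.t()).max(dim=1, keepdim=True).values
        return torch.cat([class_scores, nota_scores], dim=1)


if __name__ == "__main__":
    torch.manual_seed(0)
    model = NOTAProtoClassifier(embed_dim=8, n_nota=3)
    support = torch.randn(5, 2, 8)   # a 5-way, 2-shot episode
    query = torch.randn(4, 8)
    scores = model(support, query)
    print(scores.argmax(dim=1))      # predicted index 5 would mean NOTA
```

In a full system the embeddings would come from a trained sentence encoder and the NOTA vectors would be optimized jointly with it; the sketch only shows the classification step.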
Sabo, O., Elazar, Y., Goldberg, Y., & Dagan, I. (2021). Revisiting few-shot relation classification: Evaluation data and classification schemes. Transactions of the Association for Computational Linguistics, 9, 691–706. https://doi.org/10.1162/tacl_a_00392