Impact of Sample Selection on In-Context Learning for Entity Extraction from Scientific Writing

Abstract

Prompt-based usage of Large Language Models (LLMs) is an increasingly popular way to tackle many well-known natural language problems. This trend is due, in part, to the appeal of the In-Context Learning (ICL) prompt setup, in which a few selected training examples are provided along with the inference request. ICL, a type of few-shot learning, is especially attractive for natural language processing (NLP) tasks in specialised domains, such as entity extraction from scientific documents, where annotation is very costly because it requires expert annotators. In this paper, we present a comprehensive analysis of in-context sample selection methods for entity extraction from scientific documents using GPT-3.5 and compare these results against a fully supervised transformer-based baseline. Our results indicate that the effectiveness of the in-context sample selection methods is heavily domain-dependent, but the improvements are more notable for problems with a larger number of entity types. A more in-depth analysis shows that ICL is more effective for low-resource setups of scientific information extraction.
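
To make the ICL setup concrete, below is a minimal sketch of similarity-based sample selection and few-shot prompt construction. The annotated pool, entity labels, and Jaccard scoring here are illustrative assumptions for exposition; the paper evaluates its own set of selection methods, which may differ.

```python
# Illustrative sketch of ICL prompt construction with similarity-based
# sample selection. The pool, labels, and scoring below are hypothetical.

def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity between two sentences."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def select_examples(query: str, pool: list[dict], k: int = 3) -> list[dict]:
    """Pick the k annotated examples most similar to the query sentence."""
    return sorted(pool, key=lambda ex: jaccard(query, ex["text"]),
                  reverse=True)[:k]

def build_prompt(query: str, examples: list[dict]) -> str:
    """Assemble a few-shot prompt: selected demonstrations, then the query."""
    lines = ["Extract the scientific entities from each sentence."]
    for ex in examples:
        lines.append(f"Sentence: {ex['text']}\nEntities: {ex['entities']}")
    lines.append(f"Sentence: {query}\nEntities:")
    return "\n\n".join(lines)

# Hypothetical annotated pool (the paper draws on scientific NER datasets).
pool = [
    {"text": "We fine-tune BERT on the SciERC corpus.",
     "entities": "BERT [Method], SciERC [Material]"},
    {"text": "Accuracy is reported on the test split.",
     "entities": "Accuracy [Metric]"},
    {"text": "The model uses a transformer encoder.",
     "entities": "transformer encoder [Method]"},
]

query = "We evaluate GPT-3.5 on entity extraction benchmarks."
prompt = build_prompt(query, select_examples(query, pool, k=2))
print(prompt)  # This string would be sent to the LLM (e.g., GPT-3.5).
```

The key design choice under study is the `select_examples` step: which demonstrations to retrieve for a given input, since the paper finds the effectiveness of selection methods to be heavily domain-dependent.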

Citation (APA)

Bölücü, N., Rybinski, M., & Wan, S. (2023). Impact of Sample Selection on In-Context Learning for Entity Extraction from Scientific Writing. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 5090–5107). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-emnlp.338
