PaRaDe: Passage Ranking using Demonstrations with Large Language Models

arXiv: 2310.14408

Abstract

Recent studies show that large language models (LLMs) can be instructed to effectively perform zero-shot passage re-ranking, in which the results of a first-stage retrieval method, such as BM25, are rated and reordered to improve relevance. In this work, we improve LLM-based re-ranking by algorithmically selecting few-shot demonstrations to include in the prompt. Our analysis investigates the conditions under which demonstrations are most helpful, and shows that adding even one demonstration is significantly beneficial. We propose a novel demonstration selection strategy based on difficulty rather than the commonly used semantic similarity. Furthermore, we find that demonstrations helpful for ranking are also effective at question generation. We hope our work will spur more principled research into question generation and passage ranking.
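To make the pipeline the abstract describes concrete, below is a minimal Python sketch of few-shot prompted re-ranking with difficulty-based demonstration selection. Everything in it is an illustrative assumption rather than the paper's implementation: the prompt template, the pointwise query-generation scoring, the pre-computed difficulty field, and the llm_log_likelihood stub (which a real system would replace with an actual LLM log-probability call) are all hypothetical.

from dataclasses import dataclass


@dataclass
class Demonstration:
    query: str
    passage: str
    difficulty: float  # hypothetical pre-computed difficulty score


def llm_log_likelihood(prompt: str, continuation: str) -> float:
    # Stand-in for a real LLM scoring call. This crude lexical-overlap
    # proxy exists only so the sketch runs end to end; a real system
    # would return log P(continuation | prompt) from an LLM.
    cont = set(continuation.lower().split())
    return len(cont & set(prompt.lower().split())) / max(len(cont), 1)


def select_demonstrations(pool: list[Demonstration], k: int = 1) -> list[Demonstration]:
    # Pick the k most difficult demonstrations, rather than the most
    # semantically similar ones (the abstract's key selection idea).
    return sorted(pool, key=lambda d: d.difficulty, reverse=True)[:k]


def rerank(query: str, passages: list[str], demos: list[Demonstration]) -> list[str]:
    # Score each first-stage passage by how likely the model is to
    # generate the query from it, conditioned on the demonstrations,
    # then reorder the passages by that score.
    prefix = "".join(f"Passage: {d.passage}\nQuery: {d.query}\n\n" for d in demos)
    scored = [
        (llm_log_likelihood(f"{prefix}Passage: {p}\nQuery:", f" {query}"), p)
        for p in passages
    ]
    return [p for _, p in sorted(scored, reverse=True)]


if __name__ == "__main__":
    pool = [
        Demonstration(
            query="who wrote hamlet",
            passage="Hamlet is a tragedy written by William Shakespeare.",
            difficulty=0.9,  # hypothetical score, not from the paper
        ),
    ]
    demos = select_demonstrations(pool, k=1)
    candidates = [
        "Lyon is a large city in France.",
        "Paris is the capital of France.",
    ]
    print(rerank("capital of france", candidates, demos))

With the stub scorer, the example reorders the Paris passage ahead of the Lyon one; the structure of the loop (demonstration prefix, per-passage scoring, sort) is the part the sketch is meant to illustrate, not the scoring function itself.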

Cite

APA

Drozdov, A., Zhuang, H., Dai, Z., Qin, Z., Rahimi, R., Wang, X., … Hui, K. (2023). PaRaDe: Passage Ranking using Demonstrations with Large Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 14242–14252). Association for Computational Linguistics (ACL).
