Improving Multitask Retrieval by Promoting Task Specialization

Abstract

In multitask retrieval, a single retriever is trained to retrieve relevant contexts for multiple tasks. Despite its practical appeal, naive multitask retrieval lags behind task-specific retrieval, in which a separate retriever is trained for each task. We show that it is possible to train a multitask retriever that outperforms task-specific retrievers by promoting task specialization. The main ingredients are: (1) a better choice of pretrained model—one that is explicitly optimized for multitasking—along with compatible prompting, and (2) a novel adaptive learning method that encourages each parameter to specialize in a particular task. The resulting multitask retriever is highly performant on the KILT benchmark. Upon analysis, we find that the model indeed learns parameters that are more task-specialized compared to naive multitasking without prompting or adaptive learning.
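
To make the two ingredients concrete, here is a minimal, hypothetical PyTorch sketch of (1) task-prefixed prompting of queries and (2) an adaptive update in which each parameter is pulled hardest by the task whose gradient dominates it. The task names, the toy encoder, the placeholder loss, and the softmax weighting rule below are all illustrative assumptions, not the authors' implementation; consult the paper for the actual method.

    import torch
    import torch.nn as nn

    TASKS = ["qa", "fact_checking", "entity_linking"]  # illustrative task names

    def prompt(task: str, query: str) -> str:
        # Prefix the query with its task so a multitask-pretrained encoder
        # can condition on it. The exact prompt format is an assumption.
        return f"{task}: {query}"

    # Toy stand-in for a pretrained dense-retriever encoder.
    model = nn.Linear(16, 16, bias=False)

    def task_loss(task_idx: int) -> torch.Tensor:
        # Placeholder per-task loss; real training would use an in-batch
        # contrastive loss over prompted-query and passage embeddings.
        x = torch.randn(8, 16)
        return model(x).pow(2).mean() * (task_idx + 1)

    # Collect per-task gradients for every parameter.
    task_grads = []
    for i, _ in enumerate(TASKS):
        model.zero_grad()
        task_loss(i).backward()
        task_grads.append({n: p.grad.clone() for n, p in model.named_parameters()})

    # Adaptive step: for each parameter, weight every task's gradient by how
    # strongly that task pulls on it (softmax over per-task gradient
    # magnitudes), so parameters drift toward the task they are most
    # sensitive to, rather than averaging all tasks uniformly.
    lr = 1e-2
    with torch.no_grad():
        for n, p in model.named_parameters():
            g = torch.stack([tg[n] for tg in task_grads])  # (num_tasks, *param_shape)
            w = torch.softmax(g.abs(), dim=0)              # per-parameter task affinity
            p -= lr * (w * g).sum(dim=0)

    print(prompt("qa", "who wrote Hamlet?"))  # -> "qa: who wrote Hamlet?"

The contrast with naive multitasking is the update rule: a naive multitask retriever would apply the uniform average of the task gradients to every parameter, whereas the weighted sum above lets different parameters respond predominantly to different tasks.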

Cite

APA

Zhang, W., Xiong, C., Stratos, K., & Overwijk, A. (2023). Improving Multitask Retrieval by Promoting Task Specialization. Transactions of the Association for Computational Linguistics, 11, 1201–1212. https://doi.org/10.1162/tacl_a_00597
