Are Neural Ranking Models Robust?


Abstract

Recently, we have witnessed the bloom of neural ranking models in the information retrieval (IR) field. So far, much effort has been devoted to developing effective neural ranking models that generalize well on new data, while less attention has been paid to their robustness. Unlike effectiveness, which concerns a system's average performance under normal use, robustness concerns its performance in the worst case or under malicious operations. When a new technique enters real-world applications, it is critical to know not only how it works on average, but also how it would behave in abnormal situations. We therefore raise the question in this work: Are neural ranking models robust? To answer this question, we first clarify what we mean by the robustness of ranking models in IR. We show that robustness is a multi-dimensional concept that can be defined in three ways in IR: (1) performance variance under the independent and identically distributed (I.I.D.) setting; (2) out-of-distribution (OOD) generalizability; and (3) defensive ability against adversarial operations. The latter two definitions can each be further specified into two perspectives, leading to five robustness tasks in total. Based on this taxonomy, we build corresponding benchmark datasets, design empirical experiments, and systematically analyze the robustness of several representative neural ranking models against traditional probabilistic ranking models and learning-to-rank (LTR) models. The empirical results show that there is no simple answer to our question: while neural ranking models are less robust than other IR models in most cases, some of them can still win two out of five tasks. This is the first comprehensive study of the robustness of neural ranking models. We believe the way we study robustness, as well as our findings, will benefit the IR community.
We will also release all the data and code to facilitate future research in this direction.



Citation (APA)

Wu, C., Zhang, R., Guo, J., Fan, Y., & Cheng, X. (2022). Are Neural Ranking Models Robust? ACM Transactions on Information Systems, 41(2). https://doi.org/10.1145/3534928
