BERT Meets Cranfield: Uncovering the Properties of Full Ranking on Fully Labeled Data


Abstract

Recently, various information retrieval models based on pre-trained BERT models have been proposed, achieving outstanding performance. The majority of such models have been tested on collections with partial relevance labels, where many potentially relevant documents were never exposed to the annotators. Evaluating BERT-based rankers on such collections may therefore yield biased and unfair results, simply because a relevant document was not seen by the annotators when the collection was created. In our work, we aim to better understand the strengths of a BERT-based full ranker compared to a BERT-based re-ranker and the initial ranker. To this end, we investigate the performance of BERT-based rankers on the Cranfield collection, which comes with full relevance judgments on all documents in the collection. Our results demonstrate the effectiveness of the BERT-based full ranker, as opposed to the BERT-based re-ranker and BM25. Our analysis also shows that the BERT-based full ranker finds relevant documents that were not found by the initial ranker.
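The two setups the abstract contrasts can be sketched as follows. This is a minimal illustration, not the authors' code: `bm25_score` and `bert_score` are hypothetical stand-in scoring functions (a real system would use a BM25 index and a fine-tuned BERT cross-encoder). The key difference is that the re-ranker can only reorder the initial ranker's candidate pool, while the full ranker scores every document in the collection.

```python
# Sketch of re-ranking vs. full ranking (stand-in scorers, not the paper's code).

def rerank(query, docs, bm25_score, bert_score, pool_size):
    """Re-ranking: BM25 retrieves a candidate pool, BERT reorders only that pool."""
    pool = sorted(docs, key=lambda d: bm25_score(query, d), reverse=True)[:pool_size]
    return sorted(pool, key=lambda d: bert_score(query, d), reverse=True)

def full_rank(query, docs, bert_score):
    """Full ranking: BERT scores every document in the collection."""
    return sorted(docs, key=lambda d: bert_score(query, d), reverse=True)

def demo():
    docs = ["d1", "d2", "d3", "d4"]
    # Stand-in lexical (BM25-like) scores: d4 shares no terms with the query.
    lex = {"d1": 3.0, "d2": 2.0, "d3": 1.0, "d4": 0.0}
    # Stand-in semantic (BERT-like) scores: d4 is actually the most relevant.
    sem = {"d1": 0.2, "d2": 0.1, "d3": 0.3, "d4": 0.9}
    bm25_score = lambda q, d: lex[d]
    bert_score = lambda q, d: sem[d]
    reranked = rerank("q", docs, bm25_score, bert_score, pool_size=3)
    full = full_rank("q", docs, bert_score)
    return reranked, full
```

In this toy example, the relevant document `d4` falls outside the BM25 pool and can never be recovered by the re-ranker, whereas the full ranker places it first, mirroring the paper's observation that the full ranker finds documents the initial ranker misses.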

Citation (APA)

Ghasemi, N., & Hiemstra, D. (2021). BERT meets Cranfield: Uncovering the properties of full ranking on fully labeled data. In EACL 2021 - 16th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Student Research Workshop (pp. 58–64). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.eacl-srw.9
