Learning-to-rank and relevance feedback for literature appraisal in empirical medicine


Abstract

The constantly expanding medical libraries contain immense amounts of information, including evidence from healthcare research. Gathering and interpreting this evidence can be both challenging and time-consuming for researchers conducting systematic reviews. Technologically assisted review (TAR) aims to support this process by finding as much relevant information as possible with the least effort. Toward this goal, we present an incremental learning method that ranks previously retrieved documents, automating the process of title and abstract screening. Our approach combines a learning-to-rank model trained across multiple reviews with a model focused on the given review, trained incrementally from relevance feedback. The classifiers use as features several similarity metrics between the documents and the research topic, such as Levenshtein distance, cosine similarity and BM25, as well as vectors derived from word embedding methods such as Word2Vec and Doc2Vec. We evaluate our approach on the dataset provided by Task II of CLEF eHealth 2017 and empirically compare it with other approaches that participated in the task.
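To make the feature design described above more concrete, below is a minimal sketch (not the authors' implementation) of how topic-document similarity features such as TF-IDF cosine similarity, BM25 and Levenshtein distance might be computed before being fed to the ranking classifiers. The `rank_bm25` dependency, the helper names, and the exact feature set are assumptions for illustration; the Word2Vec/Doc2Vec embedding features mentioned in the abstract are omitted for brevity.

```python
# Sketch of topic-document similarity features (illustrative, not the paper's code).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from rank_bm25 import BM25Okapi  # third-party BM25 implementation (assumed dependency)


def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]


def similarity_features(topic: str, documents: list[str]) -> list[dict]:
    """Return one feature dict per candidate document, relative to the review topic."""
    tfidf = TfidfVectorizer().fit([topic] + documents)
    topic_vec = tfidf.transform([topic])
    doc_vecs = tfidf.transform(documents)

    bm25 = BM25Okapi([d.lower().split() for d in documents])
    bm25_scores = bm25.get_scores(topic.lower().split())

    return [
        {
            "cosine_tfidf": float(cosine_similarity(topic_vec, doc_vecs[i])[0, 0]),
            "bm25": float(bm25_scores[i]),
            # In practice an edit-distance feature would likely be restricted to
            # titles; computing it over full abstracts is expensive.
            "levenshtein": levenshtein(topic.lower(), doc.lower()),
        }
        for i, doc in enumerate(documents)
    ]
```

Feature vectors of this kind could then serve as input both to a learning-to-rank model trained across reviews and to a per-review classifier updated incrementally as relevance feedback arrives, in the spirit of the combined approach the abstract describes.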

Citation (APA)

Lagopoulos, A., Anagnostou, A., Minas, A., & Tsoumakas, G. (2018). Learning-to-rank and relevance feedback for literature appraisal in empirical medicine. In Lecture Notes in Computer Science (Vol. 11018, pp. 52–63). Springer. https://doi.org/10.1007/978-3-319-98932-7_5
