Semi-automating abstract screening with a natural language model pretrained on biomedical literature

Abstract

We demonstrate the performance and workload impact of incorporating a natural language model, pretrained on biomedical literature citations, into an abstract-screening workflow for studies on prognostic factors in end-stage lung disease. The model was optimized on one-third of the abstracts, and its performance on the remaining abstracts was reported. Model performance, in terms of sensitivity, precision, F1 and inter-rater agreement, was moderate compared with other published models. However, incorporating it into the screening workflow, with the second reviewer screening only abstracts with conflicting decisions, translated into a 65% reduction in the number of abstracts screened by the second reviewer. Subsequent work will incorporate the pretrained BERT model into screening workflows for other studies prospectively, as well as improve model performance.
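The workload calculation described above can be sketched in a few lines. This is a minimal illustration, not the authors' code: it assumes the model acts as a second screener, so a human second reviewer need only resolve abstracts where the model and the first reviewer disagree. All labels below are made-up toy data, and the function names are hypothetical.

```python
def workload_reduction(reviewer1, model):
    """Fraction of abstracts the second reviewer can skip:
    those where the model's include/exclude decision agrees
    with the first reviewer's."""
    conflicts = sum(1 for r, m in zip(reviewer1, model) if r != m)
    return 1 - conflicts / len(reviewer1)

def sensitivity_precision_f1(truth, pred):
    """Standard screening metrics (1 = include, 0 = exclude)."""
    tp = sum(1 for t, p in zip(truth, pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(truth, pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(truth, pred) if t == 1 and p == 0)
    sens = tp / (tp + fn) if tp + fn else 0.0
    prec = tp / (tp + fp) if tp + fp else 0.0
    f1 = 2 * prec * sens / (prec + sens) if prec + sens else 0.0
    return sens, prec, f1

# Toy example: the model conflicts with reviewer 1 on 2 of 10
# abstracts, so the second reviewer screens only those 2 (an 80%
# reduction in this fabricated case; the paper reports 65%).
r1    = [1, 0, 1, 0, 0, 1, 0, 0, 0, 1]
model = [1, 0, 0, 0, 0, 1, 1, 0, 0, 1]
print(workload_reduction(r1, model))  # → 0.8
```

The same helpers could be applied to a held-out test split (the remaining two-thirds of abstracts in the study's design) to reproduce the sensitivity, precision, and F1 figures alongside the workload estimate.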


Citation (APA)

Ng, S. H. X., Teow, K. L., Ang, G. Y., Tan, W. S., & Hum, A. (2023, December 1). Semi-automating abstract screening with a natural language model pretrained on biomedical literature. Systematic Reviews. BioMed Central Ltd. https://doi.org/10.1186/s13643-023-02353-8
