Large Language Model as Unsupervised Health Information Retriever


Abstract

Health information retrieval is the task of searching for health-related information across a variety of sources. Gathering self-reported health information may help enrich the body of knowledge about a disease and its symptoms. We investigated retrieving symptom mentions in COVID-19-related Twitter posts with a pretrained large language model (GPT-3) without providing any examples (zero-shot learning). We introduced a new performance measure, total match (TM), which includes exact, partial, and semantic matches. Our results show that the zero-shot approach is a powerful method that requires no annotated data, and it can assist in generating instances for few-shot learning, which may achieve better performance.
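The abstract's "total match" (TM) measure can be illustrated with a minimal sketch. The matching rules below are assumptions: exact match is string equality after normalization, partial match is token overlap, and semantic match is approximated with a toy synonym table; the paper's actual criteria may differ.

```python
# Hypothetical sketch of a total-match (TM) score combining exact,
# partial, and semantic matches. The synonym table and overlap rule
# are illustrative assumptions, not the authors' implementation.

# Toy symptom-synonym table standing in for semantic matching
SYNONYMS = {
    "fever": {"pyrexia", "high temperature"},
    "loss of smell": {"anosmia"},
}

def normalize(mention: str) -> str:
    """Lowercase and trim a symptom mention."""
    return mention.lower().strip()

def is_partial(pred: str, gold: str) -> bool:
    """Partial match: the two mentions share at least one token."""
    return bool(set(pred.split()) & set(gold.split()))

def is_semantic(pred: str, gold: str) -> bool:
    """Semantic match: one mention is a listed synonym of the other."""
    return gold in SYNONYMS.get(pred, set()) or pred in SYNONYMS.get(gold, set())

def total_match(predicted: list[str], gold: list[str]) -> float:
    """Fraction of predicted mentions with an exact, semantic, or partial match."""
    if not predicted:
        return 0.0
    matched = 0
    for p in map(normalize, predicted):
        for g in map(normalize, gold):
            if p == g or is_semantic(p, g) or is_partial(p, g):
                matched += 1
                break
    return matched / len(predicted)
```

For example, a zero-shot extraction of `["Fever", "anosmia", "dry cough"]` against gold mentions `["fever", "loss of smell", "cough"]` would score 1.0 under these rules: one exact, one semantic, and one partial match.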


Citation (APA)

Jiang, K., Mujtaba, M. M., & Bernard, G. R. (2023). Large Language Model as Unsupervised Health Information Retriever. In Studies in Health Technology and Informatics (Vol. 302, pp. 833–834). IOS Press BV. https://doi.org/10.3233/SHTI230282
