Large language models (LLMs) have shown promise in reducing the time, costs, and errors associated with manual data extraction. A recent study demonstrated that LLMs outperformed natural language processing approaches in abstracting pathology report information. However, challenges remain, including the risks of weakened critical thinking, bias propagation, and hallucination, which may undermine the scientific method and disseminate inaccurate information. Adopting suitable guidelines (e.g., CANGARU) should be encouraged to ensure responsible LLM use.
CITATION STYLE
Kwong, J. C. C., Wang, S. C. Y., Nickel, G. C., Cacciamani, G. E., & Kvedar, J. C. (2024, December 1). The long but necessary road to responsible use of large language models in healthcare research. npj Digital Medicine. Nature Research. https://doi.org/10.1038/s41746-024-01180-y