Abstract
Objectives: We aimed to demonstrate the importance of establishing best practices in large language model (LLM) research, using repeat prompting as an illustrative example.

Materials and Methods: Using data from a prior study investigating potential model bias in peer review of medical abstracts, we compared methods that ignore the correlation in model outputs induced by repeated prompting with a random effects method that accounts for this correlation.

Results: Repeatedly prompting the model produced high within-group correlation, with an intraclass correlation coefficient of 0.69. Ignoring this inherent correlation led to an over 100-fold inflation of the effective sample size. After appropriately accounting for it, the original study's result reverses from a small but highly significant finding to no evidence of model bias.

Discussion: The establishment of best practices for LLM research is urgently needed, as this case demonstrates: accounting for repeat prompting in the analysis was critical to reaching accurate study conclusions.
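For intuition on how repeat prompting inflates an apparent sample size, the standard Kish cluster-sampling design effect relates the intraclass correlation to the effective sample size when each item is prompted repeatedly. The sketch below is a generic illustration, not a calculation from the paper; the number of repeats m is an assumed value.

```latex
% Kish design effect for n items, each prompted m times, with
% intraclass correlation \rho among the repeated outputs:
\[
  \mathrm{DEFF} = 1 + (m - 1)\,\rho,
  \qquad
  n_{\mathrm{eff}} = \frac{n\,m}{\mathrm{DEFF}}.
\]
% Illustrative only (m is assumed, not taken from the study):
% with \rho = 0.69 and m = 150 repeats per item,
% DEFF = 1 + 149 \times 0.69 \approx 103.8, so treating all n m
% outputs as independent overstates the effective sample size
% by roughly two orders of magnitude.
```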
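A minimal sketch of one way to implement the random-effects approach the abstract describes, contrasted with the naive analysis. This is not the authors' code: the file name, column names, and grouping variable are illustrative assumptions for a long-format table with one row per model output.

```python
# Sketch (not the authors' code): compare a naive regression that treats
# every repeated prompt as independent with a linear mixed model that
# uses a random intercept per abstract to absorb within-abstract
# correlation. Column names below are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# Long format: one row per model output.
# Assumed columns: abstract_id, group (the factor tested for bias), score
df = pd.read_csv("llm_review_scores.csv")  # hypothetical file

# Naive model: ignores correlation among repeats of the same abstract,
# so each repeated prompt is (wrongly) counted as an independent sample.
naive = smf.ols("score ~ group", data=df).fit()

# Random-effects model: a random intercept per abstract accounts for the
# within-abstract correlation induced by repeat prompting.
mixed = smf.mixedlm("score ~ group", data=df,
                    groups=df["abstract_id"]).fit()

print(naive.summary())
print(mixed.summary())

# Intraclass correlation from the fitted variance components:
# ICC = var(random intercept) / (var(random intercept) + residual var)
var_re = mixed.cov_re.iloc[0, 0]
icc = var_re / (var_re + mixed.scale)
print(f"ICC = {icc:.2f}")
```

With strong within-abstract correlation (the paper reports an ICC of 0.69), the mixed model's standard errors for the group effect will be substantially larger than the naive model's, which is what can turn an apparently significant result into a null one.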
Citation
Gallo, R. J., Baiocchi, M., Savage, T. R., & Chen, J. H. (2025). Establishing best practices in large language model research: an application to repeat prompting. Journal of the American Medical Informatics Association, 32(2), 386–390. https://doi.org/10.1093/jamia/ocae294