Universal precautions required; Artificial intelligence takes on the Australian Medical Council’s trial examination


Abstract

Background and objective: The potential of artificial intelligence in medical practice is increasingly being investigated. This study aimed to examine OpenAI's ChatGPT in answering medical multiple choice questions (MCQs) in an Australian context.

Methods: We provided MCQs from the Australian Medical Council's (AMC) medical licensing practice examination to ChatGPT. The chatbot's responses were graded using the AMC's online portal. The experiment was repeated twice.

Results: ChatGPT was moderately accurate in answering the questions, achieving a score of 29/50. It was able to generate answer explanations for most questions (45/50). The chatbot was moderately consistent, providing the same overall answer to 40 of the 50 questions between trial runs.

Discussion: The moderate accuracy of ChatGPT demonstrates potential risks for both patients and physicians using this tool. Further research is required to create more accurate models and to critically appraise such models.

Citation (APA)
Kleinig, O., Kovoor, J. G., Gupta, A. K., & Bacchi, S. (2023). Universal precautions required; Artificial intelligence takes on the Australian Medical Council’s trial examination. Australian Journal of General Practice, 52(12), 863–865. https://doi.org/10.31128/AJGP-02-23-6708
