Evaluating large language models on multimodal chemistry olympiad exams

  • Cui Y
  • Yao X
  • Qin Y
  • et al.
Citations: N/A
Mendeley readers: 19

This article is free to access.

Abstract

Multimodal scientific reasoning remains a significant challenge for large language models (LLMs), particularly in chemistry, where problem-solving relies on symbolic diagrams, molecular structures, and structured visual data. Here, we systematically evaluate 40 proprietary and open-source multimodal LLMs (MLLMs), including GPT-5, o3, Gemini-2.5-Pro, and Qwen2.5-VL, on a curated benchmark of Olympiad-style chemistry questions drawn from over two decades of U.S. National Chemistry Olympiad (USNCO) exams. These questions require integrated visual and textual reasoning across diverse modalities. We find that many models struggle with modality fusion: in some cases, removing the image even improves accuracy, indicating misalignment in vision-language integration. Chain-of-Thought prompting consistently enhances both accuracy and visual grounding, as demonstrated through ablation studies and occlusion-based interpretability. Our results reveal critical limitations in the scientific reasoning abilities of current MLLMs and provide actionable strategies for developing more robust and interpretable multimodal systems in chemistry. This work offers a timely benchmark for measuring progress in domain-specific multimodal AI and underscores the need for further advances at the intersection of artificial intelligence and scientific reasoning.
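
The abstract describes an image-ablation protocol (answering each question with and without its figure) and Chain-of-Thought prompting. As a rough illustration of that setup, here is a minimal Python sketch, not taken from the paper: `Question`, `accuracy`, `run_ablation`, the CoT suffix, and the `ask_model` callable are all hypothetical placeholders for whatever benchmark loader and model API one actually uses.

```python
"""Minimal sketch of the image/CoT ablation described in the abstract.
This is NOT the authors' code; the model client is a placeholder."""

from dataclasses import dataclass
from typing import Callable, Optional, Sequence

# Assumed Chain-of-Thought cue; the paper's exact prompt may differ.
COT_SUFFIX = "Think step by step, then state the final answer letter."

@dataclass
class Question:
    prompt: str                # question text plus answer choices
    image_path: Optional[str]  # path to the figure, or None
    answer: str                # gold choice letter, e.g. "C"

def accuracy(questions: Sequence[Question],
             ask_model: Callable[[str, Optional[str]], str],
             use_image: bool,
             use_cot: bool) -> float:
    """Fraction answered correctly under one ablation condition."""
    correct = 0
    for q in questions:
        prompt = q.prompt + ("\n" + COT_SUFFIX if use_cot else "")
        image = q.image_path if use_image else None   # image ablation
        reply = ask_model(prompt, image)
        # Naive answer extraction: last standalone choice letter in the reply.
        letters = [tok.strip(".()") for tok in reply.upper().split()
                   if tok.strip(".()") in {"A", "B", "C", "D"}]
        predicted = letters[-1] if letters else ""
        correct += predicted == q.answer.upper()
    return correct / len(questions)

def run_ablation(questions: Sequence[Question],
                 ask_model: Callable[[str, Optional[str]], str]) -> None:
    """Report accuracy for the four image x CoT conditions."""
    for use_image in (True, False):
        for use_cot in (False, True):
            acc = accuracy(questions, ask_model, use_image, use_cot)
            print(f"image={use_image!s:<5} cot={use_cot!s:<5} accuracy={acc:.3f}")

if __name__ == "__main__":
    # Toy run with a stubbed model that always answers "C".
    demo = [Question("Which gas is evolved? (A) H2 (B) O2 (C) CO2 (D) N2",
                     "figure1.png", "C")]
    run_ablation(demo, ask_model=lambda prompt, image: "The answer is (C).")
```

Comparing the four conditions this way surfaces the failure mode the authors report: if text-only accuracy exceeds accuracy with the image present, vision-language fusion is hurting rather than helping.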

Citation (APA)

Cui, Y., Yao, X., Qin, Y., Li, X., Wang, S., & Hu, G. (2025). Evaluating large language models on multimodal chemistry olympiad exams. Communications Chemistry, 8(1), 402. https://doi.org/10.1038/s42004-025-01782-x
