The Hidden Rules of Hanabi: How Humans Outperform AI Agents

Abstract

Games that feature multiple players, limited communication, and partial information are particularly challenging for AI agents. In the cooperative card game Hanabi, which possesses all of these attributes, AI agents fail to achieve scores comparable to even first-time human players. Through an observational study of three mixed-skill Hanabi play groups, we identify the techniques used by humans that help to explain their superior performance relative to AI agents. These concern physical artefact manipulation, coordination play, role establishment, and continual rule negotiation. Our findings extend previous accounts of human performance in Hanabi, which are framed purely in terms of theory-of-mind reasoning, by revealing more precisely how this form of collective decision-making is enacted in skilled human play. Our interpretation points to a gap in the current capabilities of AI agents to perform cooperative tasks.

Citation (APA)

Sidji, M., Smith, W., & Rogerson, M. J. (2023). The Hidden Rules of Hanabi: How Humans Outperform AI Agents. In Conference on Human Factors in Computing Systems - Proceedings. Association for Computing Machinery. https://doi.org/10.1145/3544548.3581550

Readers' Seniority

PhD / Post grad / Masters / Doc: 4 (44%)
Professor / Associate Prof.: 2 (22%)
Researcher: 2 (22%)
Lecturer / Post doc: 1 (11%)

Readers' Discipline

Computer Science: 7 (78%)
Engineering: 1 (11%)
Social Sciences: 1 (11%)
