Explainability in Mechanism Design: Recent Advances and the Road Ahead

Abstract

Designing and implementing explainable systems is seen as the next step towards increasing user trust in, acceptance of, and reliance on Artificial Intelligence (AI) systems. While explaining choices made by black-box algorithms such as machine learning and deep learning has occupied most of the limelight, systems that attempt to explain decisions (even simple ones) in the context of social choice are steadily catching up. In this paper, we provide a comprehensive survey of explainability in mechanism design, a domain characterized by economically motivated agents and often having no single choice that maximizes all individual utility functions. We discuss the main properties and goals of explainability in mechanism design, distinguishing them from those of Explainable AI in general. This discussion is followed by a thorough review of the challenges one may face when working on Explainable Mechanism Design, and we propose solution concepts for addressing them.

Citation (APA)

Suryanarayana, S. A., Sarne, D., & Kraus, S. (2022). Explainability in Mechanism Design: Recent Advances and the Road Ahead. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13442 LNAI, pp. 364–382). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-20614-6_21