Algorithmic decision-making systems are being successfully adopted in a wide range of domains for diverse tasks. While the potential benefits of algorithmic decision-making are many, the importance of trusting these systems has only recently attracted attention. There is growing concern that these systems are complex, opaque, and non-intuitive, and hence difficult to trust. There has been a recent resurgence of interest in explainable artificial intelligence (XAI), which aims to reduce the opacity of a model by explaining its behavior, its predictions, or both, thus allowing humans to scrutinize and trust the model. In recent years, a host of technical advances have been made and several explanation methods have been proposed that address the problem of model explainability and transparency. In this tutorial, we will present these novel explanation approaches, characterize their strengths and limitations, position existing work with respect to the database (DB) community, and enumerate opportunities for data management research in the context of XAI.
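As a concrete illustration of what "explaining a prediction" can mean in practice, the sketch below computes a simple perturbation-based feature attribution for a single prediction of a scikit-learn classifier. This is a minimal, generic example, not material from the tutorial itself: the occlusion_attribution helper and the choice of dataset and model are our own assumptions, and established explanation methods such as LIME and SHAP formalize this idea far more carefully.

```python
# Minimal sketch of a perturbation-based local explanation (assumed example,
# not the tutorial's method): score each feature of one instance by how much
# the predicted positive-class probability changes when that feature is
# replaced by its dataset mean.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
X, y, feature_names = data.data, data.target, data.feature_names

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def occlusion_attribution(model, X, instance):
    """Return one attribution score per feature for a single instance."""
    baseline = model.predict_proba(instance.reshape(1, -1))[0, 1]
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        perturbed = instance.copy()
        perturbed[j] = X[:, j].mean()  # "occlude" feature j with its mean value
        scores[j] = baseline - model.predict_proba(perturbed.reshape(1, -1))[0, 1]
    return scores

scores = occlusion_attribution(model, X, X[0])
for j in np.argsort(-np.abs(scores))[:5]:
    print(f"{feature_names[j]}: {scores[j]:+.3f}")
```

The ranked features printed at the end are the kind of per-prediction output that explanation methods expose for human scrutiny, which is precisely the scrutiny the abstract argues is needed for trust.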
Pradhan, R., Lahiri, A., Galhotra, S., & Salimi, B. (2022). Explainable AI: Foundations, Applications, Opportunities for Data Management Research. In Proceedings of the ACM SIGMOD International Conference on Management of Data (pp. 2452–2457). Association for Computing Machinery. https://doi.org/10.1145/3514221.3522564