The emergence of machine learning as a society-changing technology in the past decade has triggered concerns about people's inability to understand the reasoning of increasingly complex models. The field of interpretable machine learning (IML) grew out of these concerns, with the goal of empowering various stakeholders to tackle use cases such as building trust in models, debugging models, and informing real-world human decision-making.