Predictive models are increasingly used to make consequential decisions in high-stakes domains such as healthcare, finance, and policy. It is therefore critical to ensure that these models make accurate predictions, are robust to shifts in the data, do not rely on spurious features, and do not unduly discriminate against minority groups. To this end, several approaches spanning areas such as explainability, fairness, and robustness have been proposed in recent literature. Such approaches need to be human-centered, as they are meant to help users understand the models. However, there is little to no research on understanding the needs and challenges in monitoring deployed machine learning (ML) models from a human-centric perspective. To address this gap, we conducted semi-structured interviews with 13 practitioners experienced in deploying ML models and engaging with customers across domains such as financial services, healthcare, hiring, online retail, computational advertising, and conversational assistants. We identified various human-centric challenges and requirements for model monitoring in real-world applications. Specifically, we found that relevant stakeholders want model monitoring systems to provide clear, unambiguous, and easy-to-understand insights that are readily actionable. Furthermore, our study revealed that stakeholders desire customization of model monitoring systems to cater to domain-specific use cases.
Citation
Shergadwala, M. N., Lakkaraju, H., & Kenthapadi, K. (2022). A Human-Centric Perspective on Model Monitoring. In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing (Vol. 10, pp. 173–183). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/hcomp.v10i1.21997