AI for Social Good (AI4SG) has been advocated as a way to address social impact problems using emerging technologies, but little research has examined practitioners' motivations for building these tools or how practitioners make such tools understandable to stakeholders and end users, e.g., by leveraging techniques such as explainable AI (XAI). In this study, we interviewed 12 AI4SG practitioners to understand their experiences developing social impact technologies and their perceptions of XAI, focusing on projects in the Global South. While most of our participants were aware of XAI, many did not incorporate these techniques due to a lack of domain expertise, difficulty fitting XAI into their existing workflows, and the perception that XAI offers limited value for end users with low levels of AI and digital literacy. Our work reflects on the shortcomings of XAI for real-world use and advocates for a reimagined agenda for human-centered explainability research.
Citation:
Okolo, C. T., & Lin, H. (2024). “You can’t build what you don’t understand”: Practitioner Perspectives on Explainable AI in the Global South. In Conference on Human Factors in Computing Systems - Proceedings. Association for Computing Machinery. https://doi.org/10.1145/3613905.3651080