A Case Study of Integrating Fairness Visualization Tools in Machine Learning Education

Abstract

As demonstrated by media attention and research, Artificial Intelligence systems do not adequately address issues of fairness and bias, and more education on these topics is needed in both industry and higher education. Computer science courses that cover AI fairness and bias currently either focus on statistical analysis or bring in philosophical perspectives that lack actionable takeaways for students. Building on long-standing pedagogical research demonstrating the importance of tools and visualizations for reinforcing student learning, this case study reports on the impact of using publicly available visualization tools from HCI practice as a resource for students examining algorithmic fairness concepts. Through qualitative review and observations of four focus groups, we examined six open-source fairness tools that enable students to visualize, quantify, and explore algorithmic biases. The findings of this study provide insights into the benefits, challenges, and opportunities of integrating fairness tools into machine learning education.
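As an illustration of the kind of bias quantification the abstract refers to, here is a minimal sketch of one common fairness measure, demographic parity difference (the gap in positive-prediction rates between groups). The paper does not specify this metric or any code; the function and example data below are hypothetical.

```python
# Hypothetical example: demographic parity difference, one common measure
# of the algorithmic bias that fairness tools visualize and quantify.
def demographic_parity_difference(predictions, groups):
    """Return the max gap in positive-prediction rate across groups."""
    counts = {}  # group -> (total, positives)
    for pred, grp in zip(predictions, groups):
        total, positives = counts.get(grp, (0, 0))
        counts[grp] = (total + 1, positives + (1 if pred == 1 else 0))
    rates = [positives / total for total, positives in counts.values()]
    return max(rates) - min(rates)

# Toy data: group "a" receives positive predictions at 3/4,
# group "b" at 1/4, so the disparity is 0.5.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
grps = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, grps))  # 0.5
```

A value of 0 would indicate equal positive-prediction rates across groups; open-source toolkits typically report this alongside other metrics and interactive visualizations.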


Citation (APA)

Mashhadi, A., Zolyomi, A., & Quedado, J. (2022). A Case Study of Integrating Fairness Visualization Tools in Machine Learning Education. In Conference on Human Factors in Computing Systems - Proceedings. Association for Computing Machinery. https://doi.org/10.1145/3491101.3503568
