Abstract
Advances in how we build and use software, specifically the integration of machine learning for decision making, have led to widespread concern around model and software fairness. We present fairkit-learn, an interactive Python toolkit designed to help data scientists reason about and understand model fairness. We outline how fairkit-learn supports model training, evaluation, and comparison, and describe the potential benefits of using fairkit-learn compared to state-of-the-art tools. Fairkit-learn is open source at https://go.gmu.edu/fairkit-learn/.
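To make the kind of evaluation the abstract describes concrete, the following is a minimal sketch of group-based fairness evaluation for a scikit-learn model. It does not use fairkit-learn's own API; the synthetic data, variable names, and the demographic-parity metric are illustrative assumptions only, meant to show the underlying idea of comparing a trained model's behavior across groups defined by a sensitive attribute.

# Illustrative sketch (not fairkit-learn's API): train a scikit-learn model and
# measure a simple group-fairness metric on held-out data. All data below is
# synthetic and the metric choice (demographic parity difference) is an assumption.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic dataset: two features plus a binary sensitive attribute (group 0 / group 1).
n = 2000
sensitive = rng.integers(0, 2, size=n)
features = rng.normal(size=(n, 2)) + sensitive[:, None] * 0.5
labels = (features[:, 0] + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

X = np.column_stack([features, sensitive])
X_train, X_test, y_train, y_test, s_train, s_test = train_test_split(
    X, labels, sensitive, test_size=0.3, random_state=0
)

# Train an ordinary scikit-learn classifier.
model = LogisticRegression().fit(X_train, y_train)
y_pred = model.predict(X_test)

# Demographic parity difference: gap in positive-prediction rates between the two groups.
rate_group0 = y_pred[s_test == 0].mean()
rate_group1 = y_pred[s_test == 1].mean()
print(f"Accuracy: {model.score(X_test, y_test):.3f}")
print(f"Demographic parity difference: {abs(rate_group0 - rate_group1):.3f}")

Reporting an accuracy metric alongside a fairness metric, as above, is the kind of trade-off comparison across candidate models that the toolkit is designed to support interactively.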
Citation
Johnson, B., & Brun, Y. (2022). Fairkit-learn: A Fairness Evaluation and Comparison Toolkit. In Proceedings - International Conference on Software Engineering (pp. 70–74). IEEE Computer Society. https://doi.org/10.1109/ICSE-Companion55297.2022.9793775