Privacy-Preserving and Trustless Verifiable Fairness Audit of Machine Learning Models

Abstract

In the big data era, machine learning has advanced rapidly and is widely deployed in real-world systems. Yet machine learning raises fairness concerns, as models can discriminate against groups defined by sensitive attributes such as gender and race. Many researchers have therefore focused on developing fairness audit techniques for machine learning models that enable users to protect themselves from discrimination. Existing solutions, however, rely on additional external trust assumptions, either on third-party entities or on external components, which significantly weakens their security. In this study, we propose a trustless verifiable fairness audit framework that assesses the fairness of ML models while addressing potential security issues such as data privacy, model secrecy, and trustworthiness. By leveraging the succinctness and non-interactivity of zero-knowledge proofs, our framework not only guarantees audit integrity but also clearly enhances security, enabling fair ML models to be publicly auditable and any client to verify audit results without extra trust assumptions. Our evaluation on various machine learning models and real-world datasets shows that our framework achieves practical performance.
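To make the audited quantity concrete, the following is a minimal Python sketch of the kind of group-fairness statistic such an audit could attest to. The abstract does not specify which fairness metric the framework verifies; demographic parity difference over a binary sensitive attribute is assumed here purely for illustration, and the zero-knowledge proof itself is not shown.

import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, sensitive: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between the two groups
    defined by a binary sensitive attribute (e.g., gender)."""
    rate_group0 = y_pred[sensitive == 0].mean()
    rate_group1 = y_pred[sensitive == 1].mean()
    return abs(rate_group0 - rate_group1)

# Hypothetical audit set: the auditor would prove, in zero knowledge, that this
# statistic computed over private data falls below an agreed fairness threshold.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
sensitive = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_difference(y_pred, sensitive))  # 0.5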

Cite

APA

Tang, G., Tan, W., & Cai, M. (2023). Privacy-Preserving and Trustless Verifiable Fairness Audit of Machine Learning Models. International Journal of Advanced Computer Science and Applications, 14(2), 822–832. https://doi.org/10.14569/IJACSA.2023.0140294
