Privacy-Preserving and Trustless Verifiable Fairness Audit of Machine Learning Models


Abstract

In the big data era, machine learning has developed rapidly and is widely deployed in real-world systems. Yet machine learning raises fairness concerns: models can discriminate against groups defined by sensitive attributes such as gender and race. Many researchers have focused on developing fairness audit techniques for machine learning models that enable users to protect themselves from discrimination. Existing solutions, however, rely on additional external trust assumptions, either on third-party entities or on external components, which significantly weakens their security. In this study, we propose a trustless verifiable fairness audit framework that assesses the fairness of ML models while addressing potential security issues such as data privacy, model secrecy, and trustworthiness. Leveraging the succinctness and non-interactivity of zero-knowledge proofs, our framework not only guarantees audit integrity but also clearly enhances security, enabling fair ML models to be publicly auditable and allowing any client to verify audit results without extra trust assumptions. Our evaluation on various machine learning models and real-world datasets shows that our framework achieves practical performance.
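The abstract does not specify which fairness metric the framework audits; as a minimal illustrative sketch, the snippet below computes demographic parity difference, one common group-fairness metric such an audit might attest to (the function name and the binary-sensitive-attribute setting are assumptions for illustration, not the paper's method):

```python
def demographic_parity_diff(predictions, sensitive):
    """Absolute gap in positive-prediction rates between two groups
    defined by a binary sensitive attribute (0 or 1)."""
    groups = {0: [], 1: []}
    for pred, attr in zip(predictions, sensitive):
        groups[attr].append(pred)
    rate = lambda xs: sum(xs) / len(xs)  # fraction of positive predictions
    return abs(rate(groups[0]) - rate(groups[1]))

# Toy audit: group 0 receives positives at rate 0.75, group 1 at rate 0.25.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
sens  = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_diff(preds, sens))  # 0.5
```

In the trustless setting described by the paper, a statement like "this metric is below a threshold" would be proven in zero knowledge over committed data rather than computed in the clear as above.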


Citation (APA)

Tang, G., Tan, W., & Cai, M. (2023). Privacy-Preserving and Trustless Verifiable Fairness Audit of Machine Learning Models. International Journal of Advanced Computer Science and Applications, 14(2), 822–832. https://doi.org/10.14569/IJACSA.2023.0140294
