Behind the Mask: Demographic bias in name detection for PII masking


Abstract

Many datasets contain personally identifiable information, or PII, which poses privacy risks to individuals. PII masking is commonly used to redact personal information such as names, addresses, and phone numbers from text data. Most modern PII masking pipelines involve machine learning algorithms. However, these systems may vary in performance, such that individuals from particular demographic groups bear a higher risk of having their personal information exposed. In this paper, we evaluate the performance of three off-the-shelf PII masking systems on name detection and redaction. We generate data using names and templates from the customer service domain. We find that an open-source RoBERTa-based system shows fewer disparities than the commercial models we test. However, all systems demonstrate significant differences in error rate based on demographics. In particular, the highest error rates occurred for names associated with Black and Asian/Pacific Islander individuals.
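The evaluation setup described above (templated customer-service text, masked names, error rates broken down by demographic group) can be sketched roughly as follows. This is an illustrative toy, not the paper's pipeline: the group labels, name lists, and templates below are invented placeholders, and a fixed name vocabulary stands in for the ML-based name detectors the paper evaluates.

```python
from collections import defaultdict

# Hypothetical name lists per demographic group (illustrative only; the paper
# draws on real name/demographic data, which is not reproduced here).
NAMES_BY_GROUP = {
    "group_a": ["Emily", "Jake"],
    "group_b": ["Lakisha", "DeShawn"],
}

# Customer-service style templates, in the spirit of the paper's setup.
TEMPLATES = [
    "Hello {name}, thanks for contacting support.",
    "Hi {name}, your ticket has been updated.",
]

# Toy "detector": a fixed vocabulary of known names. Real systems use
# NER models (e.g. a RoBERTa-based tagger) rather than a lookup table.
KNOWN_NAMES = {"Emily", "Jake", "Lakisha"}

def mask(text):
    """Replace any token recognized as a name with a [NAME] placeholder."""
    return " ".join(
        "[NAME]" if tok.strip(",.") in KNOWN_NAMES else tok
        for tok in text.split()
    )

def per_group_error_rate():
    """Fraction of instances, per group, where the name leaked through."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, names in NAMES_BY_GROUP.items():
        for name in names:
            for tmpl in TEMPLATES:
                masked = mask(tmpl.format(name=name))
                totals[group] += 1
                if name in masked:  # redaction failed: PII exposed
                    errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}
```

Here the toy detector misses "DeShawn", so `per_group_error_rate()` reports a higher leak rate for `group_b` than `group_a`; the paper measures exactly this kind of per-group disparity, but for real commercial and open-source maskers on realistic name data.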

Citation (APA)

Mansfield, C., Paullada, A., & Howell, K. (2022). Behind the Mask: Demographic bias in name detection for PII masking. In LTEDI 2022 - 2nd Workshop on Language Technology for Equality, Diversity and Inclusion, Proceedings of the Workshop (pp. 76–89). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.ltedi-1.10
