Towards Fair Supervised Dataset Distillation for Text Classification


Abstract

With the growing prevalence of large-scale language models, their energy footprint and their potential to learn and amplify historical biases are two pressing challenges. Dataset distillation (DD) reduces the size of a dataset by learning a small number of synthetic samples that encode the information in the original dataset, thereby lowering the cost of model training; however, its impact on fairness has not been studied. We investigate how DD affects group bias in the context of text classification, with experiments over two datasets, and conclude that vanilla DD preserves the bias of the original dataset. We then show how existing debiasing methods can be combined with DD to produce models that are both fair and accurate, at reduced training cost.
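The abstract describes dataset distillation only at a high level, so the sketch below illustrates the general bi-level formulation: learn a handful of synthetic samples such that a model trained on them fits the real data. This is a minimal, hedged illustration and not the authors' implementation; the toy embeddings, the single inner gradient step, and all names and hyperparameters (`syn_x`, `inner_lr`, etc.) are assumptions chosen for brevity.

```python
# Minimal dataset-distillation sketch for text classification, assuming
# pre-computed sentence embeddings (distillation operates in embedding space
# because discrete tokens are not differentiable). Illustrative only.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical "real" data: N sentence embeddings of dimension D, binary labels.
N, D, n_classes = 1000, 64, 2
real_x = torch.randn(N, D)
real_y = (real_x[:, 0] > 0).long()  # toy labelling rule

# Synthetic set: a few learnable samples per class (the distilled dataset).
n_syn_per_class = 5
syn_x = torch.randn(n_classes * n_syn_per_class, D, requires_grad=True)
syn_y = torch.arange(n_classes).repeat_interleave(n_syn_per_class)

def new_model():
    # Fresh linear classifier each outer step; DD optimizes the data, not one model.
    return torch.nn.Linear(D, n_classes)

opt_syn = torch.optim.Adam([syn_x], lr=0.01)
inner_lr = 0.1

for step in range(500):
    model = new_model()
    w, b = model.weight, model.bias
    # Inner step: one differentiable SGD update of the model on the synthetic data.
    inner_loss = F.cross_entropy(F.linear(syn_x, w, b), syn_y)
    grad_w, grad_b = torch.autograd.grad(inner_loss, (w, b), create_graph=True)
    w2, b2 = w - inner_lr * grad_w, b - inner_lr * grad_b
    # Outer step: the updated model should fit the real data; backprop into syn_x.
    outer_loss = F.cross_entropy(F.linear(real_x, w2, b2), real_y)
    opt_syn.zero_grad()
    outer_loss.backward()
    opt_syn.step()

# A classifier trained only on syn_x / syn_y now approximates one trained on
# the full dataset, at a fraction of the training cost.
```

The paper's fairness question concerns what such a synthetic set inherits from the original data (e.g. group bias), and how a debiasing objective can be folded into the distillation process; the sketch above shows only the vanilla distillation loop.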

Citation (APA)

Han, X., Shen, A., Li, Y., Frermann, L., Baldwin, T., & Cohn, T. (2022). Towards Fair Supervised Dataset Distillation for Text Classification. In SustaiNLP 2022 - 3rd Workshop on Simple and Efficient Natural Language Processing, Proceedings of the Workshop (pp. 65–72). Association for Computational Linguistics (ACL).
