Quality-Diversity Generative Sampling for Learning with Synthetic Data

Abstract

Generative models can serve as surrogates for some real data sources by creating synthetic training datasets, but in doing so they may transfer biases to downstream tasks. We focus on protecting quality and diversity when generating synthetic training datasets. We propose quality-diversity generative sampling (QDGS), a framework for sampling data uniformly across a user-defined measure space, despite the data coming from a biased generator. QDGS is a model-agnostic framework that uses prompt guidance to optimize a quality objective across measures of diversity for synthetically generated data, without fine-tuning the generative model. Using balanced synthetic datasets generated by QDGS, we first debias classifiers trained on color-biased shape datasets as a proof-of-concept. By applying QDGS to facial data synthesis, we prompt for desired semantic concepts, such as skin tone and age, to create an intersectional dataset with a combined blend of visual features. Leveraging this balanced data for training classifiers improves fairness while maintaining accuracy on facial recognition benchmarks. Code available at: https://github.com/Cylumn/qd-generative-sampling.
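
To make the sampling idea concrete, the sketch below shows a minimal, hypothetical quality-diversity loop over a generator's latent space. It uses a simple MAP-Elites-style archive with one elite per cell of a discretized 2-D measure space; the `generate` and `prompt_scores` functions are stand-in stubs (QDGS itself uses a pretrained generative model and language-based prompt guidance, and its actual QD algorithm may differ). This is an illustrative simplification, not the authors' implementation; see the repository linked above for that.

```python
# Hypothetical, simplified sketch of quality-diversity generative sampling.
# Assumptions (not from the paper): a 2-D measure space, placeholder quality
# and measure functions, and a basic MAP-Elites archive.
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 64
GRID = (10, 10)  # discretized 2-D measure space (e.g. skin tone x age)

def generate(z):
    # Stand-in for a pretrained generator G(z); the real pipeline would decode
    # z into an image without fine-tuning the generator.
    return z

def prompt_scores(sample):
    # Stand-in for prompt-guided scoring (e.g. similarity to text prompts).
    # Returns a scalar quality and measure coordinates in [0, 1]^2.
    quality = -float(np.linalg.norm(sample))
    measures = 1.0 / (1.0 + np.exp(-sample[:2]))
    return quality, measures

def cell_index(measures):
    # Map measure coordinates to a cell of the archive grid.
    idx = (measures * np.array(GRID)).astype(int)
    return tuple(np.minimum(idx, np.array(GRID) - 1))

archive = {}  # cell -> (quality, latent); at most one elite per cell

for _ in range(5000):
    if archive and rng.random() < 0.9:
        # Mutate a random existing elite (exploitation).
        _, parent = archive[list(archive)[rng.integers(len(archive))]]
        z = parent + 0.1 * rng.standard_normal(LATENT_DIM)
    else:
        # Sample a fresh latent (exploration).
        z = rng.standard_normal(LATENT_DIM)
    quality, measures = prompt_scores(generate(z))
    cell = cell_index(measures)
    if cell not in archive or quality > archive[cell][0]:
        archive[cell] = (quality, z)

# Balanced synthetic dataset: decode one elite per occupied measure-space cell.
balanced_dataset = [generate(z) for _, z in archive.values()]
print(f"Occupied cells: {len(archive)} / {GRID[0] * GRID[1]}")
```

The property this illustrates is that the archive keeps at most one high-quality sample per measure-space cell, so the resulting dataset is spread approximately uniformly across the user-defined measures even when the generator's unconditional samples are biased toward some regions.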

Citation (APA)

Chang, A., Fontaine, M. C., Booth, S., Matarić, M. J., & Nikolaidis, S. (2024). Quality-Diversity Generative Sampling for Learning with Synthetic Data. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, pp. 19805–19812). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v38i18.29955
