How do Hugging Face Models Document Datasets, Bias, and Licenses? An Empirical Study


Abstract

Pre-trained Machine Learning (ML) models help to create ML-intensive systems without having to spend conspicuous resources on training a new model from the ground up. However, the lack of transparency for such models could lead to undesired consequences in terms of bias, fairness, trustworthiness of the underlying data, and potentially even legal implications. Taking as a case study the transformer models hosted by Hugging Face, a popular hub for pre-trained ML models, this paper empirically investigates the transparency of pre-trained transformer models. We look at the extent to which model descriptions (i) specify the datasets being used for their pre-training, (ii) discuss their possible training bias, (iii) declare their license, and whether projects using such models take these licenses into account. Results indicate that pre-trained models still offer limited disclosure of their training datasets, possible biases, and adopted licenses. Also, we found several cases of possible licensing violations by client projects. Our findings motivate further research to improve the transparency of ML models, which may result in the definition, generation, and adoption of Artificial Intelligence Bills of Materials.

Citation (APA)
Pepe, F., Nardone, V., Mastropaolo, A., Canfora, G., Bavota, G., & Penta, M. D. (2024). How do Hugging Face Models Document Datasets, Bias, and Licenses? An Empirical Study. In IEEE International Conference on Program Comprehension (pp. 370–381). IEEE Computer Society. https://doi.org/10.1145/3643916.3644412
