Beyond the ML Model: Applying Safety Engineering Frameworks to Text-to-Image Development


Abstract

Identifying potential social and ethical risks in emerging machine learning (ML) models and their applications remains challenging. In this work, we applied two well-established safety engineering frameworks (FMEA, STPA) to a case study involving text-to-image (T2I) models at three stages of the ML product development pipeline: data processing, integration of a T2I model with other models, and use. The results of our analysis demonstrate that these safety frameworks, although neither is explicitly designed to examine social and ethical risks, can uncover failures and hazards that pose such risks. We discovered a broad range of failures and hazards (i.e., functional, social, and ethical) by analyzing interactions (i.e., between different ML models in the product, between the ML product and the user, and between development teams) and processes (i.e., preparation of training data or workflows for using an ML service/product). Our findings underscore the value and importance of looking beyond the ML model itself when examining social and ethical risks, especially when minimal information about the model is available.

Citation (APA)

Rismani, S., Shelby, R., Smart, A., Delos Santos, R., Moon, Aj., & Rostamzadeh, N. (2023). Beyond the ML Model: Applying Safety Engineering Frameworks to Text-to-Image Development. In AIES 2023 - Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (pp. 70–83). Association for Computing Machinery, Inc. https://doi.org/10.1145/3600211.3604685
