Recently, there has been growing demand to address failures in the fairness of artificial intelligence (AI) systems. Current techniques for improving fairness in AI systems focus on broad changes to the norms, procedures, and algorithms used by companies that implement those systems. However, some organizations may require detailed methods to identify which user groups are disproportionately impacted by failures in specific components of their systems. Failure mode and effects analysis (FMEA) is a popular safety engineering method and is proposed here as a vehicle for conducting “AI fairness impact assessments” in organizations. An extension to FMEA called “FMEA-AI” is proposed as a modification to a tool familiar to engineers and manufacturers, one that can integrate moral sensitivity and ethical considerations into a company’s existing design process. Whereas current impact assessments focus on helping regulators identify an aggregate risk level for an entire AI system, FMEA-AI helps companies identify safety and fairness risk in multiple failure modes of an AI system. It also explicitly identifies user groups and adopts an objective definition of fairness, proportional satisfaction of claims, when calculating the likelihood and severity of fairness-related failures. This proposed method can help industry analysts adapt a widely known safety engineering method to incorporate AI fairness considerations, promote moral sensitivity, and overcome resistance to change.
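To make the abstract's mechanism concrete, the sketch below shows one way an FMEA-style risk record could be extended with a per-group fairness term. It is a minimal illustration under stated assumptions, not the authors' exact method: the class and function names, the 1-10 rating scales, the gap-based scaling of fairness severity, and the sample numbers are all invented for demonstration.

```python
from __future__ import annotations
from dataclasses import dataclass


@dataclass
class FailureMode:
    component: str
    description: str
    severity: int    # 1-10: harm to users if the failure occurs
    occurrence: int  # 1-10: how likely the failure is
    detection: int   # 1-10: how hard it is to catch before harm

    def rpn(self) -> int:
        """Classic FMEA risk priority number (severity x occurrence x detection)."""
        return self.severity * self.occurrence * self.detection


def proportional_satisfaction(claims: dict[str, float],
                              satisfied: dict[str, float]) -> dict[str, float]:
    """Fraction of each user group's claim that the system actually satisfies."""
    return {group: satisfied[group] / claims[group] for group in claims}


def fairness_severity(satisfaction: dict[str, float]) -> float:
    """Illustrative fairness severity: the gap between the best- and
    worst-served groups, rescaled onto a 1-10 FMEA-style severity scale."""
    gap = max(satisfaction.values()) - min(satisfaction.values())
    return 1 + 9 * gap


# Hypothetical failure mode of a face-verification component.
fm = FailureMode(component="face matcher",
                 description="false rejection at identity kiosk",
                 severity=6, occurrence=4, detection=3)

sat = proportional_satisfaction(
    claims={"group_a": 100.0, "group_b": 100.0},   # what each group is owed
    satisfied={"group_a": 96.0, "group_b": 78.0})  # what each group receives

print("classic RPN:", fm.rpn())                                       # 72
print("per-group satisfaction:", sat)                                 # {'group_a': 0.96, 'group_b': 0.78}
print("fairness severity (1-10):", round(fairness_severity(sat), 1))  # 2.6
```

In the paper's framing, such a fairness term would be assessed per failure mode and per user group alongside the usual safety risk rating; the specific weighting and procedure are the paper's contribution and are not reproduced here.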
CITATION STYLE
Li, J., & Chignell, M. (2022). FMEA-AI: AI fairness impact assessment using failure mode and effects analysis. AI and Ethics, 2(4), 837–850. https://doi.org/10.1007/s43681-022-00145-9