The AI landscape demands that a broad set of legal, ethical, and societal considerations be accounted for in order to develop ethical AI (eAI) solutions that sustain human values and rights. Currently, a variety of guidelines and a handful of niche tools exist to address individual challenges. However, it is also well established that many organizations face practical challenges in navigating these considerations from a risk management perspective within AI governance. Therefore, new methodologies are needed that provide a well-vetted, real-world applicable structure and path through the checks and balances required to ethically assess and guide the development of AI. In this paper, we show that a multidisciplinary research approach, spanning cross-sectional viewpoints, provides the foundation for a pragmatic definition of the ethical and societal risks faced by organizations using AI. Equally important are our findings on cross-structural governance for implementing eAI successfully. Based on evidence acquired from our multidisciplinary research investigation, we propose a novel data-driven risk assessment methodology, entitled DRESS-eAI. In addition, through an evaluation of our methodological implementation, we demonstrate its state-of-the-art relevance as a tool for sustaining human values in the data-driven AI era.
Felländer, A., Rebane, J., Larsson, S., Wiggberg, M., & Heintz, F. (2022). Achieving a Data-Driven Risk Assessment Methodology for Ethical AI. Digital Society, 1(2). https://doi.org/10.1007/s44206-022-00016-0