The growing use of data-driven decision systems based on Artificial Intelligence (AI) by governments, companies, and social organizations has drawn increasing attention to the challenges they pose to society. Over the last few years, news reports about discrimination on social media, privacy violations, and other incidents have highlighted their vulnerabilities. Despite all the research around these issues, there is no consensus on the definition of the concepts inherent to the risks and/or vulnerabilities of data-driven decision systems. Categorizing the dangers and vulnerabilities of such systems will facilitate ethics by design, ethics in design, and ethics for designers, contributing to responsible AI. The main goal of this work is to understand which types of AI risks/vulnerabilities are Ethical and/or Technological, and the differences between human and machine classification. We analyze two tasks: (i) the classification of risks/vulnerabilities by humans; and (ii) the classification of risks/vulnerabilities by machines. For the human classification we applied a survey, and for the machine classification we used the BERT algorithm. The results show that, even with different levels of detail, the two classifications agree in most cases.
Teixeira, S., Veloso, B., Rodrigues, J. C., & Gama, J. (2023). Ethical and Technological AI Risks Classification: A Human Vs Machine Approach. In Communications in Computer and Information Science (Vol. 1752 CCIS, pp. 150–166). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-23618-1_10