Abstract
As AI gradually assumes the role of decision-maker in businesses and organizations, its influence increasingly shapes the outcomes and interests of human end-users. As a result, scholars and practitioners alike have grown concerned about the ethical implications of decisions in which AI is involved. In approaching the issue of AI ethics, it is becoming increasingly clear that society and the business world, influenced by the big technology companies, are accepting the narrative that AI has its own ethical compass; in other words, that AI can itself decide to do good or bad. We argue that this is not the case. We discuss and demonstrate that AI in itself has no ethics, and that good or bad decisions by algorithms result from human choices made at an earlier stage. For this reason, we argue that even as technology becomes faster and more sophisticated, a simultaneous need exists to train humans better in shaping their ethical compass and awareness.
Citation
De Cremer, D., & Kasparov, G. (2022). The ethical AI—paradox: why better technology needs more and not less human responsibility. AI and Ethics, 2(1), 1–4. https://doi.org/10.1007/s43681-021-00075-y