Taking principles seriously: A hybrid approach to value alignment in artificial intelligence


Abstract

An important step in the development of value alignment (VA) systems in artificial intelligence (AI) is understanding how VA can reflect valid ethical principles. We propose that designers of VA systems incorporate ethics by utilizing a hybrid approach in which both ethical reasoning and empirical observation play a role. This, we argue, avoids committing the "naturalistic fallacy," which is an attempt to derive "ought" from "is," and it provides a more adequate form of ethical reasoning when the fallacy is not committed. Using quantified modal logic, we precisely formulate principles derived from deontological ethics and show how they imply particular "test propositions" for any given action plan in an AI rule base. The action plan is ethical only if the test proposition is empirically true, a judgment that is made on the basis of empirical VA. This permits empirical VA to integrate seamlessly with independently justified ethical principles.

Citation (APA)

Kim, T. W., Hooker, J., & Donaldson, T. (2021). Taking principles seriously: A hybrid approach to value alignment in artificial intelligence. Journal of Artificial Intelligence Research, 70, 871–890. https://doi.org/10.1613/JAIR.1.12481
