Safety constraints and ethical principles in collective decision making systems

Abstract

The future will see autonomous machines acting in the same environment as humans, in areas as diverse as driving, assistive technology, and health care. Think of self-driving cars, companion robots, and medical diagnosis support systems. We also believe that humans and machines will often need to work together and agree on common decisions. Thus hybrid collective decision-making systems will be in great demand. In this scenario, both machines and collective decision-making systems should follow some form of moral values and ethical principles (appropriate to where they will act, but always aligned to humans'), as well as safety constraints. In fact, humans will more readily accept and trust machines that behave as ethically as other humans in the same environment. Also, these principles would make it easier for machines to determine their actions and to explain their behavior in terms understandable by humans. Moreover, machines and humans will often need to make decisions together, either by consensus or by reaching a compromise. This would be facilitated by shared moral values and ethical principles.

Citation (APA)

Rossi, F. (2015). Safety constraints and ethical principles in collective decision making systems. In Lecture Notes in Computer Science (Vol. 9324, pp. 3–15). Springer. https://doi.org/10.1007/978-3-319-24489-1_1
