Machine medical ethics: When a human is delusive but the machine has its wits about him

Abstract

When androids take care of delusive patients, ethico-epistemic concerns crop up about an agency’s good intent and why we would follow its advice. Robots are not human but may deliver correct medical information, whereas Alzheimer’s patients are human but may be mistaken. If humanness is not the question, then do we base our trust on truth? Truth is what can be logically verified given certain principles, which you have to adhere to in the first place; in other words, the argument comes full circle. Does truth come from empirical validation, then? That is a hard one too, because we access the world through our biased sense perceptions and flawed measurement tools: we see what we think we see. Probably, the attribution of ethical qualities comes from pragmatics: if an agency affords delivering the goods, it is a “good” agency. If that happens regularly and in a predictable manner, the agency becomes trustworthy. Computers can be made more predictable than Alzheimer’s patients and, in that sense, may be considered morally “better” than delusive humans. That is, if we ignore the existence of graded liabilities. That is why I developed a responsibility self-test that can be used to navigate the moral minefield of ethical positions that evolve from differently weighing or prioritizing the principles of autonomy, non-maleficence, beneficence, and justice.

Citation (APA)

Hoorn, J. F. (2015). Machine medical ethics: When a human is delusive but the machine has its wits about him. Intelligent Systems, Control and Automation: Science and Engineering, 74, 233–254. https://doi.org/10.1007/978-3-319-08108-3_15
