Safety Intelligence and Legal Machine Language: Do We Need the Three Laws of Robotics?

  • Weng, Y.-H.
  • Chen, C.-H.
  • Su, C.-T.

Abstract

In this chapter we describe a legal framework for Next Generation Robots (NGRs) that places safety at its center. The framework is offered in response to the current lack of clarity in robot safety guidelines, despite the development and impending release of tens of thousands of robots into workplaces and homes around the world. We also propose a safety intelligence (SI) concept that addresses the open-texture risk posed by robots with a relatively high level of autonomy in their interactions with humans. Although Isaac Asimov’s Three Laws of Robotics are frequently held up as a suitable foundation for an artificial moral agency that ensures robot safety, we explain why we are skeptical that a model based on those laws is sufficient for that purpose. In its place we recommend an alternative legal machine language (LML) model that uses non-verbal information from robot sensors and actuators to protect both humans and robots. To implement an LML model, roboticists must design a biomorphic nerve reflex system, and legal scholars must define safety content for robots that have limited “self-awareness.”
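
The chapter itself does not include code, but the reflex idea can be made concrete. Below is a minimal sketch, in Python, of a reflex-style safety layer that reacts to raw sensor values and halts motion before any higher-level reasoning runs. The thresholds, the ReflexLayer class, and the Actuators interface are all hypothetical illustrations, not the authors' implementation; real safety content would be defined by the legal standards the chapter calls for.

```python
# Hypothetical safety thresholds; in the LML model, values like these
# would come from legally defined safety content for a robot class.
MIN_HUMAN_DISTANCE_M = 0.5   # closest permitted approach to a human
MAX_CONTACT_FORCE_N = 25.0   # contact force that triggers a reflex stop


class Actuators:
    """Hypothetical actuator interface with an emergency-stop command."""

    def emergency_stop(self) -> None:
        print("reflex: motion halted")


class ReflexLayer:
    """A reflex loop that acts on raw, non-verbal sensor readings,
    bypassing the deliberative controller, by analogy with the
    biomorphic nerve reflex system the authors propose."""

    def __init__(self, actuators: Actuators) -> None:
        self.actuators = actuators

    def check(self, distance_m: float, force_n: float) -> bool:
        """Return True if a reflex was triggered and motion was halted."""
        if distance_m < MIN_HUMAN_DISTANCE_M or force_n > MAX_CONTACT_FORCE_N:
            # Act immediately; no symbolic or "moral" reasoning is consulted.
            self.actuators.emergency_stop()
            return True
        return False


if __name__ == "__main__":
    reflex = ReflexLayer(Actuators())
    # Simulated sensor frame: a human 0.3 m away, light contact force.
    print("reflex triggered:", reflex.check(distance_m=0.3, force_n=5.0))
```

The design point of such a layer is that safety decisions are grounded in sensor and actuator signals rather than in natural-language rules like Asimov's, which is the contrast the LML proposal draws.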

Citation (APA)

Weng, Y.-H., Chen, C.-H., & Su, C.-T. (2008). Safety Intelligence and Legal Machine Language: Do We Need the Three Laws of Robotics? In Service Robot Applications. InTech. https://doi.org/10.5772/6057
