Robot Law

Edited by Ryan Calo, A. Michael Froomkin and Ian Kerr

Robot Law brings together exemplary research on robotics law and policy – an area of scholarly inquiry responding to transformative technology. Expert scholars from law, engineering, computer science and philosophy provide original contributions on topics such as liability, warfare, domestic law enforcement, personhood, and other cutting-edge issues in robotics and artificial intelligence. Together the chapters form a field-defining look at an area of law that will only grow in importance.

Chapter 3: The application of traditional tort theory to embodied machine intelligence

Curtis E. A. Karnow

Subjects: innovation and technology, technology and ICT, law (academic), internet and technology law, law and society, legal philosophy, legal theory, public international law, terrorism and security law, politics and public policy, public policy


The goal of increasing robot autonomy, or “machine IQ,” is to produce robots that can make real-time decisions in unpredictable environments in order to fulfill a set task. These robots, by definition, will take unpredictable or “unforeseeable” actions in the physical world they share with humans in order to accomplish the human-assigned task. Traditional tort theories of negligence and strict liability are insufficient to impose liability on the legal entities that sell or employ truly autonomous robots. The author defines “truly autonomous robots” as robots that embody machine learning. Where an autonomous robot makes an unpredictable move in order to attain a human-specified goal, liability would not attach to the manufacturer if the robot’s post-delivery changes in behavior or methods were unforeseeable. Foreseeability is an essential element of the three types of product liability claims – failure to warn, design defect, and manufacturing defect – and ultra-hazardous activity theory is unlikely to assist unless we are willing to say that all robotic actions are routinely, foreseeably dangerous. The author discusses how the American Law Institute’s 1997 Restatement (Third) of Torts shifted design defect analysis from a consumer-expectation test to a reasonable alternative design (RAD) test. Nonetheless, the focus remains on foreseeable risks: a type of predictable harm to a predictable group of potential victims. Two developments may help address this problem: a common-sense standard for robots akin to the “reasonable person” analysis in tort law, and an increased ability to predict autonomous robot behavior as humans continue to interact with robots, such as the development of reasonable expectations (and rights) regarding robot activity.
