Robot Law
Edited by Ryan Calo, A. Michael Froomkin and Ian Kerr

Robot Law brings together exemplary research on robotics law and policy – an area of scholarly inquiry responding to transformative technology. Expert scholars from law, engineering, computer science and philosophy provide original contributions on topics such as liability, warfare, domestic law enforcement, personhood, and other cutting-edge issues in robotics and artificial intelligence. Together the chapters form a field-defining look at an area of law that will only grow in importance.

Chapter 13: Asleep at the switch? How killer robots become a force multiplier of military necessity

Ian Kerr and Katie Szilagyi


An autonomous military robot – or "killer robot" – has the potential to be a better, stronger, and faster soldier. Using robots rather than humans in warfare could result in fewer casualties by reducing the need for frontline human soldiers and through effective ethical programming. Yet the authors argue that killer robots are "force multipliers": their potential for destructiveness and fatalities increases dramatically as the technology develops. Under the framework of international humanitarian law, the use of lethal autonomous robots can therefore reshape our perceptions of "necessity" and "proportionality," and the authors urge that we proceed carefully before deploying killer robots. The chapter surveys the current state of military robotics, showing that the military may soon conclude that scenarios requiring a "human in the loop" are obsolete. The authors then examine the philosophical underpinnings and implications of international humanitarian law's purportedly "technology-neutral" approach, explaining how the introduction of new military technology can reshape norms within military culture and alter international humanitarian legal standards. Recognizing these underpinnings matters: the "technology-neutral" approach encourages and accommodates the development and use of emerging technologies. Without this recognition, unjustifiable lethal operations may be fallaciously treated as military necessities. Introducing lethal autonomous robots can shift battle norms by amplifying the amount of destructive force deemed permissible in carrying out an operation. If we are "asleep at the switch," we may forget that by permitting certain technology, we also permit it to determine its own use through technologically shaped perceptions of necessity.
Given the new and amplified forms of destructive, lethal force that killer robots bring, international humanitarian law may not be the best – and certainly not the only – way to regulate autonomous military robots. The authors hope this discussion creates space for alternative conceptions of regulating the military use of lethal autonomous robots.
