Law and Autonomous Machines


The Co-evolution of Legal Responsibility and Technology

Mark Chinen

This book sets out a possible trajectory for the co-development of legal responsibility on the one hand and artificial intelligence and the machines and systems driven by it on the other. As autonomous technologies become more sophisticated it will be harder to attribute harms caused by them to the humans who design or work with them. This will put pressure on legal responsibility and autonomous technologies to co-evolve. Mark Chinen illustrates how these factors strengthen incentives to develop even more advanced systems, which in turn strengthens nascent calls to grant legal and moral status to autonomous machines. This book is a valuable resource for scholars and practitioners of legal doctrine, ethics, and autonomous technologies.

Chapter 8: Moral machines and systems

Mark Chinen

Abstract

People might find it in their interest to develop autonomous technologies that conform to expectations of appropriate behavior, either because technical limitations in natural-language processing and the features of the law will make it hard, if not impossible, to design machines and systems that follow the law, or because we might not want them to be the equivalent of lawyers or judges. If we cannot be confident in the strategy of designing law-abiding technology, we might try developing moral machines: machines that will engage in prosocial behaviors and be susceptible to the consequences of legal responsibility, thus preserving, albeit in a different form, the paradigmatic model of individual responsibility. This strategy raises a number of technical and policy issues, such as whether it is possible to design technologies that “think” ethically, whose values will be chosen, who will solve moral dilemmas such as the Trolley Problem, and how to ensure that any “norms” machines and systems might derive align with our own. The attempt to design moral machines and systems also raises the question whether such technologies themselves can be morally responsible for their actions. Many argue that at this point in their development, artificial agents lack the capacity to bear responsibility, but others are exploring how they can be designed to be amenable to moral judgment, including forms of punishment.
