Mark Chinen


Although we may be years away from developing machines with intelligence that matches our own, current developments in artificial intelligence, and in the machines and systems driven by it, make it likely that autonomous technologies (machines and systems that can perform relatively sophisticated tasks without human supervision) will be present in almost every domain of human life and in our daily experience. These technologies will pose a challenge to our existing forms of legal responsibility. We use the law to meet societal goals and to express the values a society holds, and like all cultural tools, law has various uses, can be used in unexpected ways, and can even be applied to itself to alter it. The issue is whether law will be able to address large and complex systems of humans, business entities, and machines and systems that work together, in particular when harms occur.


Over the past several years, legal scholars have engaged in detailed applications of current doctrine to the problems expected to arise when autonomous technology is subjected to the law of torts (negligence, products liability, and strict liability), privacy, contract, and the law of war. How well current law is poised to address harms caused by autonomous machines and systems falls along a range that depends in part on the sophistication of the machines, the degree of human control over them, and the kind of harm involved. As autonomous technologies grow more sophisticated, however, a point is reached, under our common understanding of explanation and blame, where it becomes at least as plausible to say that the machines and systems themselves are partly liable for the harms they cause as it is to say that the human beings associated with them are responsible. There are of course other ways to address harms, primarily through forms of governance ranging from private ordering to direct legislation and regulation, but each has its strengths and weaknesses.


As autonomous technologies become involved in harms, there could be increasing pressure to hold a larger number of people responsible for those harms. To the extent that existing law fails to adequately address harms caused by autonomous technologies, one reason is that the law tends to avoid responsibility by association. The legal doctrines used to frame and address harms are informed almost exclusively by the paradigm of the responsible human individual, so that even when the law purports to encompass groups of people, including business entities, the analysis tends to be framed in individualistic terms. This dovetails with generally accepted understandings of moral responsibility, which can be traced from Aristotle to the present day, and whose influence is felt in legal concepts such as culpability, foreseeability, and causation. The stress on individual responsibility and the strong tie to ethics raise questions about how well legal responsibility based on personal culpability will fare when applied to technology with high degrees of autonomy.


Within ethics, the literature on the moral responsibility of groups is most relevant to the problems of associative responsibility. That literature provides some guidance on whether it is coherent to ascribe responsibility to groups; if so, which types of groups might be subject to responsibility; how the responsibility of a group can be distributed to its members; and the “pragmatics” of ascribing responsibility to groups. At the same time, in part because of differences between law and ethics, and because of the nature of the problem, the literature does not provide completely satisfactory answers. Concepts from complexity theory suggest that the problem might be intractable: the “behaviors” of groups might be nonlinear, emergent phenomena that arise from the complex interactions of individuals and subgroups. If so, it is hard, if not impossible, to trace causal lines between the actions of individuals and what happens at the group level, and any responsibility we attribute to an individual for what happens at that level is necessarily a fiction. This calls into question our reliance on the individualistic conception of moral and legal responsibility. Perhaps the concept can be modified or reconceived to better account for responsibility in general and to better address harms caused by autonomous machines and systems in particular.


One strategy for addressing the issues raised by the legal and moral emphasis on individual responsibility is to refine or alter the concept of responsibility itself. Such attempts come from a number of perspectives. One is to rely more on the concepts that underlie strict liability. Another is to disaggregate responsibility into its components: for example, identifying an agent as contributing to a particular harm on the one hand, and having the agent suffer consequences because of that contribution on the other. This strategy is amenable to another approach, one that centers on the victim of harm rather than the perpetrator. A victim-focused approach provides a natural segue to systems such as commercial and social insurance. It is almost certain that insurance will be used to compensate injured parties and will play an important role in responding to harm caused by autonomous machines. At the same time, insurance has its limitations: commercial insurance faces the inherent problems of moral hazard and adverse selection, and insurance does not perform the more punitive or retributive functions of holding someone responsible for a harm. Similarly, attempts to emphasize other aspects of responsibility and to deemphasize its more punitive aspects will succeed only to the extent that we are willing to forgo blame and punishment when harms occur.


Another way to alter responsibility is to extend agency beyond the human individual. A growing awareness of how permeable the concept of responsibility is, in part because technology itself has the potential to affect the way we understand ourselves and our own agency, could make legal doctrines based on associational responsibility more acceptable, or make us more open to viewing ourselves as working in tandem with artificial agents. However, even if such modifications expand the range of ethical and legal subjects capable of bearing responsibility, for the foreseeable future it will be human beings who bear the brunt of that responsibility, not the autonomous machines and systems with which they are associated. The issues discussed regarding group responsibility again make themselves felt, and the end result is a meaningful possibility of gaps in responsibility.


The impasses discussed at the conclusions of Chapters 5 and 6 (as well as the simple desire to avoid liability) in part motivate the second strategy for closing possible gaps between machines and the harms they cause: reducing harm by designing autonomous machines and systems that “obey” the law. At this point, of course, autonomous technologies are not cognizant of the law and do not subjectively “appreciate” or “value” it; all we can do is program machines and systems to operate in ways that conform to the law. That challenge is daunting because of the nature of law itself, but researchers are trying, and their work raises the question of who is competent to interpret the law. In this regard, work on using artificial intelligence in the practice of law sets an outer bound on how sophisticated machines and systems might become in taking the law into account in their behavior, although it is an open question whether we would want artificial technologies to have that level of legal sophistication. In any event, if designers succeed in developing machines and systems that comply with the law, we might find that those technologies set standards of care that humans cannot meet without the help of those very technologies.


People might find it in their interest to develop autonomous technologies that conform to expectations of appropriate behavior rather than to the law itself, either because technical limitations in natural-language processing and the features of the law make it hard, if not impossible, to design machines and systems that follow the law, or because we might not want such machines to be the equivalent of lawyers or judges. If we cannot be confident in the strategy of designing law-abiding technology, we might try developing moral machines: machines that engage in prosocial behaviors and that are susceptible to the consequences of legal responsibility, thus preserving, albeit in a different form, the paradigmatic model of individual responsibility. This strategy raises a number of technical and policy issues, such as whether it is possible to design technologies that “think” ethically, whose values will be chosen, how moral dilemmas such as the Trolley Problem will be resolved, and how to ensure that any “norms” machines and systems derive align with our own. The attempt to design moral machines and systems also raises the question whether such technologies can themselves be morally responsible for their actions. Many argue that at this point in their development artificial agents lack the capacity to bear responsibility, but others are exploring how they might be designed to be amenable to moral judgment, including forms of punishment.


There is an unresolved debate as to whether autonomous machines and systems should be given legal status. The argument in favor is pragmatic: legal status would ensure that an artificial agent’s actions have legal effect and could serve as a form of protection for parties that deal with the agent. Others, however, are concerned that legal obligations and legal status entail legal rights, and take the view that such rights should not be given to autonomous technologies, even the most sophisticated ones. The question whether artificial agents should be given legal rights moves easily to the question whether they deserve moral consideration, or patiency. The debate centers on issues such as whether there are meaningful differences between humans and autonomous machines, the possibility (or impossibility) of machine consciousness, and what effect our treatment of artificial agents will have on the way we treat each other. The debate is unlikely to be resolved, in part because these issues involve unanswered, perhaps unanswerable, questions about ourselves and how we should treat one another, and in part because we will simply need more experience with autonomous technologies, particularly the most sophisticated of them, before we can begin to fully understand the scope of the problem and what appropriate responses might be. Even while the debate remains open, however, and despite the concerns some commentators have raised, the human propensity to anthropomorphize machines and the efforts of some developers to encourage that tendency will likely tip the balance in favor of granting at least the most sophisticated technologies legal rights and moral consideration.