Browse by title

You are looking at 1–10 of 31 items, filtered by:

  • Innovation and Technology
  • Terrorism and Security

F. Patrick Hubbard

As they become increasingly mobile, sophisticated robots will transform the way we live. They will have higher levels of connectivity, autonomy, and intelligence. They will also have the potential to cause serious bodily harm to individuals. The existing legal system provides an efficient, fair means of compensating those injured by robots and correctly balances the need for innovation against the concern for physical safety. The chapter first discusses the need for technological innovation and summarizes current approaches to safety design. Liability law currently strikes this balance by making sellers liable for injuries caused by a failure to adopt a safer approach where that approach costs less than the injuries it prevents. Sophisticated robots undoubtedly present difficulties for allocating responsibility for injuries on the basis of fault: they may exhibit emergent or unpredictable learned behavior, interconnect with other sophisticated technologies and systems, and combine hardware and software from multiple suppliers. These issues can be addressed by current legal doctrines through existing liability analysis, supported by expert testimony. The author recommends that innovators design machines with product safety analyses in mind, provide warnings, push for both private and governmental standards, and decide on the appropriate mix of product liability insurance and self-insurance for their products. Proposals for alternative systems, such as no-fault insurance schemes or limiting liability through immunity or pre-emption, assume that the current system is problematic and would address it in ways that abandon the concern for balance. The current liability-based system for product-caused injury is balanced, fair, efficient, and flexible enough to adapt to the increased sophistication of robots.
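The balancing the abstract describes can be read as a simple cost-benefit inequality: liability attaches when the cost of a safer design is less than the expected injury cost it would have prevented. A minimal sketch of that calculus, with an invented function name and invented figures not drawn from the chapter:

    # Sketch of the risk-utility balancing described above; the helper name and
    # figures are illustrative assumptions, not taken from the chapter.

    def liable_for_omitting_safer_design(design_cost, injury_probability, injury_cost):
        """Seller is liable when the safer approach costs less than the
        expected harm it would have prevented (design_cost < p * loss)."""
        return design_cost < injury_probability * injury_cost

    # Example: a $40-per-unit guard prevents a 1-in-1,000 injury costing $100,000.
    # The expected harm avoided is $100 per unit, so omitting the $40 guard creates liability.
    print(liable_for_omitting_safer_design(40, 0.001, 100_000))  # True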

Diana Marina Cooper

Open robots may be more problematic than their closed counterparts from a legal and ethical perspective. Because manufacturers cannot constrain how the technology is used in downstream applications, it is difficult for open robot manufacturers to minimize unethical uses of their technologies. Some form of intervention is required if the industry is to adopt a “sufficiently open” model and fulfill the goal of achieving “a robot in every home.” This chapter makes the case for adopting a licensing approach that deviates from traditional open licenses by imposing certain restrictions on downstream modification and use, in order to allocate liability between manufacturers and users. The author explores the obstacles to mainstream adoption of a “sufficiently open” model, including concerns about physical harm, social harm, and privacy implications. Proposed measures to overcome these barriers have included providing selective immunity to manufacturers and distributors of open robots. The author suggests that Ryan Calo’s proposal to grant selective immunity to open robot manufacturers be supplemented with a licensing approach to regulation. Readers are presented with the Ethical Robot License (ERL), a preliminary license draft that serves as a starting point for the discussion of ethical licensing of open robots. The proper scope of the license, its operationalization, and analogous contexts in which ethics have been infused into commercial transactions are outlined and discussed. This approach aims to reduce the scope and scale of harmful and unethical uses of open robots by establishing obligations and restrictions on downstream applications and operating environments. The license allocates liability, requires insurance, and allows for increased remedies.

Curtis E. A. Karnow

The goal of increasing robot autonomy, or “machine IQ,” is to produce robots that can make real-time decisions in unpredictable environments in order to fulfill a set task. These robots will, by definition, take unpredictable or “unforeseeable” actions in the physical world they share with humans in order to fulfill the human-assigned task. Traditional tort theories of negligence and strict liability are insufficient to impose liability on the legal entities that sell or employ truly autonomous robots. The author defines “truly autonomous robots” as robots that embody machine learning. Where an autonomous robot makes an unpredictable move in order to attain a human-specified goal, liability would not attach to the manufacturer if the changes or methods the robot adopted after delivery to the consumer were unforeseeable. Foreseeability is an essential element of the three types of product liability – failure to warn, design defect, and manufacturing defect – and ultra-hazardous activity theory is unlikely to assist us unless we are willing to say that all robotic actions are routinely and foreseeably dangerous. The author discusses how the American Law Institute’s 1997 Restatement (Third) of Torts shifted design defect analysis from a consumer expectation test to a reasonable alternative design (RAD) test. Nonetheless, the focus remains on foreseeable risks: a type of predictable harm to a predictable group of potential victims. Two developments may help address this problem: a common-sense approach to robots akin to the “reasonable person” analysis in tort law, and an increased ability to predict autonomous robot behavior as we continue to interact with robots, such as the development of reasonable expectations (and rights) regarding robot activity.

Ian Kerr and Katie Szilagyi

An autonomous military robot – or “killer robot” – has the potential to be a better, stronger, and faster soldier. Using robots rather than humans as soldiers in warfare could result in fewer casualties by reducing the need for frontline human soldiers and by making effective use of ethical programming. The authors assert that killer robots are “force multipliers,” whose potential for destructiveness and fatalities increases dramatically as they develop. As a result, under the framework of international humanitarian law, the use of autonomous lethal robots has the potential to change our own perceptions of “necessity” and “proportionality.” The authors argue that we must proceed carefully before deploying killer robots. The current state of military robotics is explored, showing that the military may soon decide that scenarios requiring a “human in the loop” are obsolete. The authors examine the philosophical underpinnings and implications of international humanitarian law’s purportedly “technology-neutral” approach, and the chapter explains how the introduction of new military technology can reshape norms within military culture and change international humanitarian legal standards. Recognizing those underpinnings matters: the “technology-neutral” approach encourages and accommodates the development and use of emerging technologies, and without this recognition, unjustifiable lethal operations may be fallaciously treated as though they were a military necessity. The introduction of lethal autonomous robots can shift battle norms by amplifying the amount of destructive force deemed permissible in carrying out an operation. If we are “asleep at the switch,” we may forget that by permitting certain technology, we also permit it to determine its own use through technologically shaped perceptions of necessity. Given the new and amplified forms of destructive, lethal force that killer robots bring, international humanitarian law may not be the best, and in particular not the only, way to regulate autonomous military robots. The authors hope this discussion creates space for alternative conceptions of regulating the military use of lethal autonomous robots.

Lisa A. Shay, Woodrow Hartzog, John Nelson, Dominic Larkin and Gregory Conti

The time has come for a cohesive approach to automated law enforcement. The ubiquity of sensors, advances in computerized analysis and robotics, and the widespread adoption of networked technologies have paved the way for combining sensor systems with law-enforcement algorithms and punishment feedback loops. Whereas law enforcement in the past was manpower-intensive and moderated by the discretion of the police officer on the beat, automated systems scale efficiently, allow meticulous enforcement of the law, dispatch punishment rapidly, and offer financial incentives to law-enforcement agencies, governments, and purveyors of these systems. Unfortunately, laws were not created with such broad enforcement in mind, and this portends significant harm to society as many types of violations, particularly minor infractions, become enforceable with unprecedented rigor. This chapter provides a framework for analyzing automated law-enforcement systems, conceptualizing automated law enforcement as the process of automating some or all aspects of surveillance, analysis, and enforcement in an iterative feedback loop. We demonstrate how intended and unintended consequences can result from automating any stage of this process and provide a list of issues that must be considered in any automated law-enforcement scheme. Those deploying such schemes should be extremely cautious, ensuring that the necessary calculus has been performed and that adequate safeguards have been incorporated to minimize the potential for public harm while preserving the benefits of automation.
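The surveillance–analysis–enforcement loop the chapter conceptualizes can be pictured roughly as below; the simulated readings, threshold, and structure are illustrative assumptions, not the authors’ design:

    # Rough sketch of an automated enforcement pipeline: sensor readings are
    # checked against an encoded rule and punishments are dispatched with no
    # officer discretion. All values below are invented for illustration.

    readings_mph = [52, 58, 61, 54, 66]   # surveillance: simulated speed samples
    SPEED_LIMIT_MPH = 55                  # the encoded legal rule

    citations = []
    for reading in readings_mph:
        if reading > SPEED_LIMIT_MPH:                                        # analysis
            citations.append(f"{reading} mph in a {SPEED_LIMIT_MPH} mph zone")  # enforcement

    # Every detected infraction, however minor, is cited automatically.
    print(len(citations), "citations issued:", citations)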
Edited by Ryan Calo, A. Michael Froomkin and Ian Kerr

Jason Millar and Ian Kerr

Some day we may come to rely on robotic prediction machines in place of human experts. Google’s search engine, IBM’s Watson, and the Google Driverless Car (GDC) project each give an idea of what that world will look like. Yet actually letting go of the wheel may be a tough sell to humans. Will we really delegate human tasks to expert machine systems, and what will be the outcomes of those choices? This chapter suggests that in the near future we will have to make difficult decisions about whether to relinquish some control to robots. The normative pull of “evidence-based practice” and the development of Watson-like robots will leave us few reasons to remain in control of expert decisions where robots excel. Thus we will have to choose between accepting the relative fallibility of human experts and remaining in total control, or relinquishing some control to robots for the greater good. If we do relinquish some control, important questions arise about the justification for doing so with highly specialized expert tasks and about how that would bear on the determination of responsibility, particularly in cases of disagreement. If, on the other hand, we choose to remain in control and advocate the status quo, we may deliver less than optimal outcomes relative to what “co-robotics” might achieve. Cases of disagreement between human and robot experts generally favor delegation to robots, but should also allow human experts time to understand, and make decisions about, the underlying rationale for the disagreement. Watson and the GDC achieve high degrees of something like “expertise” by acting on sets of rules that underdetermine their success. Describing Watson-like robots as “experts,” rather than merely “tools,” yields a philosophical gain that accounts for both a robot’s unique abilities and its social meaning.

Lisa A. Shay, Woodrow Hartzog, John Nelson and Gregory Conti

Due to recent advances in computerized analysis and robotics, automated law enforcement has become technically feasible. Unfortunately, laws were not created with automated enforcement in mind, and even seemingly simple laws have subtle features that require programmers to make assumptions when encoding them. We demonstrate this ambiguity with an experiment in which a group of 52 programmers was assigned the task of automating traffic speed limit enforcement. A late-model vehicle was equipped with a sensor that collected actual vehicle speed over a one-hour commute. Each programmer, working without collaboration, wrote a program that computed the number of speed limit violations and issued mock tickets. Despite having quantitative data for both vehicle speed and the speed limit, the programmers issued anywhere from no tickets to one ticket per sensor sample above the speed limit. The results highlight significant variation in the number and type of citations issued, driven by the legal interpretations and assumptions made by programmers without legal training. These deviations were mitigated, but not eliminated, in a sub-group provided with a legally reviewed software design specification, offering insight into ways the law might be automated in the future. Automation of legal reasoning seems most effective in contexts where legal conclusions are predictable because little room exists for choice in a given model; that is, where they are determinable. Yet this experiment demonstrates that even relatively narrow and straightforward “rules” are problematically indeterminate in practice.
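The kind of ambiguity the experiment exposes can be illustrated with two equally plausible encodings of the same speed-limit rule applied to one speed trace; the trace and threshold below are invented, not the study’s data:

    # Two plausible readings of "ticket every speed limit violation," applied to
    # the same per-second speed samples. The trace is invented for illustration.

    SPEED_LIMIT_MPH = 55
    samples_mph = [53, 54, 56, 57, 58, 55, 54, 60, 61, 59, 54, 53]

    def tickets_per_sample(trace, limit):
        """Strictest reading: every sample above the limit is a separate violation."""
        return sum(1 for s in trace if s > limit)

    def tickets_per_episode(trace, limit):
        """Lenient reading: one ticket per continuous episode above the limit."""
        tickets, speeding = 0, False
        for s in trace:
            if s > limit and not speeding:
                tickets, speeding = tickets + 1, True
            elif s <= limit:
                speeding = False
        return tickets

    print(tickets_per_sample(samples_mph, SPEED_LIMIT_MPH))   # 6 tickets
    print(tickets_per_episode(samples_mph, SPEED_LIMIT_MPH))  # 2 tickets

Both encodings are faithful to a literal reading of the rule, yet they produce very different citation counts from identical data, mirroring the spread the experiment observed.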