F. Patrick Hubbard

As they become increasingly mobile, sophisticated robots will transform the way we live. They will have higher levels of connectivity, autonomy, and intelligence, and they will also have the potential to cause serious bodily harm to individuals. The existing legal system provides an efficient, fair means of compensating those injured by robots and correctly balances the need for innovation against the concern for physical safety. The chapter first discusses the need for technological innovation and summarizes current approaches to safety design. Liability law currently attempts to balance the concern for physical safety with the desire for innovation by making sellers liable for injuries caused by a failure to adopt a safer approach where that approach costs less than the injuries it prevents. Sophisticated robots undoubtedly present difficulties for allocating responsibility for injuries on the basis of fault: they may exhibit emergent or unpredictable learned behavior, interconnect with other sophisticated technology and systems, and incorporate hardware and software from multiple suppliers. These issues can nonetheless be addressed through existing liability analysis under current legal doctrines, supported by expert testimony. The author recommends that innovators design machines with product safety analyses in mind, provide warnings, push for both private and governmental standards, and decide on the appropriate mix of product liability insurance and self-insurance for their products. Proposals for alternative systems, such as no-fault insurance schemes or limiting liability through immunity or pre-emption, assume that the current system is problematic and would address it in a way that abandons the concern for balance. In fact, the current liability-based system for product-caused injury is balanced, fair, efficient, and flexible enough to adapt to the increased sophistication of robots.
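The balancing the chapter describes can be read as a simple risk-utility calculus (an illustration offered here in the spirit of the familiar Learned Hand formula, not a formulation taken from the chapter itself): a seller is liable for failing to adopt a safer design when the burden B of that design is less than the expected injuries it would prevent, that is, when B < P × L, where P is the probability of injury and L its magnitude. On this reading, a hypothetical $50 guard that averts an expected $500 of injury per unit sold would be required, and omitting it would expose the seller to liability.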

Diana Marina Cooper

From a legal and ethical perspective, open robots may be more problematic than their closed counterparts. The inability to constrain how the technology is used in downstream applications makes it difficult for open robot manufacturers to minimize unethical use of their technologies. Some form of intervention is required if the industry is to adopt a “sufficiently open” model and fulfill the goal of achieving “a robot in every home.” This chapter makes the case for a licensing approach that deviates from traditional open licenses by imposing certain restrictions on downstream modification and use in order to allocate liability between manufacturers and users. The author explores the obstacles to mainstream adoption of a “sufficiently open” model, including concerns about physical harm, social harm, and privacy implications. Proposed measures to overcome these barriers have included granting selective immunity to manufacturers and distributors of open robots. The author suggests that Ryan Calo’s proposal to grant selective immunity to open robot manufacturers be supplemented with a licensing approach to regulation. Readers are presented with the Ethical Robot License (ERL), a preliminary license draft intended as a starting point for the discussion of ethical licensing of open robots. The chapter outlines the proper scope of the license, its operationalization, and analogous contexts in which ethics have been infused into commercial transactions. This approach aims to reduce the scope and scale of harmful and unethical use of open robots by establishing obligations and restrictions on downstream applications and operating environments. The license allocates liability, requires insurance, and allows for increased remedies.

Curtis E. A. Karnow

The goal of increasing robot autonomy, or “machine IQ,” is to produce robots that can make real-time decisions in unpredictable environments in order to fulfill a set task. These robots, by definition, will take unpredictable or “unforeseeable” actions in the physical world they share with humans in order to fulfill the human-assigned task. Traditional tort theories of negligence and strict liability are insufficient to impose liability on the legal entities that sell or employ truly autonomous robots. The author defines “truly autonomous robots” as robots that embody machine learning. Where an autonomous robot makes an unpredictable move in order to attain a human-specified goal, liability would not attach to the manufacturer if the changes or methods the robot adopted after delivery to the consumer were unforeseeable. Foreseeability is an essential characteristic of the three types of product liability – failure to warn, design defect, and manufacturing defect – and ultra-hazardous activity theory is unlikely to assist us unless we are willing to say that all robotic actions are routinely, foreseeably dangerous. The author discusses how the American Law Institute’s 1997 Restatement (Third) of Torts shifted design defect analysis from a consumer expectation test to a reasonable alternative design (RAD) test. Nonetheless, the focus continues to be on foreseeable risks: a type of predictable harm to a predictable group of potential victims. Two developments may help address this problem: a common-sense approach to robots akin to the “reasonable person” analysis in tort law, and an increased ability to predict autonomous robot behavior as we continue to interact with such robots, such as the development of reasonable expectations (and rights) regarding robot activity.

Ian Kerr and Katie Szilagyi

An autonomous military robot – or “killer robot” – has the potential to be a better, stronger, and faster soldier. Using robots rather than humans as soldiers in military warfare could result in fewer casualties by reducing the need for frontline human soldiers and by making effective use of ethical programming. The authors assert that killer robots are “force multipliers,” with the potential for destructiveness and fatalities increasing dramatically as they develop. As a result, under the framework of international humanitarian law, the use of autonomous lethal robots has the potential to change our own perceptions of “necessity” and “proportionality.” We must proceed carefully before deploying killer robots, according to the authors. The current state of military robotics is explored, showing that the military may soon decide that scenarios requiring a “human in the loop” are obsolete. The authors examine the philosophical underpinnings and implications of current international humanitarian law’s purportedly “technology-neutral” approach. The chapter explains how the introduction of new military technology can reshape norms within military culture and change international humanitarian legal standards. Recognition of the philosophical underpinnings and implications of international humanitarian law is necessary: the “technology-neutral” approach encourages and accommodates the development and use of emerging technologies. Without this recognition, unjustifiable lethal operations may be fallaciously treated as though they were a military necessity. The introduction of lethal autonomous robots can shift battle norms by amplifying the amount of destructive force deemed permissible in carrying out an operation. If we are “asleep at the switch,” we may forget that by permitting certain technology, we also permit it to determine its own use through technologically shaped perceptions of necessity. Given the amplified new forms of destructive, lethal force that killer robots bring, international humanitarian law may not be the best, and in particular not the only, way to regulate autonomous military robots. The authors hope this discussion creates space for alternative conceptions of regulating the military use of lethal autonomous robots.

Lisa A. Shay, Woodrow Hartzog, John Nelson, Dominic Larkin and Gregory Conti

The time has come for a cohesive approach to automated law enforcement. The ubiquity of sensors, advances in computerized analysis and robotics, and the widespread adoption of networked technologies have paved the way for combining sensor systems with law-enforcement algorithms and punishment feedback loops. Whereas law enforcement was once manpower-intensive and moderated by the discretion of the police officer on the beat, automated systems scale efficiently, allow meticulous enforcement of the law, provide rapid dispatch of punishment, and offer financial incentives to law-enforcement agencies, governments, and purveyors of these systems. Unfortunately, laws were not created with such broad attempts at enforcement in mind, and the future portends significant harms to society as many types of violations, particularly minor infractions, become enforceable with unprecedented rigor. This chapter provides a framework for analyzing automated law-enforcement systems that conceptualizes automated law enforcement as the process of automating some or all aspects of surveillance, analysis, and enforcement in an iterative feedback loop. We demonstrate how intended and unintended consequences can result from automating any stage in this process and provide a list of issues that must be considered in any automated law-enforcement scheme. Those deploying such schemes should be extremely cautious to ensure that the necessary calculus has been performed and that adequate safeguards have been incorporated to minimize the potential for public harm while preserving the benefits of automation.
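To make the surveillance, analysis, and enforcement stages concrete, here is a minimal Python sketch of such a loop. It is purely illustrative: the rule (a no-entry zone with restricted hours), the data, and the function names are hypothetical stand-ins, not systems described in the chapter.

# Illustrative sketch of the surveillance -> analysis -> enforcement loop.
# The rule, data, and function names below are hypothetical.
RESTRICTED_HOURS = range(22, 24)   # zone closed from 22:00 to 23:59
FINE = 80

def surveil(event_stream):
    # Surveillance stage: pull the next sensor event, if any.
    return next(event_stream, None)

def analyze(event):
    # Analysis stage: apply the encoded rule to the observation.
    plate, hour = event
    return hour in RESTRICTED_HOURS

def enforce(event):
    # Enforcement stage: dispatch the sanction automatically.
    plate, hour = event
    print(f"{plate}: ${FINE} citation for entering the zone at {hour}:00")

def enforcement_loop(event_stream):
    while True:
        event = surveil(event_stream)
        if event is None:            # sensor feed exhausted
            break
        if analyze(event):
            enforce(event)
        # In a fuller system the citations issued here would also feed back
        # into what is surveilled next, closing the loop the authors describe.

# Hypothetical feed of (license plate, hour of entry) readings
enforcement_loop(iter([("ABC123", 21), ("XYZ789", 23), ("DEF456", 22)]))

Even this toy version shows how choices made at each stage (what is sensed, how the rule is encoded, how the sanction is dispatched) determine the system’s consequences.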

Jason Millar and Ian Kerr

Some day we may come to rely on robotic prediction machines in place of human experts. Google’s search engine, IBM’s Watson, and the Google Driverless Car (GDC) project each give an idea of what that world will look like. Yet actually letting go of the wheel may be a tough sell to humans. Will we really delegate human tasks to expert machine systems, and what will be the outcomes of those choices? This chapter suggests that in the near future we will have to make difficult decisions about whether to relinquish some control to robots. The normative pull of “evidence-based practice” and the development of Watson-like robots will leave us few reasons to remain in control of expert decisions where robots excel. Thus we will have to choose between accepting the relative fallibility of human experts and remaining in total control, or deciding to relinquish some control to robots for the greater good. If we do relinquish some control to robots, there are important questions about the justification for doing so with highly specialized expert tasks and how that would bear on the determination of responsibility, particularly in cases of disagreement. On the other hand, if we choose to remain in control and advocate the status quo, we may deliver less than optimal outcomes relative to what “co-robotics” might achieve. Cases of disagreement between human and robot experts generally favor delegation to robots, but they also provide time for human experts to understand, and make decisions about, the underlying rationale for the disagreement. Watson and the GDC are able to achieve high degrees of something like “expertise” by acting on sets of rules that underdetermine their success. By describing Watson-like robots as “experts,” rather than merely “tools,” we realize a philosophical gain that accounts for both a robot’s unique abilities and its social meaning.

Lisa A. Shay, Woodrow Hartzog, John Nelson and Gregory Conti

Due to recent advances in computerized analysis and robotics, automated law enforcement has become technically feasible. Unfortunately, laws were not created with automated enforcement in mind, and even seemingly simple laws have subtle features that require programmers to make assumptions when encoding them. We demonstrate this ambiguity with an experiment in which a group of 52 programmers was assigned the task of automating traffic speed limit enforcement. A late-model vehicle was equipped with a sensor that collected actual vehicle speed over a one-hour commute. Each programmer, working without collaboration, wrote a program that computed the number of speed limit violations and issued mock tickets. Despite quantitative data for both vehicle speed and the speed limit, the number of tickets issued varied from none to one per sensor sample above the speed limit. Our results highlight the significant deviation in the number and type of citations issued, based on legal interpretations and assumptions made by programmers without legal training. These deviations were mitigated, but not eliminated, in one sub-group that was provided with a legally reviewed software design specification, providing insight into ways the law might be automated in the future. Automation of legal reasoning seems most effective in contexts where legal conclusions are predictable because little room exists for choice in a given model; that is, where they are determinable. Yet this experiment demonstrates that even relatively narrow and straightforward “rules” are problematically indeterminate in practice.
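The spread the authors report is easy to reproduce in miniature. The Python sketch below is illustrative only: the speed trace, the limit, and the function names are hypothetical and are not the data or code used in the experiment. It shows how two defensible readings of what counts as “a violation” produce very different ticket counts from the same sensor samples.

# Hypothetical illustration of how programmer assumptions change ticket counts.
SPEED_LIMIT_MPH = 55

def tickets_per_sample(samples, limit=SPEED_LIMIT_MPH):
    # Strict reading: every sample above the limit is a separate violation.
    return sum(1 for s in samples if s > limit)

def tickets_per_episode(samples, limit=SPEED_LIMIT_MPH):
    # Lenient reading: one ticket per continuous episode of speeding.
    tickets, speeding = 0, False
    for s in samples:
        if s > limit and not speeding:
            tickets, speeding = tickets + 1, True
        elif s <= limit:
            speeding = False
    return tickets

trace = [53, 54, 56, 57, 58, 54, 53, 56, 57, 53]   # hypothetical speeds (mph)
print(tickets_per_sample(trace))    # 5 tickets
print(tickets_per_episode(trace))   # 2 tickets

Other equally plausible choices, such as rounding, tolerance thresholds, or a minimum episode length, would change these counts again, which is precisely the indeterminacy the chapter documents.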

Kristen Thomasen

The combination of human-computer interaction (“HCI”) technology with sensors that monitor human physiological responses offers state agencies improved methods for extracting truthful information from suspects during interrogations. These technologies have recently been implemented in prototypes of automated kiosks, which allow an individual to interact with an avatar interrogator. The HCI system uses a combination of visual, auditory, infrared, and other sensors to monitor a suspect’s eye movements, voice, and various other qualities throughout an interaction. The information is then aggregated and analyzed to determine whether the suspect is being deceptive. This chapter argues that this type of application poses serious risks to individual rights such as privacy and the right to silence. The chapter concludes by suggesting that courts, developers, and state agencies institute limits on what information this emerging technology may collect from the humans who engage with it, and how it may collect it.

Kate Darling

Humans have a tendency to anthropomorphize robots, and we are also experiencing an increase in robots specifically designed to engage with us socially. A “social robot” is a physically embodied, autonomous agent that communicates with humans through social cues, learning adaptively and mimicking human social states. The author asserts that if we perceive social robots as life-like things, our behavior toward them should be regulated. The author draws an analogy to animal abuse regulation, which is justified in part because it deters human behavior that is also harmful in other contexts. The level of human attachment to a robot can be based on the interplay of three factors: physicality, perceived autonomous movement, and social behavior. These factors lead certain robots to elicit emotional reactions from people similar to the way we react to animals or other people. Such robots target our involuntary biological responses and generate a projection of intent and sentiment onto the robots’ behavior. This is particularly strong when the robot exhibits a “caregiver effect.” The author discusses concerns that disseminating social robots undermines the value of authenticity in society, replaces human social interaction, and increases the dangers of manipulation and invasions of privacy. There are, however, some extremely positive social uses of social robots, particularly in the areas of health and education. Preventing robot abuse would protect societal values, prevent traumatization, and prevent desensitization. Social robot abuse protection laws could effectively follow the analogy of animal abuse protection laws. The author defines a social robot as an embodied object with a defined degree of autonomous behavior that is specifically designed to interact with humans on a social level, but notes that “mistreatment” remains to be defined. The author also briefly comments on the property law impacts and discusses when it would be appropriate to start regulating our treatment of social robots.

Neil M. Richards and William D. Smart

Today’s robots are leaving the research lab and coming to the consumer market, yet many existing robots are not designed to interact with humans. Even the Roomba cannot distinguish a human leg from a table leg. While research labs are still the primary home for robots, they can provide us with an exciting glimpse of future robot applications in the real world. This chapter provides an overview of the conceptual issues and possible implications surrounding law, robots, and robotics. First, the authors offer a definition of robots as nonbiological autonomous agents: one that requires agency in the physical world, but only a subjective notion of agency, or “apparent agency.” The authors then explore the capabilities of robots, noting what they do today and projecting what they might be able to do in the future. The authors argue that we should look to the lessons of cyberlaw in developing and examining the metaphors for robots that we use to shape the law. One key lesson is that if we get the metaphors wrong, the outcome could be disastrous. The idea that robots are “just like people” – “the Android Fallacy” – should be rejected outright, according to the authors. Robots are tools, despite the fact that people, including lawmakers, tend to anthropomorphize them and attribute perceived human characteristics to them. Misunderstanding a new technology, in this case through anthropomorphizing analogies for robots, can have real, pernicious effects on legislative design and should be avoided.