Browse by title


Sinziana M. Gutiu

Technology has the power to redefine how women are stereotypically depicted in society. Unfortunately, sex robots – or “sexbots” – replicate existing gender imbalances, as the technology circumvents consent and emulates female sexual slavery. This chapter explains how sex robots can affect existing gender inequalities and the understanding of consent in sexual interactions between humans. Sexbots, some of which have already entered the market with varying degrees of functionality and human-like responsiveness, all have one thing in common – consent is irrelevant. Sexbots cannot decline, criticize, or become dissatisfied with the user; instead, female sexual slavery is eroticized. The author describes the technology used to create sexbots, their function, and society’s reaction to them, finding misogynistic beliefs in the design of, use of, and reactions to female robots. Sexbots diminish the social and legal importance of consent by commodifying women in the context of a unidirectional sexual relationship. Female robots – or “gynoids” – are often assigned stereotypical female traits, reproducing unrealistic ideas about women’s physical traits and disposition. These inequalities also risk being entrenched by the technology as gynoids become racialized and sexualized. The misogynistic reactions to the female robots that exist today are a genuine concern and may further entrench such stereotypes about human women. The chapter discusses the documented harms of extreme pornography, the expected harms of sexbots, and the concepts of harm under the Canadian and U.S. legal systems. The author concludes that future legal solutions must consider sexbots’ impact on gender inequality.

Edited by Ryan Calo, A. Michael Froomkin and Ian Kerr

Robot Law brings together exemplary research on robotics law and policy – an area of scholarly inquiry responding to transformative technology. Expert scholars from law, engineering, computer science and philosophy provide original contributions on topics such as liability, warfare, domestic law enforcement, personhood, and other cutting-edge issues in robotics and artificial intelligence. Together the chapters form a field-defining look at an area of law that will only grow in importance.

AJung Moon, Ergun Calisgan, Camilla Bassani, Fausto Ferreira, Fiorella Operto and Gianmarco Veruggio

The increased development and deployment of robotics has brought with it growing concern among designers and the public about ethical, legal and societal implications (ELSI). There is a need to break down current boundaries in roboethics discussions and to broaden stakeholder dialogue on these issues. The “Open Roboethics initiative” (ORi) is an open-source, Internet-based resource for driving discussion of roboethics, policy, and design. The authors envision ORi growing into a dynamic online platform where various stakeholders can connect and engage. This synergy between bottom-up roboethics discussion and the open sharing of robot designs is proposed as a way to accelerate policy and design changes in robotics. Bottom-up approaches are among the most inclusive and useful in addressing applied ethics issues as social norms evolve. They aim to understand the opinions and perceived values of the public or stakeholder groups, while being managed and supported by an “experts committee.” Open source models encourage developers to share designs or source code and to distribute their technical contributions to the worldwide community free of charge. Established open source initiatives are discussed, showing examples of success and providing lessons on the practical issues of running an open, online community for advancing roboethics. To demonstrate how ORi works, the authors present a case study involving a humanoid robot, PR2, that needs to ride an elevator to fulfill a delivery task, putting it in potential conflict with elevator patrons. In this case, ORi uses an online discussion space to crowd-source participants’ perceptions of cultural norms, expectations, social conventions and ELSI issues. The module uses Q-learning, a machine learning technique, to select an appropriate behavior in each situation, given pilot online survey responses. This case study demonstrates how the ORi concept can be particularly effective and illustrates some challenges that may arise in its implementation. The authors believe ORi will serve as a catalyst for discussion within and across nations and organizations regarding robot technology design and policy changes.
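As a rough illustration of the behavior-selection step described above, the following minimal Python sketch shows tabular Q-learning choosing an elevator behavior from mock, survey-derived rewards. The states, actions, reward values, and learning parameters are all invented for illustration; the chapter’s actual module and its pilot survey data are not reproduced here.

```python
# A minimal tabular Q-learning sketch (hypothetical states, actions, and
# rewards; the chapter's real module and survey data are not reproduced).
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

# Hypothetical encoding: state = (elevator occupancy, delivery urgency).
STATES = [("crowded", "high"), ("crowded", "low"),
          ("empty", "high"), ("empty", "low")]
ACTIONS = ["enter", "wait", "ask_permission"]

def survey_reward(state, action):
    """Mock reward standing in for aggregated participant responses."""
    occupancy, urgency = state
    if action == "enter" and occupancy == "crowded":
        return -1.0   # participants disapprove of crowding people
    if action == "wait" and urgency == "high":
        return -0.5   # waiting delays an urgent delivery
    return 1.0        # otherwise broadly acceptable

Q = defaultdict(float)  # Q[(state, action)] -> estimated value

def choose_action(state):
    """Epsilon-greedy selection over the learned Q-values."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

for _ in range(5000):
    s = random.choice(STATES)          # each interaction is one situation
    a = choose_action(s)
    r = survey_reward(s, a)
    s_next = random.choice(STATES)
    best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])  # Q-learning update

print(max(ACTIONS, key=lambda a: Q[(("crowded", "high"), a)]))  # -> "ask_permission"
```

The design point is simply that participant responses act as the reward signal, so crowd-sourced norms, rather than designer intuition, shape the behavior the robot ultimately selects.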

Bryant Walker Smith

Lawyers and engineers can, and should, speak to each other in the same language. Both law and engineering are concerned with the actual use of the products they create or regulate. They employ similar concepts and terms and have interconnecting roles. Yet confusion and inconsistency can leave a regulator’s system boundaries wholly incongruous with a developer’s system. This chapter emphasizes the importance of four concepts – systems, language, use, and users – to the development, regulation, and safety of robots. To guide the discussion, the author uses motor vehicle automation as an example and references a number of technical documents. The author finds that defining a system’s boundaries is a key conceptual challenge. Inconsistency in the use of language – particularly of the terms control, risk, safety, reasonableness, efficiency, and responsibility – leads to unnecessary confusion. Furthermore, there is no uniform understanding of “safety” from a technical, much less a legal, perspective. The author discusses how several concepts and terms are susceptible to numerous meanings, and suggests more effective uses of them. Developers and regulators have interconnecting roles in ensuring the safety of robots and must thoughtfully coordinate the technical and legal domains without conflating them. Additionally, humans should be understood as part of the systems themselves, as they remain a key part of the design and use of automated systems. The systems analysis introduced in this chapter reveals the conceptual, linguistic, and practical difficulties that developers and regulators will confront on the path toward increasing automation. Sensibly defining automated systems requires thoughtful dialogue between the legal and technical domains in the same robot language.

Peter Asaro

The emergence of new technologies can and does challenge many of our existing assumptions and traditional interpretations of the law. In particular, the development of new military technologies has driven the emergence of new international humanitarian law (IHL). States are rapidly moving toward sophisticated military weapons with increasing degrees of automation and information processing. Such technologies dramatically transform the capabilities, behaviors, or effects of state action, and can undermine the basic assumptions of customary or treaty law. Existing law falls short in protecting the collective interests of states. This chapter asks how an emerging technology might necessitate new law, how new law ought to be formed, and what the emergent norms in these processes ought to be. In regulating potentially disruptive technologies, there continues to be debate about the appropriate relationship between the emergence of new technological capabilities, new norms, and new laws. The author argues that the evolution of international law ought to be shaped by moral considerations. The legacy of the Martens Clause is an explicit recognition of the role of moral considerations in the application of IHL and the formulation of new law. The chapter considers the legal framework and the means by which new law, jus nascendi, could come into being for new robotic technologies. The author discusses some of the philosophical issues that arise, and asserts that critics’ concerns over autonomous weapons should not be limited to the norms of discriminate use, proportionate use, and protection of civilians. The current debate ought to focus on the threats posed to responsibility, accountability, human rights, and human dignity. The principle of “meaningful human control” over the use of violent force in armed conflict is presented as an example of an emerging normative principle concerning the development of autonomous weapons.

A. Michael Froomkin


Neil M. Richards and William D. Smart

Today’s robots are leaving the research lab and coming to the consumer market, yet many existing robots are not designed to interact with humans. Even the Roomba cannot distinguish a human leg from a table leg. While research labs are still the primary home for robots, they offer an exciting glimpse of future robot applications in the real world. This chapter provides an overview of the conceptual issues and possible implications surrounding law, robots, and robotics. First, the authors offer a definition of robots as nonbiological autonomous agents: one that requires agency in the physical world, but only a subjective notion of agency, or “apparent agency.” The authors then explore the capabilities of robots, noting what they can do today and projecting what they might be able to do in the future. The authors argue that we should look to the lessons of cyberlaw in developing and examining the metaphors for robots that we use to shape the law. One key lesson is that if we get the metaphors for robots wrong, the outcome could be disastrous. The idea that robots are “just like people” – “the Android Fallacy” – should be rejected outright, according to the authors. Robots are tools, even though people, including lawmakers, tend to anthropomorphize them and ascribe human characteristics to them. Misunderstanding a new technology, in this case through anthropomorphic analogies for robots, can have real, pernicious effects on legislative design and should be avoided.

Kate Darling

Humans have a tendency to anthropomorphize robots, and we are also seeing an increase in robots specifically designed to engage with us socially. A “social robot” is a physically embodied, autonomous agent that communicates with humans through social cues, learns adaptively, and mimics human social states. If we perceive social robots as life-like things, the author asserts, our behavior toward them should be regulated. The author draws an analogy to animal abuse regulation, which is justified in part because it deters human behavior that is also harmful in other contexts. The level of human attachment to a robot can be traced to the interplay of three factors: physicality, perceived autonomous movement, and social behavior. These factors lead certain robots to elicit emotional reactions from people similar to those we have toward animals or other people. Such robots target our involuntary biological responses, leading us to project intent and sentiment onto their behavior; the effect is particularly strong when the robot exhibits a “caregiver effect.” The author discusses concerns that the spread of social robots could undermine the value of authenticity in society, replace human social interaction, and increase the dangers of manipulation and invasions of privacy. There are, however, some extremely positive social uses of social robots, particularly in health and education. Preventing robot abuse would protect societal values and guard against both traumatization and desensitization. Social robot abuse protection laws could effectively follow the analogy of animal abuse protection laws. The author defines a social robot as an embodied object with a defined degree of autonomous behavior that is specifically designed to interact with humans on a social level, while noting that “mistreatment” remains to be defined. The author also briefly comments on the property law implications and discusses when it would be appropriate to begin regulating our treatment of social robots.

Kristen Thomasen

The combination of human-computer interaction (“HCI”) technology with sensors that monitor human physiological responses offers state agencies improved methods for extracting truthful information from suspects during interrogations. These technologies have recently been implemented in prototype automated kiosks, which allow an individual to interact with an avatar interrogator. The HCI system uses a combination of visual, auditory, infrared and other sensors to monitor a suspect’s eye movements, voice, and various other qualities throughout an interaction. The information is then aggregated and analyzed to determine whether the suspect is being deceptive. This chapter argues that this type of application poses serious risks to individual rights such as privacy and the right to silence. The chapter concludes by suggesting that courts, developers, and state agencies institute limits on how this emerging technology collects information from the humans who engage with it, and on what information it can collect.
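To make the aggregation step concrete, here is a minimal, hypothetical sketch of how per-sensor features might be combined into a single deception estimate. The feature names, weights, and threshold are invented for illustration and are not drawn from the chapter or from any deployed kiosk system.

```python
# A hypothetical weighted-aggregation sketch; feature names, weights, and
# the threshold are invented and not drawn from any deployed kiosk system.
from dataclasses import dataclass

@dataclass
class SensorReading:
    eye_fixation_variance: float   # visual/eye-tracking sensor, normalized 0..1
    vocal_pitch_deviation: float   # auditory sensor, normalized 0..1
    facial_heat_change: float      # infrared sensor, normalized 0..1

# Illustrative weights; a real system would presumably learn these from data.
WEIGHTS = {"eye": 0.40, "voice": 0.35, "heat": 0.25}
THRESHOLD = 0.6  # arbitrary cutoff for flagging a response as deceptive

def deception_score(r: SensorReading) -> float:
    """Combine normalized per-sensor features into one aggregate score."""
    return (WEIGHTS["eye"] * r.eye_fixation_variance
            + WEIGHTS["voice"] * r.vocal_pitch_deviation
            + WEIGHTS["heat"] * r.facial_heat_change)

reading = SensorReading(0.7, 0.5, 0.8)
score = deception_score(reading)
print(f"score={score:.2f}, flagged={score > THRESHOLD}")
```

Even this toy weighting scheme shows where the chapter’s concerns arise: every input is involuntary physiological data collected from the person being questioned.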