
Neil M. Richards and William D. Smart

Today’s robots are leaving the research lab and entering the consumer market. Yet many existing robots are not designed to interact with humans: even the Roomba cannot distinguish a human leg from a table leg. While research labs remain the primary home for robots, they offer an exciting glimpse of future robot applications in the real world. This chapter provides an overview of the conceptual issues and possible implications surrounding law, robots, and robotics. First, the authors offer a definition of robots as nonbiological autonomous agents: one that requires agency in the physical world, but only a subjective notion of agency, or “apparent agency.” The authors then explore the capabilities of robots, noting what they do today and projecting what they might be able to do in the future. The authors argue that we should look to the lessons of cyberlaw in developing and examining the metaphors for robots we use to shape the law. One key lesson is that if we get the metaphors for robots wrong, the outcome could be disastrous. The idea that robots are “just like people” – the “Android Fallacy” – should be rejected outright, according to the authors. Robots are tools, even though people, including lawmakers, tend to anthropomorphize them with perceived human characteristics. Misunderstanding a new technology – in this case, through anthropomorphizing analogies of robots – can have real, pernicious effects on legislative design and should be avoided.


Neil M. Richards and Jonathan H. King

In our inevitable big data future, critics and sceptics argue that privacy will have no place. We disagree. When properly understood, privacy rules will be an essential and valuable part of our digital future, especially if we wish to retain the human values on which our political, social and economic institutions have been built. In this chapter we make three simple points. First, we need to think differently about ‘privacy’. Privacy is not merely about keeping secrets, but about the rules we use to regulate information, which is, and always has been, in intermediate states between totally secret and known to all. Privacy rules are information rules, and in an information society, information rules are inevitable. Second, human values, rather than privacy for privacy’s sake, should animate our information rules. These must include protections for identity, equality, security and trust. Third, we argue that privacy in our big data future can and must be secured in a variety of ways. Formal legal regulation will be necessary, but so too will ‘soft’ regulation by entities like the Federal Trade Commission, and the development of richer notions of big data ethics.