This chapter is about the relationship between AI technology and society in fundamental rights theory. In fundamental rights doctrine, the relationship between technology and society is seldom reflected upon. Legal practitioners tend to view technology as a black box. For scholars of science and technology studies (STS), similarly, the law is a closed book. Such reductionist or compartmentalised thinking in the law and the social sciences must be overcome if a conceptualisation of AI technology in fundamental rights theory is to succeed. The chapter offers a perspective on these issues based on a re-interpretation of affordance theory (as originally framed in STS). First, the question ‘how do affordances come into a technology?’ is answered from the viewpoint of Bryan Pfaffenberger’s ‘technological drama’. Accordingly, the affordances (the possibilities and constraints of a technology) are shaped in a dialogue between a ‘design constituency’ and an ‘impact constituency’ in which the technology’s materiality and sociality are co-determined. Second, this theory is applied to study the co-determination of AI technology. Finally, affordance theory is combined with socio-legal theorising that understands fundamental rights as social institutions bundling normative expectations about individual and social autonomies. How do normative expectations about the affordances of AI technology emerge, and how are they constitutionalised?
Christoph B. Graber
Kieron O’Hara and Mireille Hildebrandt
This chapter contains a crossing of swords and thoughts between the editors, who come from different disciplinary backgrounds and different philosophical traditions, but nevertheless occupy much common ground. The conversation is too short to enable the cutting edge of Occam’s razor, but refers to other work with more extensive argumentation. We agree on a great deal. In particular, we share a precautionary approach that requires proactive consideration of how one’s experimental business models or progressive politics may impact others. However, as the reader will see, at that point we part company! The ensuing dialogue has been illuminating for us, and hopefully will whet the reader’s appetite for the excellent chapters that follow.
Kieron O’Hara and Mark Garnett
Political issues pertaining to data-driven agency and the use of ‘big data’ to make decisions about people’s lives are usually seen through the lens of liberalism. A conservative examination of data-driven agency requires a different lens. This chapter adopts the perspective of evolving modernity. It considers the philosophy of three major conservative thinkers, Edmund Burke, Alexis de Tocqueville and Michael Oakeshott, in the context of the problematisation of big data contained in Mireille Hildebrandt’s Smart Technologies and the End(s) of Law. Present-day conservatives need to rethink their traditional antipathy to the state, reverting to a Burkean understanding of the public-private distinction, and also to revise views of individual agency in the face of the facilitation of collective agency by networked digital technology.
If data-driven agency is a form of agency based on what machines have learned, it seems important to understand the nature and limits of the type of knowledge that can be mechanically obtained from digital data. After reviewing some of the popular claims made about big data, this chapter explores some of the differences in the use of big data and machine science in the natural sciences and in the social domain. It insists in particular on the fact that in the natural sciences, what constitutes data and how it should be interpreted are under the collective jurisdiction of specialists of the domain, whose authority is recognized by governments, funding agencies and the general public. In the social domain, by contrast, the data is often claimed to be simply ‘found’, though it is explicitly sought for a variety of reasons. It is not, however, ‘crafted’ in the sense of being validated and authenticated by the community of concerned researchers. In consequence, anyone who has the necessary technical competence gains the authority to interpret the data and declare what the data proves. Finally, the chapter analyzes some aspects of machine learning and science that tend to encourage the faulty interpretation that ‘data is enough’.
Gerard de Vries
This chapter discusses the threat digital technologies might pose to democracy by shifting the focus to the question of where, and why, democracy might be vulnerable to digital technologies. Two conceptions of democracy are considered: a minimalistic one, which defines democracy as the form of government in which power resides in the people and is exercised by them either directly or by means of elected representatives; and a relationist one, defended by Montesquieu and Dewey, which defines democracy by the way power is exercised under the rule of law. Although digital technologies may put society in disarray, under the minimalistic definition democracy is found to be not at risk. However, because it disregards the conditions under which democracy can continue to function properly, the minimalistic conception is found to suffer from ‘the fallacy of misplaced concreteness’. In contrast, the Montesquieu/Dewey conception makes it possible to identify several vulnerabilities – both on the level of electoral law and the electoral system, and on the level of the way the ‘public interest’ is organized – which, if not properly addressed, may annihilate the liberty achieved by the rule of law. Some policy options for managing these vulnerabilities are suggested.
Julie E. Cohen
For several hundred years, political philosophers and legal theorists have conceptualized media technologies as ‘technologies of freedom’. Some things about that equation have not changed; certainly, access to information, the capacity for reason, self-determination, and democratic self-government are inescapably interrelated. In other respects, however, the operation of contemporary platform-based media infrastructures has begun to mimic the operation of the collection of brain structures that mid-twentieth-century neurologists christened the limbic system and that play vital roles in a number of precognitive functions, including emotion, motivation, and habit-formation. Today’s networked information flows are gradually being optimized for subconscious, affective appeal, and those choices have proved powerful in ways their designers likely did not intend or expect.
Following a summary of central elements of Hildebrandt’s arguments, I take up three major intersections between Hildebrandt’s work and two fields: (1) Media and Communication Studies (MCS) and (2) Information and Computing Ethics (ICE). First, Medium Theory (MT) helps clarify and support her understanding of the complex relationship between print technology and the Rule of Law. Second, I introduce virtue ethics in ICE in order to show how Hildebrandt’s accounts overlap with key components of this ethical framework: virtue ethics thus powerfully reinforces and enhances certain aspects of Hildebrandt’s normative accounts and argument. Finally, I suggest how Hildebrandt’s accounts and argumentative trajectory likewise cohere with an emerging shift from a digital to a post-digital era (MCS). These three intersections both strengthen her arguments and show how her work is deeply relevant not only to legal scholarship and philosophy, but also to Media and Communication Studies and Information and Computing Ethics.
Despite serious reservations over issues of transparency, accountability, bias, and the like, algorithms offer a potentially significant contribution to furthering human well-being via the influencing of beliefs, desires, and choices. Should governments be permitted to use algorithmic tools to cultivate socially beneficial attitudes, or to enhance the well-being of their citizens? In this chapter I argue that there are principled moral reasons why governments may not shape the ends of individuals in this way, even when doing so would benefit well-being. Such shaping would undermine the kinds of ethical independence that state legitimacy is based upon. However, I also argue that this does not apply to what Rawls calls a ‘sense of justice’ – the dispositions necessary to uphold just political and socioeconomic institutions. Where traditional methods of influence, such as education, prove lacking, algorithmic enhancement towards those ends may be permissible. Mireille Hildebrandt’s fictitious piece of computational software – ‘Toma’ – serves as the point of departure for this argument, and provides many of the insights regarding the autonomic nature of such influence.
Niels van Dijk
The European Parliament has recently proposed to grant robots the special legal status of electronic personhood, so that liability for damage they have caused can be attributed to them directly. The proposal moves this idea from science fiction to possible legal reality. This chapter will reflect upon the underlying notion of personhood by exploring a variety of ways in which persons have been used as doubles for individuals: dramatic persons as masks on stage, juristic persons as fictions with effects, political persons unifying a multitude, average persons as statistical realities, profile persons as machine-generated group portraits, and digital persons as individual data portraits or smart agents. We will make a profile for each of these modes of personification to study the diverse ways persons have been given conceptual meaning and visual sense. This juxtaposition contrasts each type with the others to reveal differences between their prominent attributes, pertaining to: the means by which they are composed, the actors wearing the masks (representors), what can be done with them (affordances), and the representative relation between person and subject. These contrasts can then in turn be used to judge the new entry of the electronic person into the existing hall of masks.
Mireille Hildebrandt and Kieron O’Hara
This chapter introduces the core topics of this volume, providing a hopefully appetizing overview of the chapters and their interrelations.