Life and the Law in the Era of Data-Driven Agency

Edited by Mireille Hildebrandt and Kieron O’Hara

This ground-breaking and timely book explores how big data, artificial intelligence and algorithms are creating new types of agency, and the impact that this is having on our lives and the rule of law. Addressing the issues in a thoughtful, cross-disciplinary manner, leading scholars in law, philosophy, computer science and politics examine the ways in which data-driven agency is transforming democratic practices and the meaning of individual choice.

Chapter 11: Artificial intelligence, affordances and fundamental rights

Christoph B. Graber

Abstract

This chapter is about the relationship between AI technology and society in fundamental rights theory. In fundamental rights doctrine, the relationship between technology and society is seldom reflected upon. Legal practitioners tend to view technology as a black box; for scholars of science and technology studies (STS), similarly, the law is a closed book. Such reductionist or compartmentalised thinking in the law and social sciences must be overcome if a conceptualisation of AI technology in fundamental rights theory is to succeed. The chapter offers a perspective on these issues based on a re-interpretation of affordance theory (as originally framed in STS). First, the question 'how do affordances come into a technology?' is answered from the viewpoint of Bryan Pfaffenberger's 'technological drama'. Accordingly, the affordances of a technology (its possibilities and constraints) are shaped in a dialogue between a 'design constituency' and an 'impact constituency' in which the technology's materiality and sociality are co-determined. Second, this theory is applied to study the co-determination of AI technology. Finally, affordance theory is combined with socio-legal theorising that understands fundamental rights as social institutions bundling normative expectations about individual and social autonomies. How do normative expectations about the affordances of AI technology emerge, and how are they constitutionalised?
