Chapter 12: Editors' reflections

Facial recognition software used for identity verification in unemployment benefits determinations has worked inconsistently. Algorithms used to screen tenant applications have been accused of entrenching housing discrimination. Data sets employed to train computer vision models have been found to contain racist and misogynistic labels for people of Black and Asian descent. As artificial intelligence (AI) and machine learning (ML) become increasingly common, particularly in otherwise mundane public service delivery, we are constantly reminded that these systems are frail and brittle, and prone to authoritatively articulating human bias. Unrepresentative data, failures in goal optimisation, and adversarial attacks all contribute to failure modes that might lead these systems to automatically and surreptitiously discriminate against vulnerable data subject populations.
