Facial recognition software used for identity verification in unemployment benefits determinations has been working inconsistently. Algorithms used to screen tenant applications have been accused of entrenching housing discrimination. Data sets employed to train computer vision models were found to contain racist and misogynistic labels for people of Black and Asian descent. As artificial intelligence (AI) and machine learning (ML) become increasingly common, particularly in otherwise-mundane public service delivery, we are constantly reminded that these systems are frail and brittle, and prone to authoritatively articulating human bias. Unrepresentative data, failures in goal optimisation, and adversarial attacks all contribute to failure modes that might lead these systems to automatically and surreptitiously discriminate against vulnerable data subject populations.