Chapter 16: Defining the scope of AI ADM system risk assessment

Guidance documents for technology governance and data protection often use broad terms such as Artificial Intelligence (AI). This is problematic: the term 'AI' is inherently ambiguous, and it is difficult to tease out the nuances in the 'grey areas' between AI techniques and automated decision-making (ADM) processes. We use four illustrative examples to demonstrate that such categorisation gives only partial information about each system's risk profile. We argue that organisations should adopt risk-oriented approaches that identify system risks beyond the classification of a technology as AI or non-AI. Organisational governance processes should entail a more holistic assessment of system risk: rather than relying on 'top-down' categorisations of the technologies employed, they should apply a 'bottom-up' risk identification process that enables more effective identification of appropriate controls and mitigation strategies.
