Guidance documents for technology governance and data protection often use broad terms such as Artificial Intelligence (AI). This is problematic: the term 'AI' is inherently ambiguous, and it is difficult to tease out the nuances in the 'grey areas' between AI techniques and automated decision-making (ADM) processes. We use four illustrative examples to demonstrate that categorising a system as AI or non-AI gives only partial information about its risk profile. We argue that organisations should adopt risk-oriented approaches that identify system risks extending beyond the classification of a technology as AI or non-AI. Organisational governance processes should entail a more holistic assessment of system risk: rather than relying on 'top-down' categorisations of the technologies employed, they should apply a 'bottom-up' risk identification process that enables more effective identification of appropriate controls and mitigation strategies.