AI systems are commonly believed to be able to aid in more objective decision-making and, eventually, to make objective decisions of their own. This belief, however, rests on fallacies rooted in an overly simplistic view of organizational decision-making. Drawing on an ethnography of the Dutch police, we demonstrate that making decisions with AI requires practical explanations that go beyond an analysis of the computational methods used to generate predictions, encompassing an entire ecology of unbounded, open-ended interactions and interdependencies. In other words, explaining AI is ecological. Yet this typically goes unnoticed. We argue that this is highly problematic: it is only by acknowledging this ecology that we can recognize that we are not, and never will be, making objective decisions with AI. If we continue to ignore the ecology of explaining AI, we end up reinforcing, and potentially further stigmatizing, existing societal categories.
