Chapter 4: The becoming of AI: a critical perspective on the contingent formation of AI

This chapter offers a critical perspective on the contingent formation of artificial intelligence as a key sociotechnical institution in contemporary societies. It shows that the development of AI is not merely a product of functional technological improvement but depends just as much on economic, political, and discursive drivers. Building on work from STS and critical algorithm studies, it argues that technological developments are always contingent on transformations along multiple scientific trajectories as well as on the interaction between multiple actors and discourses. This is a consequential perspective for our conceptual understanding of AI and its epistemology: it directs attention away from detecting impact and bias ex post and toward how AI is coming into being as a powerful sociotechnical entity. We illustrate this process in three key domains: technological research, media discourse, and regulatory governance.

