Chapter 10: Risk perceptions and trust mechanisms related to everyday AI

The increasing ubiquity of and rapid advances in AI technology have spurred excitement and optimism, but have also prompted growing concerns, such as fear of losing control of AI, worries about privacy and surveillance, algorithmic bias, technological unemployment, and other ethical issues. The success of integrating AI into broad social systems depends critically on the public's trust in and risk perceptions of AI technology. As such, it is essential to understand the sources and underlying mechanisms through which public understandings of AI are constructed. This chapter reviews risk perceptions, privacy concerns, and trust mechanisms related to AI. Theoretical concepts, corresponding empirical findings, and practical implications are critically examined.
