Artificial intelligence (AI) can provide organizations with valuable insights to improve management decision-making, including in human resource management (HRM). Its use makes decisions faster, more consistent and more autonomous, but ethical issues persist. A major concern around AI-augmented HRM is the prospect of reinforcing, rather than eliminating, bias in decisions that affect existing and potential employees. Understanding the types of bias, their effects and the techniques for mitigating them is therefore crucial for organizations and individuals alike. This chapter explores the risk of bias becoming encoded in datasets and algorithms, and the role of HRM practitioners and AI developers in addressing it. We first discuss three dominant categories of AI bias: systematic; statistical and computational; and human. We then discuss mitigation techniques and their challenges. Finally, the chapter concludes by providing recommendations for actions that mitigate bias when developing AI for HRM.
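The kind of statistical bias check alluded to above can be illustrated with a minimal, hypothetical sketch. The example below is not from the chapter; the data, group labels and function names are invented for illustration. It computes the "disparate impact" ratio of selection rates between two candidate groups, a common first-pass audit of AI-assisted hiring decisions.

```python
# Minimal sketch (illustrative only): measuring disparate impact,
# one simple statistical check for bias in hiring decisions.
# All data and group labels below are hypothetical.

def selection_rate(decisions):
    """Fraction of candidates selected (decision == 1)."""
    return sum(decisions) / len(decisions)

def disparate_impact(group_a, group_b):
    """Ratio of selection rates between two groups.

    Values below 0.8 are often flagged under the 'four-fifths rule'
    used as a rule of thumb in employment contexts.
    """
    return selection_rate(group_a) / selection_rate(group_b)

# Hypothetical screening outcomes (1 = advanced to interview)
group_a = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]  # selection rate 0.3
group_b = [1, 1, 0, 1, 0, 1, 0, 1, 0, 1]  # selection rate 0.6

ratio = disparate_impact(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.30 / 0.60 = 0.50
```

A ratio of 0.50, well below the four-fifths threshold, would prompt closer scrutiny of the dataset and model; the chapter's mitigation techniques address how such disparities can be reduced rather than merely detected.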
