This chapter explores two distinct ways in which ‘responsibility’ has been constructed as a conceptual and practical tool for understanding and addressing the implications of AI systems for society. The first treats responsibility as an ideal: concretely, a set of goals and practices that bring together concerns about accountability and liability, transparency of process, and the integration of principles of beneficence into the practice of AI development and deployment. The second, analytical approach treats ‘responsibility’ as a construct that does both political and practical work in relation to governance. This chapter engages with this second aspect, which, it can be argued, operates independently of the first. This view can help us understand how AI technologies challenge our current models for assessing technologies’ risks and harms, and can offer an alternative route to governing AI, one that takes into account a pluralist politics responsive to justice concerns.