Chapter 10: Trigger events

At the end of the trajectory, as autonomous technologies become more sophisticated, one can imagine a number of possible scenarios, each involving a different way in which humans and machines will coexist. However, the trajectory of the coevolution of legal responsibility and autonomous machines needs to be cabined to some extent. The law already addresses complex systems, albeit with concepts borrowed heavily from individual legal and moral responsibility. It is only if society feels it necessary to become finer grained in assigning responsibility, to move from the large-scale entities that design and manufacture autonomous machines and systems to the individual designers and engineers who could be said to have contributed to the defects that led to harms, and to the individuals in the chain of command who use autonomous machines, that the problems of associational responsibility become more keenly felt. At the same time, some commentators and policymakers are calling for exactly this kind of accountability. This in turn serves as an impetus for changes in legal responsibility, but perhaps more likely as an impetus for more sophisticated machines. Specific proposals seem inapt given the possible length of the trajectory plotted here, but there are points along the way that merit particular attention: the achievement of general intelligence is the most significant, but before then, we will want to pay careful attention when the first set of cases involving autonomous technologies are decided and when concrete steps are taken to give legal personhood to artificial agents.
