Due to recent advances in computerized analysis and robotics, automated law enforcement has become technically feasible. Unfortunately, laws were not created with automated enforcement in mind, and even seemingly simple laws have subtle features that require programmers to make assumptions when encoding them. We demonstrate this ambiguity with an experiment in which a group of 52 programmers was assigned the task of automating traffic speed limit enforcement. A late-model vehicle was equipped with a sensor that collected actual vehicle speed over a one-hour commute. Each programmer, working without collaboration, wrote a program that computed the number of speed limit violations and issued mock tickets. Despite having quantitative data for both vehicle speed and the speed limit, the number of tickets issued varied from none to one per sensor sample above the speed limit. Our results highlight the significant deviation in the number and type of citations issued, based on legal interpretations and assumptions made by programmers without legal training. These deviations were mitigated, but not eliminated, in one sub-group that was provided with a legally reviewed software design specification, providing insight into ways to automate the law in the future. Automation of legal reasoning seems most effective in contexts where legal conclusions are predictable because little room exists for choice in a given model; that is, where they are determinable. Yet this experiment demonstrates that even relatively narrow and straightforward “rules” are problematically indeterminate in practice.
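The spread from zero tickets to one per sample can be illustrated by two plausible readings of "a speed limit violation" over sampled data. The sketch below is illustrative only (the data, function names, and sampling rate are hypothetical, not taken from the study): one reading counts every sensor sample above the limit, another counts each continuous excursion above the limit as a single violation.

```python
def tickets_per_sample(speeds, limit):
    """Interpretation A: one ticket for every sensor sample above the limit."""
    return sum(1 for s in speeds if s > limit)

def tickets_per_excursion(speeds, limit):
    """Interpretation B: one ticket per continuous excursion above the limit."""
    tickets = 0
    above = False  # were we above the limit at the previous sample?
    for s in speeds:
        if s > limit and not above:
            tickets += 1  # a new excursion begins
        above = s > limit
    return tickets

# Hypothetical speed samples (mph) against a 55 mph limit:
samples = [54, 56, 57, 55, 54, 58, 59, 60, 54]
print(tickets_per_sample(samples, 55))     # 5 (five samples exceed 55)
print(tickets_per_excursion(samples, 55))  # 2 (two continuous excursions)
```

Both functions consume identical data, yet issue different citation counts; other defensible choices (tolerance margins, minimum excursion duration, averaging windows) widen the spread further, which is the indeterminacy the experiment exposes.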