Automated decision tools, which increasingly rely on machine learning (ML), are used in systems that permeate our lives. Examples range from systems for offering credit and employment to systems for serving advertising. We explore the relationship between generalizability and the division of labor between humans and machines in decision systems. An automated decision tool is generalizable to the extent that its outputs on new data are as correct as the outputs it produced on the data used to create it. The generalizability of an ML model depends on how it is trained, on the availability of data, and on the underlying predictability of the outcome it models. Ultimately, whether a tool’s generalizability is adequate for a particular decision system depends on how the tool is deployed, usually in conjunction with human adjudicators. Taking generalizability explicitly into account highlights important aspects of decision-system design, as well as important normative trade-offs, that might otherwise be missed.
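The notion of generalizability in the abstract can be made concrete. The following is a minimal, illustrative sketch, not drawn from the article, that measures generalizability as the gap between a model’s accuracy on the data used to create it and its accuracy on held-out data; the synthetic dataset and the choice of classifier are assumptions made purely for illustration.

```python
# Illustrative only: quantify generalizability as the gap between a model's
# accuracy on its training data and on data it has never seen.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for, e.g., credit or hiring data (hypothetical).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# Correctness on the data used to create the tool vs. on new cases.
train_acc = model.score(X_train, y_train)
test_acc = model.score(X_test, y_test)

# A large gap suggests the tool is less generalizable than its
# performance on the training data would imply.
print(f"training accuracy: {train_acc:.2f}, "
      f"held-out accuracy: {test_acc:.2f}, "
      f"gap: {train_acc - test_acc:.2f}")
```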