Machine Decisions and Human Consequences
This chapter explains how machine learning works and, consequently, how modern intelligent algorithms or ‘classifiers’ make decisions. It then critically evaluates a real-world classifier, the Harm Assessment Risk Tool (HART)—an algorithmic decision-making tool employed by the Durham police force to inform custody decisions concerning individuals arrested on suspicion of criminal offences. The tool is assessed against four normative benchmarks: prediction accuracy; fairness and equality before the law; transparency and accountability; and informational privacy and freedom of expression. The chapter argues that systems which employ decision-making (or decision-supporting) algorithms, and which have the potential to detrimentally affect individual or collective human rights, deserve significantly greater regulatory scrutiny than systems whose algorithms merely process objects.