Algorithmic Regulation
Latest Publications


TOTAL DOCUMENTS: 11 (FIVE YEARS 11)
H-INDEX: 2 (FIVE YEARS 2)
Published By: Oxford University Press
ISBN: 9780198838494, 9780191874727

2019, pp. 248-262
Author(s): Lee A Bygrave

This chapter focuses on Articles 22 and 25 of the EU’s General Data Protection Regulation (Regulation 2016/679). It examines how these provisions will impact automated decisional systems. Article 22 gives a person a qualified right ‘not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her’. Article 25 imposes a duty on controllers of personal data to implement technical and organizational measures so that the processing of the data will meet the Regulation’s requirements and otherwise ensure protection of the data subject’s rights. Both sets of rules are aimed squarely at subjecting automated decisional systems to greater accountability. The chapter argues that the rules suffer from significant weaknesses that are likely to hamper their ability to meet this aim.
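To make the structure of Article 22(1) concrete, here is a minimal sketch, not drawn from the chapter, that expresses the applicability test described above as a simple predicate. The class and field names are invented for illustration, and the Article's exceptions and safeguards are omitted.

from dataclasses import dataclass

@dataclass
class Decision:
    solely_automated: bool        # no meaningful human involvement in the decision
    produces_legal_effects: bool  # e.g. refusal of credit, a benefit, or a contract
    similarly_significant: bool   # comparably serious impact on the individual

def article_22_applies(d: Decision) -> bool:
    """True if the qualified right in Article 22(1) GDPR is engaged (simplified)."""
    return d.solely_automated and (d.produces_legal_effects or d.similarly_significant)

print(article_22_applies(Decision(True, False, True)))   # True: solely automated, significant effect
print(article_22_applies(Decision(False, True, False)))  # False: a human is meaningfully in the loop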


2019, pp. 178-200
Author(s): Martin Lodge, Andrea Mennicken

This chapter focuses on the potentials and challenges posed by the utilization of machine learning algorithms in the regulation of public services, that is, services supplied by or on behalf of government to a particular jurisdiction’s community, including healthcare, education, or correctional services. It argues that the widespread enthusiasm for algorithmic regulation hides much deeper differences in worldviews about regulatory approaches, and that advancing the utilization of algorithmic regulation potentially transforms existing mixes of regulatory approaches in unanticipated ways. It also argues that regulating through algorithmic regulation presents distinct administrative problems in terms of knowledge creation, coordination, and integration, as well as ambiguity over objectives. These challenges for the use of machine learning algorithms in the algorithmic regulation of public services require renewed attention to questions of the ‘regulation of regulators’.


2019, pp. 150-177
Author(s): Alex Griffiths

This chapter focuses on one particularly salient application of algorithmic regulation in the public sector: risk assessment used to inform decisions about the allocation of enforcement resources, with particular attention to its accuracy and effectiveness in risk prediction. Drawing on two UK case studies in health care and higher education, it highlights the limited effectiveness of algorithmic regulation in these contexts and draws attention to the prerequisites that must be met for algorithmic regulation to play fully to its predictive strengths. In so doing, it warns against the premature application of algorithmic regulation to ever-more regulatory domains, serving as a sober reminder that delivering on the claimed promises of algorithmic regulation is anything but simple, straightforward, or ‘seamless’.
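By way of illustration only (the chapter’s case-study models are not reproduced here), risk-based targeting of this kind typically scores each regulated entity on a handful of indicators and directs scarce inspection capacity to the highest-scoring ones. The provider names, features, and weights below are entirely hypothetical.

def risk_score(entity, weights):
    # Weighted sum of the entity's risk indicators (hypothetical linear model).
    return sum(weights.get(k, 0.0) * v for k, v in entity["features"].items())

def allocate_inspections(entities, weights, capacity):
    # Rank by predicted risk and inspect only as many providers as capacity allows.
    ranked = sorted(entities, key=lambda e: risk_score(e, weights), reverse=True)
    return [e["name"] for e in ranked[:capacity]]

providers = [
    {"name": "Provider A", "features": {"past_breaches": 2, "complaints": 5}},
    {"name": "Provider B", "features": {"past_breaches": 0, "complaints": 1}},
    {"name": "Provider C", "features": {"past_breaches": 3, "complaints": 9}},
]
weights = {"past_breaches": 2.0, "complaints": 0.5}  # hypothetical weights
print(allocate_inspections(providers, weights, capacity=2))  # ['Provider C', 'Provider A']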


2019, pp. 82-97
Author(s): Natalia Criado, Jose M Such

This chapter focuses on a particular normative concern associated with machine decision-making that has attracted considerable attention in policy debate: the problem of bias in algorithmic systems, which gives rise to various forms of ‘digital discrimination’. Digital discrimination entails treating individuals unfairly, unethically, or simply differently on the basis of personal data that is automatically processed by an algorithm. It often reproduces existing discrimination in the offline world, either by inheriting the biases of prior decision-makers or by reflecting widespread prejudices in society. The chapter highlights various forms and sources of digital discrimination, pointing to a rich and growing body of research that seeks to develop technical responses for correcting, or otherwise removing, these sources of bias.
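As a minimal sketch of the kind of measurement this technical literature builds on (not a method proposed in the chapter), the snippet below computes the gap in positive-decision rates between groups, often called the demographic parity difference. The decisions and group labels are invented.

from collections import defaultdict

def demographic_parity_gap(decisions, groups):
    # decisions: 0/1 outcomes; groups: group label for each decision (same length).
    totals, positives = defaultdict(int), defaultdict(int)
    for d, g in zip(decisions, groups):
        totals[g] += 1
        positives[g] += d
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical loan approvals for two groups of applicants.
gap, rates = demographic_parity_gap([1, 0, 1, 1, 0, 0, 1, 0], ["A", "A", "A", "A", "B", "B", "B", "B"])
print(rates, gap)  # {'A': 0.75, 'B': 0.25} and a gap of 0.5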


2019, pp. 98-118
Author(s): John Danaher

This chapter focuses on one specific application of algorithmic regulation: the use of AI-based personal digital assistants. Rather than being employed as tools for shaping the behaviour of others, these algorithmic tools are employed by individuals to assist their own self-regarding decision-making and goal achievement. The chapter argues that personal autonomy is under threat in new and interesting ways. It evaluates, and disputes, the claim that these new threats should not be overestimated because the technology is just an old wolf in new sheep’s clothing. Finally, it looks at responses to these threats at both the individual and societal level, and argues that although an attitude of ‘helplessness’ should not be encouraged among users of algorithmic tools, there is an important role for legal and regulatory responses to these threats that go beyond what is currently on offer.


2019, pp. 49-81
Author(s): Teresa Scantamburlo, Andrew Charlesworth, Nello Cristianini

This chapter discusses how machine learning works and, as a result, how decisions are made by modern intelligent algorithms or ‘classifiers’. It critically evaluates a real-world ‘classifier’, the Harm Assessment Risk Tool (HART)—an algorithmic decision-making tool employed by the Durham police force to inform custody decisions concerning individuals who have been arrested for suspected criminal offences. It evaluates the tool by reference to four normative benchmarks: prediction accuracy, fairness and equality before the law, transparency and accountability, and informational privacy and freedom of expression. It argues that systems which utilize decision-making (or decision-supporting) algorithms, and have the potential to detrimentally affect individual or collective human rights, deserve significantly greater regulatory scrutiny than those systems that use decision-making algorithms to process objects.
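HART itself is a proprietary random-forest model and is not reproduced here; the sketch below merely illustrates, on invented data, the sort of overall-accuracy and group-wise error-rate benchmarking that the chapter’s first two normative benchmarks imply.

def benchmark(predictions, outcomes, groups):
    # Overall accuracy plus per-group false-positive rate for binary risk labels (1 = high risk).
    accuracy = sum(p == o for p, o in zip(predictions, outcomes)) / len(outcomes)
    false_positive_rates = {}
    for g in set(groups):
        negatives = [p for p, o, gg in zip(predictions, outcomes, groups) if gg == g and o == 0]
        false_positive_rates[g] = sum(negatives) / len(negatives) if negatives else float("nan")
    return accuracy, false_positive_rates

accuracy, fpr_by_group = benchmark(
    predictions=[1, 1, 0, 0, 1, 0],   # invented model outputs
    outcomes=[1, 0, 0, 0, 1, 1],      # invented observed outcomes
    groups=["x", "x", "x", "y", "y", "y"],
)
print(accuracy, fpr_by_group)  # ≈0.67 overall, but false-positive rates differ sharply by group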


2019, pp. 121-149
Author(s): Michael Veale, Irina Brass

This chapter first explains the types of machine learning systems used in the public sector, detailing the processes and tasks that they aim to support. It then looks at three levels of government—the macro, meso, and the street-level—to map out, analyse, and evaluate how machine learning in the public sector more broadly is framed and standardized across government. It concludes that, while the use of machine learning in the public sector is mostly discussed with regard to its ‘transformative effect’ versus ‘the dynamic conservatism’ characteristic of public bureaucracies that embrace new technological developments, it also raises several concerns about the skills, capacities, processes, and practices that governments currently employ, the forms of which can have value-laden, political consequences.


2019, pp. 203-223
Author(s): Leighton Andrews

Taking the UK as its subject of analysis, this chapter asks how prepared public administrators and political leaders are for the challenges of algorithmic decision-making and artificial intelligence: in other words, what is the state of governance readiness? It begins by examining the literature on governance readiness and administrative capacity. It considers whether this literature is adequate to the task of identifying such capacity issues in a context where the discourse is dominated by the larger technology companies, and raises the question of whether ‘discursive capacity’ is a requirement for governance readiness in this area. It then sets out evidence from empirical research on algorithmic harms and the consideration given to these issues in the political sphere. The chapter then discusses the state of administrative capacity at multiple levels of governance in the UK and concludes by setting out questions to guide further research.


2019, pp. 224-247
Author(s): Jason D Lohr, Winston J Maxwell, Peter Watts

Many firms, including those that do not regard themselves as traditional ‘tech’ firms, consider the prospect of artificial intelligence (AI) both an intriguing possibility and a potential new area of risk for their businesses. The application of existing AI technologies raises significant new issues in some of the most fundamental areas of law, including: ownership and property rights; the creation, allocation, and sharing of value; misuse, errors, and responsibility for resulting harm; individual liberty and personal privacy; and economic collusion and monopolies. This chapter first examines how businesses are already managing some of these risks through contract. It then examines some of the considerations involved in public regulation of AI-related risks. It concludes by proposing a four-layer model for thinking about AI regulation in the broad sense.


2019, pp. 21-48
Author(s): Karen Yeung

This chapter poses a deceptively simple question: what, precisely, are we concerned with when we worry about decision-making by machine? Focusing on fully automated decision-making systems, it suggests that there are three broad sources of ethical anxiety arising from the use of algorithmic decision-making systems: concerns associated with the decision process, concerns about the outputs thereby generated, and concerns associated with the use of such systems to predict and personalize services offered to individuals. The chapter examines each of these concerns, drawing on analytical concepts that are familiar in legal and constitutional scholarship, often used to identify various legal and other regulatory governance mechanisms through which the adverse effects associated with particular actions or activities might be addressed.

