Legal and Technical Feasibility of the GDPR’s Quest for Explanation of Algorithmic Decisions: of Black Boxes, White Boxes and Fata Morganas

2020, Vol 11 (1), pp. 18–50
Author(s): Maja Brkan, Grégory Bonnet

Understanding the causes and correlations behind algorithmic decisions is currently one of the major challenges of computer science, addressed under the umbrella term “explainable AI (XAI)”. Being able to explain an AI-based system may help to make algorithmic decisions more satisfying and acceptable, to better control and update AI-based systems in case of failure, to build more accurate models, and to discover new knowledge directly or indirectly. On the legal side, the question of whether the General Data Protection Regulation (GDPR) provides data subjects with a right to explanation in case of automated decision-making has equally been the subject of a heated doctrinal debate. While arguing that the right to explanation in the GDPR should result from an interpretative analysis of several GDPR provisions jointly, the authors move this debate forward by discussing the technical and legal feasibility of the explanation of algorithmic decisions. Legal limits, in particular the secrecy of algorithms, as well as technical obstacles could potentially obstruct the practical implementation of this right. By adopting an interdisciplinary approach, the authors explore not only whether it is possible to translate the EU legal requirements for an explanation into actual machine learning decision-making, but also whether those limitations can shape the way the legal right is used in practice.

2021, Vol 46 (3-4), pp. 321–345
Author(s): Robert Grzeszczak, Joanna Mazur

The development of automated decision-making technologies creates the threat of de-iuridification: the replacement of legal acts’ provisions with automated, technological solutions. The article examines how selected provisions of the General Data Protection Regulation (GDPR) concerning, among other things, data protection impact assessments, the right not to be subject to automated decision-making, information obligations and the right of access are applied in the Polish national legal order. We focus on the institutional and procedural solutions regarding the involvement of expert bodies and other stakeholders in the process of specifying the norms included in the GDPR and enforcing them. We argue that the example of Poland shows that the solutions adopted in the GDPR do not shift the balance of regulatory power over automated decision-making to other stakeholders and as such do not favor a more participative approach to regulatory processes.


2020, Vol 1 (1)
Author(s): Marta Choroszewicz, Beata Mäihäniemi

This article uses a sociolegal perspective to address current problems surrounding data protection and the experimental use of automated decision-making systems. It outlines and discusses the hard laws regarding national adaptations of the European General Data Protection Regulation and other regulations, as well as the use of automated decision-making in the public sector in six European countries (Denmark, Sweden, Germany, Finland, France, and the Netherlands). Despite its limitations, the General Data Protection Regulation has impacted the geopolitics of the global data market by empowering citizens and data protection authorities to voice their complaints and conduct investigations regarding data breaches. We draw on the Esping-Andersen welfare state typology to advance our understanding of how states’ approaches to citizens’ data protection and data use for automated decision-making differ between countries in the Nordic regime and the Conservative-Corporatist regime. Our study clearly indicates a need for additional legislation regarding the use of citizens’ data for automated decision-making and for regulation of automated decision-making itself. Our results also indicate that legislation in Finland, Sweden, and Denmark draws upon mutual trust between public administrations and citizens and thus offers only general guarantees regarding the use of citizens’ data. In contrast, Germany, France, and the Netherlands have enacted a combination of general and sectoral regulations to protect and restrict citizens’ rights. We also identify some problematic national policy responses to the General Data Protection Regulation that empower governments and related institutions to subject citizens to states’ stricter obligations and tougher sanctions. The article contributes to the discussion on the current phase of the developing digital welfare state in Europe and the role of new technologies (i.e., automated decision-making) in this phase.
We argue that states and public institutions should play a central role in strengthening the social norms associated with data privacy and protection as well as citizens’ right to social security.


2017
Author(s): Michael Veale, Lilian Edwards

Cite as: Michael Veale and Lilian Edwards, 'Clarity, Surprises, and Further Questions in the Article 29 Working Party Draft Guidance on Automated Decision-Making and Profiling' (forthcoming) Computer Law and Security Review

The Article 29 Data Protection Working Party’s new draft guidance on automated decision-making and profiling seeks to clarify European data protection (DP) law’s little-used right to prevent automated decision-making, as well as the provisions around profiling more broadly, in the run-up to the General Data Protection Regulation. In this paper, we analyse these new guidelines in the context of recent scholarly debates and technological concerns. The guidelines foray into the less-trodden areas of bias and non-discrimination, the significance of advertising, the nature of “solely” automated decisions, impacts upon groups and the inference of special categories of data — at times, appearing more to be making or extending rules than to be interpreting them. At the same time, they provide only partial clarity — and perhaps even some extra confusion — around both the much discussed “right to an explanation” and the apparent prohibition on significant automated decisions concerning children. The Working Party appear to feel less mandated to adjudicate in these conflicts between the recitals and the enacting articles than to explore altogether new avenues. Nevertheless, the directions they choose to explore are particularly important ones for the future governance of machine learning and artificial intelligence in Europe and beyond.


2018
Author(s): Margot Kaminski

Many have called for algorithmic accountability: laws governing decision-making by complex algorithms, or AI. The EU’s General Data Protection Regulation (GDPR) now establishes exactly this. The recent debate over the right to explanation (a right to information about individual decisions made by algorithms) has obscured the significant algorithmic accountability regime established by the GDPR. The GDPR’s provisions on algorithmic accountability, which include a right to explanation, have the potential to be broader, stronger, and deeper than the preceding requirements of the Data Protection Directive. This Essay clarifies, largely for a U.S. audience, what the GDPR actually requires, incorporating recently released authoritative guidelines.


Author(s): Ciara Staunton

The coming into force of the General Data Protection Regulation (GDPR) on 25 May 2018 has brought about considerable changes in how data may be collected, stored and used. Biobanks, which require the collection, use and re-use of large quantities of biological samples and data, will be affected by these changes. In seeking to require ‘data protection by design’, the GDPR provides data subjects with certain individual rights. They are: the right to be informed, the right of access, the right to rectification, the right to erasure, the right to restrict processing, the right to data portability, the right to object, and rights in relation to automated decision-making and profiling. This chapter will consider each of these individual rights in turn and discuss their impact on biobank research. In particular, it will discuss the challenges that biobanks now face in upholding these individual rights, the limits of these rights in light of the technical realities of biobanks, and the potential impact they may have on the collection, sharing, use and re-use of biological data and material.


2020, Vol 51 (1), p. 1
Author(s): Ella Brownlie

Automated decision-making systems, developed using artificial intelligence and machine learning processes, are being used by companies, organisations and governments with increasing frequency. The purpose of this article is to outline the urgent case for regulating automated decision-making and examine the possible options for regulation. This article will argue that New Zealand's current approach to regulating decision-making is inadequate. It will then analyse art 22 of the European Union's General Data Protection Regulation, concluding that this regime also has significant flaws. Finally, this article will propose an alternative regulatory solution to address the novel challenge posed by automated decision-making. This solution aims to strike a balance between the interests of organisations in capitalising on the benefits of automated decision-making technology and the interests of individuals in ensuring that their right to freedom from discrimination is upheld.

