Automated Decision Making
Recently Published Documents


TOTAL DOCUMENTS: 255 (five years: 167)
H-INDEX: 12 (five years: 6)

2022, Vol. 17 (1), pp. 72-85
Author(s): Ronan Hamon, Henrik Junklewitz, Ignacio Sanchez, Gianclaudio Malgieri, Paul De Hert

2022, pp. 1-25
Author(s): Paolo Cavaliere, Graziella Romeo

Abstract: Under what conditions can artificial intelligence contribute to political processes without undermining their legitimacy? Thanks to the ever-growing availability of data and the increasing power of decision-making algorithms, the future of political institutions is unlikely to resemble what we have known throughout the last century, with parliaments possibly deprived of their traditional authority and public decision-making processes largely unaccountable. This paper discusses and challenges these concerns by proposing a theoretical framework under which algorithmic decision-making is compatible with democracy and, most relevantly, can offer a viable way to counter the rise of populist rhetoric in the governance arena. The framework rests on three pillars: (1) understanding which civic issues are subjected to automated decision-making; (2) controlling which issues are assigned to AI; and (3) evaluating and challenging the outputs of algorithmic decision-making.


2022, pp. 16-23
Author(s): Ivana Bartoletti, Lucia Lucchini

As artificial intelligence (AI) is increasingly deployed in almost all aspects of our daily lives, the discourse around the pervasiveness of algorithmic tools and automated decision-making can seem almost trivial. This chapter investigates the limits and opportunities within existing debates and examines the rapidly evolving legal landscape and recent court cases. The authors suggest that a viable approach to fairness, which ultimately remains a choice that organizations have to make, could be rooted in a new measurable and accountable responsible-business framework.


2021, Vol. 13 (13), pp. 55-69
Author(s): Daniela Wendt Toniazzo, Tales Schmidke Barbosa, Regina Linden Ruaro

Automated decision-making can bring great benefits to humanity, yet it is undeniable that machines pose a danger to individual human autonomy and can generate potentially discriminatory mechanisms through the perverse manipulation of algorithms. Although the artificial intelligence technologies used in automated decision-making are presented as neutral, they are not; some are even used to modulate human behavior through the extraction of profile data, building a perfect world of personalized consumption. This study analyzes the concept of automated decision-making and the scope of the right to explanation in the automated processing of data under the Brazilian system in comparison with the European system. The right to explanation, one of the imperatives of ethical guidelines for reliable artificial intelligence in automated decision-making, is highly relevant as a safeguard against discriminatory mechanisms and the opacity of such systems. Everything achieved through some degree of automation warrants a human explanation, and human supervision must guide all stages of the use of artificial intelligence mechanisms. The method of the present investigation is hypothetical-deductive in its approach and comparative in its procedure. Every automated decision must be explainable, both in its underlying logic and in the rationale for the particular decision, and it is unreasonable to exclude the human element from the review of an automated decision. The study examines, through comparison, the requirements that authorize automated decisions and their consequences; to that end, the concept of the right to explanation in the European and Brazilian legal systems is compared. The study concludes that the European Union treats the automated decision as a prohibition, whereas Brazil provides a right to review the automated decision without guaranteeing that this review is carried out by a human. Therefore, there is no legal support in Brazil for the right to explanation.

