Detecting Discrimination Risk in Automated Decision-Making Systems with Balance Measures on Input Data

Author(s):  
Mariachiara Mecati ◽  
Antonio Vetro ◽  
Marco Torchiano


2015 ◽
Vol 21 (3) ◽  
pp. 699-705
Author(s):  
Dana Kristalova

Abstract: War conflicts of the past twenty years have shown the need to digitize the battlefield. The commander needs a large amount of input data to reach the right decision, and these data are necessary for an automated decision-making process. Successful planning of movement routes, whether in peace or combat conditions, requires knowledge of the terrain together with algorithms for finding suitable paths. Determining the level of Cross-Country Movement (CCM), also called the trafficability of the terrain, is the basis for this decision. Analysis of Cross-Country Movement means assessing several geographical, tactical, technical and human factors together, i.e. it is a multi-criteria analysis. Finding the optimal path with respect to specified criteria (security, distance, economic cost, time) is not a trivial matter, and therefore road conditions are tested in the field. These tests validate the previous calculations.
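The multi-criteria route search described in this abstract can be illustrated with a minimal sketch. The snippet below is a hypothetical example, not the authors' method: the grid, the per-cell criteria (terrain penalty, security risk, traversal time) and the weights are invented for illustration. It collapses the criteria into a single weighted cost per cell and runs Dijkstra's algorithm over a 4-connected grid to find the cheapest route.

import heapq

# Hypothetical per-cell criteria on a small grid; values are illustrative only.
GRID = [
    [{"terrain": 1, "risk": 0, "time": 1}, {"terrain": 3, "risk": 1, "time": 2}, {"terrain": 1, "risk": 0, "time": 1}],
    [{"terrain": 2, "risk": 2, "time": 2}, {"terrain": 5, "risk": 3, "time": 4}, {"terrain": 1, "risk": 1, "time": 1}],
    [{"terrain": 1, "risk": 0, "time": 1}, {"terrain": 2, "risk": 1, "time": 2}, {"terrain": 1, "risk": 0, "time": 1}],
]

# Assumed weights expressing mission priorities (would have to be chosen per task).
WEIGHTS = {"terrain": 1.0, "risk": 2.0, "time": 0.5}


def cell_cost(cell):
    """Collapse the multi-criteria description of a cell into one scalar cost."""
    return sum(WEIGHTS[key] * cell[key] for key in WEIGHTS)


def cheapest_route(grid, start, goal):
    """Dijkstra search over 4-connected grid cells using the weighted cell cost."""
    rows, cols = len(grid), len(grid[0])
    best = {start: 0.0}
    frontier = [(0.0, start, [start])]
    while frontier:
        cost, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return cost, path
        r, c = pos
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                new_cost = cost + cell_cost(grid[nr][nc])
                if new_cost < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = new_cost
                    heapq.heappush(frontier, (new_cost, (nr, nc), path + [(nr, nc)]))
    return float("inf"), []


if __name__ == "__main__":
    total, route = cheapest_route(GRID, (0, 0), (2, 2))
    print(f"cheapest weighted cost: {total}, route: {route}")

In practice the weights encode the trade-off between security, distance, cost and time mentioned above, and the field tests described in the abstract would serve to check that the computed cheapest route is actually passable.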


2020 ◽  
Vol 11 (1) ◽  
pp. 18-50 ◽  
Author(s):  
Maja BRKAN ◽  
Grégory BONNET

Understanding the causes of and correlations behind algorithmic decisions is currently one of the major challenges of computer science, addressed under the umbrella term "explainable AI (XAI)". Being able to explain an AI-based system may help to make algorithmic decisions more satisfying and acceptable, to better control and update AI-based systems in case of failure, to build more accurate models, and to discover new knowledge directly or indirectly. On the legal side, the question of whether the General Data Protection Regulation (GDPR) provides data subjects with a right to explanation in case of automated decision-making has equally been the subject of a heated doctrinal debate. While arguing that the right to explanation in the GDPR should result from an interpretative analysis of several GDPR provisions taken jointly, the authors move this debate forward by discussing the technical and legal feasibility of explaining algorithmic decisions. Legal limits, in particular the secrecy of algorithms, as well as technical obstacles could potentially obstruct the practical implementation of this right. By adopting an interdisciplinary approach, the authors explore not only whether it is possible to translate the EU legal requirements for an explanation into actual machine learning decision-making, but also whether those limitations can shape the way the legal right is used in practice.


2021 ◽  
Vol 13 (4) ◽  
pp. 1948
Author(s):  
Qiaoning Zhang ◽  
Xi Jessie Yang ◽  
Lionel P. Robert

Automated vehicles (AVs) have the potential to benefit our society. Providing explanations is one approach to facilitating trust in AVs by decreasing uncertainty about automated decision-making. However, it is not clear whether explanations are equally beneficial for drivers across age groups in terms of trust and anxiety. To examine this, we conducted a mixed-design experiment with 40 participants divided into three age groups (younger, middle-aged, and older). Participants were presented with (1) no explanation, (2) an explanation given before the AV took action, (3) an explanation given after the AV took action, or (4) an explanation along with a request for permission to take action. Results highlight both commonalities and differences between age groups. These results have important implications for designing AV explanations and promoting trust.

