Approaching the human in the loop – legal perspectives on hybrid human/algorithmic decision-making in three contexts

Author(s):
Therese Enarsson
Lena Enqvist
Markus Naarttijärvi

Author(s):
Hartwig Steusloff
Michael Decker

Extremely complex systems such as the smart grid or autonomous cars must meet society's high expectations for safe operation. The human designer or operator becomes a "system component" as soon as responsible decision making is needed, and tacit knowledge and other human properties are of crucial relevance for situation-dependent decisions. The uniform modeling of technical systems and humans benefits from ethical reflection. In this chapter, we describe human action with technical means and call, on the one hand, for a comprehensive multidisciplinary technology assessment that produces supporting knowledge and methods for technical and societal decision making. On the other hand, and this is our focus, we propose a system life cycle approach that integrates the human in the loop, and we argue that it can be worthwhile to describe humans in a technical way in order to implement human decision making by means of the use case method. Ethical reflection, and even ethically based technical decision making, can support the effective control of convergent technology systems.
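To make the proposal concrete, the following is a minimal sketch, in Python, of what describing the human "in a technical way" could look like: automated controllers and human operators implement the same decision interface, so a use case can invoke either as a system component. All names here (Situation, DecisionComponent, HumanOperator) are hypothetical illustrations, not the chapter's actual use case method.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class Situation:
    sensor_data: dict
    risk_level: float  # 0.0 (safe) .. 1.0 (critical); hypothetical scale

@dataclass
class Decision:
    action: str
    rationale: str

class DecisionComponent(ABC):
    """Uniform interface: automated controllers and human operators are
    both modeled as system components that turn a situation into a
    responsible decision."""
    @abstractmethod
    def decide(self, situation: Situation) -> Decision: ...

class AutomatedController(DecisionComponent):
    def decide(self, situation: Situation) -> Decision:
        action = "continue" if situation.risk_level < 0.5 else "escalate"
        return Decision(action, "rule-based threshold on risk level")

class HumanOperator(DecisionComponent):
    """Stands in for the human in the loop: invoked exactly where the
    use case requires situation-dependent, responsible judgment."""
    def decide(self, situation: Situation) -> Decision:
        answer = input(f"Risk {situation.risk_level:.2f} -- your action? ")
        return Decision(answer.strip(), "human judgment (tacit knowledge)")

def run_use_case(component: DecisionComponent, situation: Situation) -> Decision:
    # The use case treats both kinds of components identically.
    return component.decide(situation)
```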


2014
Vol. 40 (2)
pp. 154-166
Author(s):
Michael A. Schumann
Doron Drusinsky
James B. Michael
Duminda Wijesekera

Author(s):  
Riikka Koulu

This article is an examination of human oversight in EU policy for controlling algorithmic systems in automated legal decision making. Despite the shortcomings of human control over complex technical systems, human oversight is advocated as a solution against the risks of increasing reliance on algorithmic tools. For law, human oversight provides an attractive, easily implementable and observable procedural safeguard. However, without awareness of its inherent limitations, human oversight is in danger of becoming a value in itself, an empty procedural shell used as a stand-in justification for algorithmisation that fails to protect fundamental rights. By complementing socio-legal analysis with Science and Technology Studies, critical algorithm studies, organisation studies and human-computer interaction research, the author explores the importance of keeping the human in the loop and asks what the human element at the core of legal decision making is. Algorithmisation makes visible how law conceptualises decision making through human actors; personalises legal decision making through the decision-maker's discretionary power, which provides proportionality and common sense; prevents gross miscarriages of justice; and establishes the human encounter deemed essential for the feeling of being heard. The analysis demonstrates the necessary human element embedded in legal decision making, against which the meaningfulness of human oversight needs to be examined.


Author(s):
Tathagata Chakraborti
Sarath Sreedharan
Subbarao Kambhampati

In this paper, we provide a comprehensive outline of the different threads of work in Explainable AI Planning (XAIP), which has emerged as a focus area in the last couple of years, and contrast them with earlier efforts in the field in terms of techniques, target users, and delivery mechanisms. We hope that the survey will offer new researchers in automated planning guidance on the role of explanations in the effective design of human-in-the-loop systems, and give established researchers some perspective on the evolution of the exciting world of explainable planning.


2008
pp. 856-874
Author(s):
Jonathan P. Caulkins
Erica Layne Morrison
Timothy Weidemann

Spreadsheets are both ubiquitous and error-prone, but there is little evidence concerning whether spreadsheet errors frequently lead to bad decisions. We interviewed forty-five executives and senior managers/analysts in the private, public, and non-profit sectors about their experiences with spreadsheet errors and quality control. Almost all reported that spreadsheet errors are common. Most could cite instances in which errors directly led to losses or bad decisions, but opinions differed as to whether the consequences of spreadsheet errors are severe. Quality control procedures were in most cases informal. A significant minority of respondents believed such ad hoc processes are sufficient because the "human in the loop" can detect any gross errors. Others thought more formal spreadsheet quality control could be beneficial.
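As an illustration of the kind of formal quality control most respondents lacked, here is a minimal sketch of one automated cross-check: recompute a column total and compare it with the cell that claims to hold it. The file name, cell ranges, and helper name are hypothetical, and the openpyxl library is assumed to be available.

```python
from openpyxl import load_workbook  # assumes: pip install openpyxl

def check_total(path: str, data_range: str, total_cell: str,
                tol: float = 1e-9) -> bool:
    """Flag a mismatch between a stated total and the recomputed sum."""
    wb = load_workbook(path, data_only=True)  # read cached formula results
    ws = wb.active
    recomputed = sum(
        cell.value
        for row in ws[data_range] for cell in row
        if isinstance(cell.value, (int, float))
    )
    stated = ws[total_cell].value or 0.0
    if abs(recomputed - stated) > tol:
        print(f"MISMATCH: {total_cell} says {stated}, "
              f"but {data_range} sums to {recomputed}")
        return False
    return True

if __name__ == "__main__":
    # Hypothetical workbook: B2:B9 holds line items, B10 their total.
    check_total("budget.xlsx", "B2:B9", "B10")
```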


Author(s):
Kawa Nazemi
Dirk Burkhardt
Alexander Kock

The awareness of emerging trends is essential for strategic decision making, because technological trends can affect a firm's competitiveness and market position. The rise of artificial intelligence methods makes it possible to gather new insights and may support these decision-making processes. However, it is essential to keep the human in the loop of these complex analytical tasks, which often lack an appropriate interaction design. Including interactive designs tailored to technology and innovation management is therefore essential for successfully analyzing emerging trends and using this information for strategic decision making. A combination of information visualization, trend mining, and interaction design can support human users in exploring, detecting, and identifying such trends. This paper enhances and extends a previously published approach for integrating, enriching, mining, analyzing, identifying, and visualizing emerging trends for technology and innovation management. We introduce a novel interaction design, built on the main ideas of technology and innovation management, that enables a more appropriate interaction approach for technology foresight and innovation detection.
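The paper does not spell out its mining step here, but the basic idea behind detecting an emerging trend can be sketched as follows: flag terms whose relative frequency grows sharply between an earlier and a more recent window of documents. The function below is a hypothetical illustration of that idea, not the authors' pipeline.

```python
from collections import Counter

def emerging_terms(old_docs, new_docs, min_growth=2.0, min_count=5):
    """Return (term, growth) pairs whose relative frequency at least
    doubled (by default) from the old window to the new one."""
    old = Counter(w for d in old_docs for w in d.lower().split())
    new = Counter(w for d in new_docs for w in d.lower().split())
    old_total = sum(old.values()) or 1
    new_total = sum(new.values()) or 1
    trends = {}
    for term, count in new.items():
        if count < min_count:  # ignore rare terms
            continue
        old_rate = old.get(term, 0) / old_total
        new_rate = count / new_total
        growth = new_rate / old_rate if old_rate else float("inf")
        if growth >= min_growth:
            trends[term] = growth
    return sorted(trends.items(), key=lambda kv: kv[1], reverse=True)
```

In a full system, a ranking like this would feed the interactive visualization, where the human analyst explores and filters the candidate trends.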


Author(s):
Tathagata Chakraborti
Kshitij P. Fadnis
Kartik Talamadupula
Mishal Dholakia
Biplav Srivastava
...

In this demonstration, we report on the visualization capabilities of an Explainable AI Planning (XAIP) agent that can support human-in-the-loop decision making. Imposing transparency and explainability requirements on such agents is crucial for establishing human trust and common ground with an end-to-end automated planning system, and visualizing the agent's internal decision-making processes is a key step towards achieving this. This may include externalizing the "brain" of the agent: from its sensory inputs to the progressively higher-order decisions it makes to drive its planning components. We demonstrate these functionalities in the context of a smart assistant in the Cognitive Environments Laboratory at IBM's T.J. Watson Research Center.
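One plausible way to externalize the agent's "brain" as described is to have each processing stage append a structured event to a trace that a visualization front end can render. The DecisionTrace class and the stage names below are hypothetical, not IBM's implementation.

```python
import json
import time

class DecisionTrace:
    """Collects one event per processing stage, from sensory input up to
    the planner, so a front end can visualize the decision pipeline."""
    def __init__(self):
        self.events = []

    def record(self, stage: str, payload: dict) -> None:
        self.events.append({"t": time.time(), "stage": stage,
                            "payload": payload})

    def to_json(self) -> str:
        return json.dumps(self.events, indent=2)

trace = DecisionTrace()
trace.record("sensor", {"speech": "schedule a meeting at 3 pm"})
trace.record("intent", {"goal": "create_calendar_event"})
trace.record("plan", {"steps": ["find_free_slot", "book_room", "notify"]})
print(trace.to_json())  # what the visualization layer would consume
```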


Author(s):
Jayde M. King
Yolanda Ortiz
Thomas Guinn
John Lanicci
Beth L. Blickensderfer
...

The General Aviation (GA) community accounts for the majority of weather-related aviation accidents and incidents. Interpreting and understanding weather products is crucial to hazardous weather avoidance, and previous studies have indicated that improving the usability of weather products can improve pilot decision making. The Aviation Weather Center offers two broad types of graphical weather products for assessing icing, turbulence, and flight category: traditional human-in-the-loop products (G-AIRMETs Ice, Tango, and Sierra) and fully automated products (CIP/FIP, GTG, and CVA). This study assessed and compared pilots' understanding of the fully automated products against the human-in-the-loop products. Participants (n = 131) completed a set of weather product interpretation questions. A series of mixed ANOVAs were conducted to analyze the effects of pilot certificate and/or rating (Student, Private, Private with Instrument, Commercial with Instrument) and product generation (traditional vs. automated) on product interpretation score. Results indicated that, regardless of product generation, pilots performed similarly on the icing and ceiling/visibility products, but performed significantly better with the new fully automated turbulence product (GTG) than with the traditional turbulence product (AIRMET Tango). Usability and training implications are discussed.
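For readers who want to reproduce the design of the analysis, the sketch below runs a mixed ANOVA with certificate/rating as the between-subjects factor and product generation as the within-subjects factor, using the pingouin library on synthetic scores; the data are made up for illustration and are not the study's results.

```python
import numpy as np
import pandas as pd
import pingouin as pg  # assumes: pip install pingouin

rng = np.random.default_rng(0)
rows, sid = [], 0
for cert in ["Student", "Private", "Instrument", "Commercial"]:
    for _ in range(3):  # a few illustrative pilots per certificate level
        sid += 1
        base = rng.uniform(0.60, 0.80)  # synthetic baseline score
        rows.append({"subject": sid, "certificate": cert,
                     "generation": "traditional", "score": base})
        rows.append({"subject": sid, "certificate": cert,
                     "generation": "automated",
                     "score": base + rng.normal(0.05, 0.02)})
data = pd.DataFrame(rows)

# Mixed design: 'generation' varies within subjects,
# 'certificate' varies between subjects.
aov = pg.mixed_anova(data=data, dv="score", within="generation",
                     subject="subject", between="certificate")
print(aov.round(3))
```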


2021
Author(s):
Zohreh Shams
Botty Dimanov
Sumaiyah Kola
Nikola Simidjievski
Helena Andres Terre
...

Deep learning models are receiving increasing attention in clinical decision making; however, their lack of interpretability and explainability impedes their deployment in day-to-day clinical practice. We propose REM, an interpretable and explainable methodology for extracting rules from deep neural networks and combining them with other data-driven and knowledge-driven rules. This allows machine learning and reasoning to be integrated for investigating applied and basic biological research questions. We evaluate the utility of REM on the predictive tasks of classifying histological and immunohistochemical breast cancer subtypes from genotype and phenotype data. We demonstrate that REM efficiently extracts accurate, comprehensible, and biologically relevant rulesets from deep neural networks that can be readily integrated with rulesets obtained from tree-based approaches. REM provides explanation facilities for predictions and enables clinicians to validate and calibrate the extracted rulesets against their domain knowledge. With these functionalities, REM caters for a novel and direct human-in-the-loop approach to clinical decision making.
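REM's actual extraction algorithm is not reproduced here; the sketch below shows only the general surrogate idea of reading if-then rules off a shallow decision tree fitted to a neural network's predictions, using scikit-learn's bundled breast cancer diagnosis data rather than the genotype/phenotype subtype data the paper uses.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The "black box" whose behavior we want rules for.
net = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
).fit(X_tr, y_tr)

# Surrogate: mimic the network's predictions, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_tr, net.predict(X_tr))

# Fidelity: how often the extracted rules agree with the network.
fidelity = surrogate.score(X_te, net.predict(X_te))
print(f"Fidelity of rules to network: {fidelity:.2f}")
print(export_text(surrogate,
                  feature_names=list(load_breast_cancer().feature_names)))
```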

