Algorithmic Decision-Making and the Control Problem

2019 ◽  
Vol 29 (4) ◽  
pp. 555-578 ◽  
Author(s):  
John Zerilli ◽  
Alistair Knott ◽  
James Maclaurin ◽  
Colin Gavaghan

Abstract: The danger of human operators devolving responsibility to machines and failing to detect cases where they fail has been recognised for many years by industrial psychologists and engineers studying the human operators of complex machines. We call it “the control problem”, understood as the tendency of the human within a human–machine control loop to become complacent, over-reliant or unduly diffident when faced with the outputs of a reliable autonomous system. While the control problem has been investigated for some time, up to this point its manifestation in machine learning contexts has not received serious attention. This paper aims to fill that gap. We argue that, except in certain special circumstances, algorithmic decision tools should not be used in high-stakes or safety-critical decisions unless the systems concerned are significantly “better than human” in the relevant domain or subdomain of decision-making. More concretely, we recommend three strategies to address the control problem, the most promising of which involves a complementary (and potentially dynamic) coupling between highly proficient algorithmic tools and human agents working alongside one another. We also identify six key principles which all such human–machine systems should reflect in their design. These can serve as a framework both for assessing the viability of any such human–machine system and for guiding the design and implementation of such systems generally.
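As a hedged illustration of the recommended strategy (not an implementation from the paper itself), the sketch below shows one way a “complementary coupling” could be operationalized: the algorithmic tool decides only in subdomains where it is demonstrably better than human and only when its per-case confidence clears a floor; everything else is routed to the human with the tool in an advisory role. The class name, thresholds, and accuracy figures are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class RoutingPolicy:
    tool_accuracy: float      # validated accuracy of the tool in this subdomain (assumed figure)
    human_accuracy: float     # benchmarked human accuracy in the same subdomain (assumed figure)
    confidence_floor: float   # minimum per-case confidence required to automate (assumed threshold)

    def route(self, tool_confidence: float) -> str:
        """Route a single case either to the tool or to the human agent."""
        better_than_human = self.tool_accuracy > self.human_accuracy
        if better_than_human and tool_confidence >= self.confidence_floor:
            return "tool decides, human monitors"
        return "human decides, tool advises"

policy = RoutingPolicy(tool_accuracy=0.93, human_accuracy=0.88, confidence_floor=0.90)
print(policy.route(tool_confidence=0.95))   # -> tool decides, human monitors
print(policy.route(tool_confidence=0.70))   # -> human decides, tool advises
```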

Author(s):  
Jacob D. Oury ◽  
Frank E. Ritter

Abstract: The foundational design philosophy of user-centered design (UCD) offers an ideal approach for systems engineers, programmers, designers, and any other stakeholder involved with the design of high-stakes systems with human operators. Furthermore, UCD, as presented here, is tailor-made to meet the unique needs of critical human–machine systems such as air traffic control towers, 911 call centers, or NASA’s Mission Control Center. Whenever the operator is a mission-critical component of the system, stakeholders must be able to make informed decisions during the design process, and this book provides the tools necessary to make those decisions.


Author(s):  
Jacob D. Oury ◽  
Frank E. Ritter

Abstract: This chapter presents a high-level overview of how designers of complex systems can address risks to project success associated with operator performance and user-centered design. Operation centers for remote, autonomous systems rely on an interconnected process involving complex technological systems and human operators. Designers should account for issues at possible points of failure, including the human operators themselves. Compared to other system components, human operators can be error-prone, and designing for them requires different knowledge than designing engineering components does. Operators also typically exhibit a wider range of performance than other system components. We propose the Risk-Driven Incremental Commitment Model as the best guide to decision-making when designing interfaces for high-stakes systems. Designers working with relevant stakeholders must assess where to allocate scarce resources during system development. By knowing the technology, users, and tasks for the proposed system, the designers can make informed decisions to reduce the risk of system failure. This chapter introduces key concepts for informed decision-making when designing operation center systems, presents an example system to ground the material, and provides several broadly applicable design guidelines that support the development of user-centered systems in operation centers.


2021 ◽  
Vol 12 (1) ◽  
pp. 16
Author(s):  
Michael Peeters ◽  
M Kenneth Cor ◽  
Sai Boddu ◽  
Jerry Nesamony

Description of the Problem: Reliability is critical validation evidence on which to base high-stakes decision-making. Often, a single exam in a didactic course is not acceptably reliable on its own. But how much might multiple exams add when combined? The Innovation: To improve validation evidence for high-stakes decision-making, Generalizability Theory (G-Theory) can combine the reliabilities of multiple exams into one composite reliability (using the G_String IV software). Further, G-Theory decision studies can illustrate how course-grade reliability changes with the number of exams and exam items. Critical Analysis: 101 first-year PharmD students took two midterm exams and one final exam in a pharmaceutics course. Individually, Exam 1 had 50 MCQs (KR-20 = 0.69), Exam 2 had 43 MCQs (KR-20 = 0.65), and Exam 3 had 67 MCQs (KR-20 = 0.67). After combining exam occasions using G-Theory, the composite reliability for overall course grades was 0.71, better than any exam alone. Remarkably, increasing the number of exam occasions reduced the number of items needed per exam, and the number of items needed across all exams, to reach an acceptable composite reliability. Acceptable reliability could be achieved with different combinations of the number of MCQs on each exam and the number of exam occasions. Implications: G-Theory provided critical reliability validation evidence for high-stakes decision-making. Final course grades appeared quite reliable after combining multiple course exams, though this reliability could and should be improved. Notably, more exam occasions allowed fewer items per exam and fewer items across all exams. Thus, one added benefit of more exam occasions is that educators need to develop fewer items per exam and fewer items overall.
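To make the composite-reliability idea concrete, here is a minimal sketch using Mosier’s (1943) formula for the reliability of a weighted composite, plus a Spearman–Brown projection for changing test length. Only the three KR-20 values come from the abstract; the score variances, covariances, and weights are invented placeholders, and this is not the G_String IV variance-component analysis the authors actually ran.

```python
import numpy as np

def composite_reliability(rel, var, cov, w):
    """Reliability of a weighted composite (Mosier, 1943).
    rel: per-exam reliabilities; var: per-exam score variances;
    cov: covariance matrix of exam scores; w: exam weights."""
    rel, var, w = map(np.asarray, (rel, var, w))
    comp_var = w @ cov @ w                        # variance of the weighted total score
    err_var = np.sum(w**2 * var * (1 - rel))      # summed weighted error variance
    return 1 - err_var / comp_var

def spearman_brown(rel, length_factor):
    """Projected reliability if the number of items is multiplied by length_factor."""
    return length_factor * rel / (1 + (length_factor - 1) * rel)

rel = [0.69, 0.65, 0.67]                          # KR-20 values from the abstract
var = [25.0, 22.0, 30.0]                          # placeholder score variances
cov = np.array([[25.0, 12.0, 14.0],
                [12.0, 22.0, 13.0],
                [14.0, 13.0, 30.0]])              # placeholder covariances
w = [1.0, 1.0, 1.0]                               # equal weighting (assumed)

print(round(composite_reliability(rel, var, cov, w), 2))  # composite exceeds any single exam
print(round(spearman_brown(0.69, 0.5), 2))                # halving the items lowers reliability
```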


2008 ◽  
Vol 63 (3) ◽  
pp. 607-608
Author(s):  
Csaba Pléh

Erős Ferenc, Lénárd Kata and Bókay Antal (eds.): Typus Budapestiensis. Tanulmányok a pszichoanalízis budapesti iskolájának történetéről és hatásáról. Thalassa, Budapest, 2008, 447 pages
Hargittai István: Doktor DNS. Őszinte beszélgetések James D. Watsonnal. Vince Kiadó, Budapest, 2008, 223 pages
Kutrovátz Gábor, Láng Benedek and Zemplén Gábor: A tudomány határa. Typotex, Budapest, 2008, 376 pages
Engel, C. and Singer, W. (eds): Better than conscious? Decision making, the human mind, and implications for institutions. MIT Press, Cambridge, 2008, xiv + 449 pages
Kondor, Zsuzsanna: Embedded thinking. Multimedia and the new rationality. Peter Lang, Frankfurt am Main, 2008, xi + 169 pages
Síklaki István (ed.): Szóbeli befolyásolás. I–II. Typotex, Budapest,


2020 ◽  
Vol 13 (5) ◽  
pp. 884-892
Author(s):  
Sartaj Ahmad ◽  
Ashutosh Gupta ◽  
Neeraj Kumar Gupta

Background: In recent times, people have come to love online shopping, but before buying they almost always want feedback or reviews. These reviews help customers decide whether to buy a product or use a service. In a country like India, this trend toward online shopping is growing very rapidly because awareness and use of the internet are increasing day by day. As a result, the number of customers and their reviews are also increasing, which creates the problem of how to read all the reviews manually. There should therefore be a computerized mechanism that gives customers a summary without their having to spend time reading the reviews. Besides the large number of reviews, another problem is that the reviews are unstructured. Objective: In this paper, we design, implement and compare two algorithms against a manual approach on cross-domain product reviews. Methods: A lexicon-based model is used, and different types of reviews are tested and analyzed to check the performance of the algorithms. Results: An algorithm based on overall opinions and one based on feature-level opinions were designed, implemented, applied and compared with the manual results; algorithm 2 performs better than algorithm 1 and comes close to the manual results. Conclusion: Algorithm 2 was found to perform better across the different product reviews and has still to be applied to other products’ reviews to extend its scope. Finally, it will help automate the existing manual process.
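As a rough illustration of what a lexicon-based review scorer does (not the authors’ algorithm 1 or 2, whose details are not given in the abstract), the sketch below sums word polarities from a small lexicon, flips the sign after a negator, and aggregates the results into a positive/negative summary. The lexicon, negator list, and example reviews are invented placeholders.

```python
# Tiny illustrative lexicon and negator list (placeholders, not the paper's resources).
LEXICON = {"good": 1, "great": 2, "excellent": 2, "love": 2,
           "bad": -1, "poor": -2, "terrible": -2, "waste": -2}
NEGATORS = {"not", "no", "never"}

def score_review(text: str) -> int:
    """Sum lexicon polarities over tokens; a negator flips the sign of the next token."""
    score, flip = 0, 1
    for tok in text.lower().split():
        tok = tok.strip(".,!?")
        if tok in NEGATORS:
            flip = -1
            continue
        if tok in LEXICON:
            score += flip * LEXICON[tok]
        flip = 1
    return score

reviews = ["Great battery, excellent screen.", "Not good, waste of money."]
summary = {"positive": sum(score_review(r) > 0 for r in reviews),
           "negative": sum(score_review(r) < 0 for r in reviews)}
print(summary)   # {'positive': 1, 'negative': 1}
```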


Author(s):  
Bahador Bahrami

Evidence for and against the idea that “two heads are better than one” is abundant. This chapter considers the contextual conditions and social norms that predict madness or wisdom of crowds to identify the adaptive value of collective decision-making beyond increased accuracy. Similarity of competence among members of a collective impacts collective accuracy, but interacting individuals often seem to operate under the assumption that they are equally competent even when direct evidence suggests the opposite and dyadic performance suffers. Cross-cultural data from Iran, China, and Denmark support this assumption of similarity (i.e., equality bias) as a sensible heuristic that works most of the time and simplifies social interaction. Crowds often trade off accuracy for other collective benefits such as diffusion of responsibility and reduction of regret. Consequently, two heads are sometimes better than one, but no one holds the collective accountable, not even for the most disastrous of outcomes.
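The dyadic cost of the equality bias can be pictured with a small simulation, sketched below under assumed parameters: two observers with unequal noise judge the same signal, and combining their estimates with equal weights (as the equality bias would have it) is compared against reliability-weighted combination and against the better observer acting alone.

```python
import numpy as np

rng = np.random.default_rng(0)
n, signal = 100_000, 1.0
sigma1, sigma2 = 1.0, 3.0                  # observer 2 is markedly less reliable (assumed values)

x1 = signal + rng.normal(0, sigma1, n)     # observer 1's internal estimates of the signal
x2 = signal + rng.normal(0, sigma2, n)     # observer 2's internal estimates of the signal

equal = (x1 + x2) / 2                              # equality-bias combination
w1, w2 = 1 / sigma1**2, 1 / sigma2**2
weighted = (w1 * x1 + w2 * x2) / (w1 + w2)         # reliability-weighted combination

for name, est in [("observer 1 alone", x1),
                  ("equal weights", equal),
                  ("reliability-weighted", weighted)]:
    print(f"{name:22s} accuracy: {np.mean(est > 0):.3f}")
# With unequal competence, equal weighting does worse than the better observer
# alone, echoing the dyadic cost of the equality bias described above.
```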


Synthese ◽  
2021 ◽  
Author(s):  
Ellen Fridland

Abstract: This paper provides an account of the strategic control involved in skilled action. When I discuss strategic control, I have in mind the practical goals, plans, and strategies that skilled agents use in order to specify, structure, and organize their skilled actions, which they have learned through practice. The idea is that skilled agents are better than novices not only at implementing the intentions that they have but also at forming the right intentions. More specifically, skilled agents are able to formulate and modify, adjust and adapt their practical intentions in ways that are appropriate, effective, and flexible given their overall goals. Further, to specify the kind of action plans that are involved in strategic control, I’ll rely on empirical evidence concerning mental practice and mental imagery from sports psychology as well as evidence highlighting the systematic differences in the cognitive representations of skills between experts and non-experts. I’ll claim that, together, this evidence suggests that the intentions that structure skilled actions are practical and not theoretical, that is, that they are perceptual and motor and not abstract, amodal, or linguistic. Importantly, despite their grounded nature, these plans are still personal-level, deliberate, rational states. That is, the practical intentions used to specify and structure skilled actions are best conceived of as higher-order, motor-modal structures, which can be manipulated and used by the agent for the purpose of reasoning, deliberation, decision-making and, of course, the actual online structuring and organizing of action.


Author(s):  
Katherine Labonté ◽  
Daniel Lafond ◽  
Aren Hunter ◽  
Heather F. Neyedli ◽  
Sébastien Tremblay

The Cognitive Shadow is a prototype tool intended to support decision making by autonomously modeling human operators’ response pattern and providing online notifications to the operators about the decision they are expected to make in new situations. Since the system can be configured either in a reactive “shadowing” or a proactive “recommendation” mode, this study aimed to determine its most effective mode in terms of human and model accuracy, workload, and trust. Subjects participated in an aircraft threat evaluation simulation without decision support or while using either mode of the Cognitive Shadow. Whereas the recommendation mode had no advantage over the control condition, the shadowing mode led to higher human and model accuracy. These benefits were maintained even when the tool was unexpectedly removed. Neither mode influenced workload, and the initial lower trust rating in the shadowing mode faded quickly, making it the best overall configuration for the cognitive assistant.
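The difference between the two configurations can be pictured with a short control-flow sketch; this is illustrative only and not the actual Cognitive Shadow software. In the proactive recommendation mode the operator sees the model’s suggestion before committing, while in the reactive shadowing mode the operator decides unaided and a notification fires only when that decision disagrees with the model.

```python
from typing import Callable, Optional

def assist(mode: str,
           model_predict: Callable[[dict], str],
           operator_decide: Callable[[dict, Optional[str]], str],
           situation: dict) -> str:
    """Return the operator's final decision under either assistance mode."""
    prediction = model_predict(situation)
    if mode == "recommendation":
        # Proactive: the operator sees the model's suggestion before deciding.
        return operator_decide(situation, prediction)
    # Shadowing: the operator decides unaided; a notification fires on mismatch.
    decision = operator_decide(situation, None)
    if decision != prediction:
        print(f"notification: model expected '{prediction}', operator chose '{decision}'")
    return decision

# Toy usage with placeholder model and operator policies (purely illustrative).
model = lambda s: "hostile" if s["speed"] > 400 else "neutral"
operator = lambda s, hint: hint or ("hostile" if s["altitude"] < 1000 else "neutral")
print(assist("shadowing", model, operator, {"speed": 450, "altitude": 9000}))
```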


2017 ◽  
Vol 2017 ◽  
pp. 1-12 ◽  
Author(s):  
D. Santiago ◽  
E. Slawiñski ◽  
V. Mut

This paper analyzes the stability of a trilateral teleoperation system of a mobile robot. This type of system is nonlinear, time-varying, and delayed, and includes a master–slave kinematic dissimilarity. To close the control loop, three P+d controllers are used under a master position/slave velocity strategy. The stability analysis is based on Lyapunov–Krasovskii theory, where a functional is proposed and analyzed to obtain conditions on the control parameters that assure stable behavior, keeping the synchronism errors bounded. Finally, the theoretical result is verified in practice by means of a simple test, where two human operators collaboratively and simultaneously drive a 3D simulator of a mobile robot to achieve an established task in a remote shared environment.
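For intuition about P+d (proportional plus damping) coupling under delay, the following is a deliberately simplified 1-DOF numerical sketch with invented gains and a constant delay; it uses a symmetric position-position coupling for brevity rather than the paper’s trilateral, master position/slave velocity scheme, and it is not the authors’ controller or stability proof.

```python
import numpy as np

dt, T, delay_steps = 0.001, 5.0, 200           # 0.2 s delay, assumed for illustration
n = int(T / dt)
qm, vm = np.zeros(n), np.zeros(n)              # master position / velocity
qs, vs = np.zeros(n), np.zeros(n)              # slave position / velocity
kp, kd, m = 5.0, 2.0, 1.0                      # assumed P gain, damping injection, inertia

for k in range(1, n):
    d = max(k - delay_steps, 0)                # index of the delayed remote sample
    f_h = 1.0                                  # constant force applied by the human operator
    # Each side feels a proportional pull toward the delayed remote position
    # plus local damping (the "P+d" structure).
    f_m = kp * (qs[d] - qm[k - 1]) - kd * vm[k - 1]
    f_s = kp * (qm[d] - qs[k - 1]) - kd * vs[k - 1]
    vm[k] = vm[k - 1] + dt * (f_h + f_m) / m   # Euler integration of the master dynamics
    qm[k] = qm[k - 1] + dt * vm[k]
    vs[k] = vs[k - 1] + dt * f_s / m           # Euler integration of the slave dynamics
    qs[k] = qs[k - 1] + dt * vs[k]

print(f"final synchronism error: {abs(qm[-1] - qs[-1]):.3f}")
```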

