Understanding and Avoiding AI Failures: A Practical Guide

Philosophies ◽  
2021 ◽  
Vol 6 (3) ◽  
pp. 53
Author(s):  
Robert Williams ◽  
Roman Yampolskiy

As AI technologies increase in capability and ubiquity, AI accidents are becoming more common. Based on normal accident theory, high reliability theory, and open systems theory, we create a framework for understanding the risks associated with AI applications. This framework is designed to direct attention to pertinent system properties without requiring unwieldy amounts of accuracy. In addition, we use AI safety principles to quantify the unique risks of increased intelligence and human-like qualities in AI. Together, these two fields give a more complete picture of the risks of contemporary AI. By focusing on system properties near accidents instead of seeking a root cause of accidents, we identify where attention should be paid to safety for current generation AI systems.
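For readers unfamiliar with the normal-accident-theory vocabulary this abstract draws on, the following minimal Python sketch shows one hypothetical way to place a system along Perrow's two axes of coupling and interactive complexity. The SystemProfile fields, numeric scores, and threshold are illustrative assumptions, not the authors' framework.

```python
# Illustrative toy only: a Perrow-style 2x2 placement of systems along
# tight/loose coupling and linear/complex interaction axes. The scoring
# scheme and the 0.5 threshold are hypothetical assumptions.

from dataclasses import dataclass


@dataclass
class SystemProfile:
    name: str
    coupling: float      # 0.0 = loosely coupled, 1.0 = tightly coupled
    complexity: float    # 0.0 = linear interactions, 1.0 = complex interactions


def perrow_quadrant(s: SystemProfile, threshold: float = 0.5) -> str:
    """Place a system in Perrow's coupling/complexity quadrants.

    Normal accident theory expects system accidents to concentrate in the
    tightly coupled, interactively complex quadrant.
    """
    tight = s.coupling >= threshold
    complex_ = s.complexity >= threshold
    if tight and complex_:
        return "tight coupling / complex interactions (normal-accident prone)"
    if tight:
        return "tight coupling / linear interactions"
    if complex_:
        return "loose coupling / complex interactions"
    return "loose coupling / linear interactions"


if __name__ == "__main__":
    for system in [
        SystemProfile("spreadsheet macro", coupling=0.2, complexity=0.1),
        SystemProfile("ML pipeline with feedback loops", coupling=0.8, complexity=0.9),
    ]:
        print(f"{system.name}: {perrow_quadrant(system)}")
```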

2009 ◽  
Vol 62 (9) ◽  
pp. 1357-1390 ◽  
Author(s):  
Samir Shrivastava ◽  
Karan Sonpar ◽  
Federica Pazzaglia

We resolve the longstanding debate between Normal Accident Theory (NAT) and High-Reliability Theory (HRT) by introducing a temporal dimension. Specifically, we explain that the two theories appear to diverge because they look at the accident phenomenon at different points of time. We, however, note that the debate’s resolution does not address the non-falsifiability problem that both NAT and HRT suffer from. Applying insights from the open systems perspective, we reframe NAT in a manner that helps the theory to address its non-falsifiability problem and factor in the role of humans in accidents. Finally, arguing that open systems theory can account for the conclusions reached by NAT and HRT, we proceed to offer pointers for future research to theoretically and empirically develop an open systems view of accidents.


2014 ◽  
Author(s):  
J. Wattie

This study represents ongoing academic research into the folds of perception, organizational culture, and high reliability. In the shadow of persistent industrial failures, it is probable that problems with operational safety reside in abnormalities of culture. Such cultural apparitions regularly fuel failure in high-risk technologies, making innovation unreliable. As innovation grows, it is worth investigating further how resilience in the face of eternal socio-technical biases can be improved. Problem-solving approaches offer regressive ideas that increase the chances of deviation and the appearance of disasters. The assumption is that resilience can be improved in critical operations using High Reliability Theory (HRT). Moreover, HRT is more robust when the new constructive method of Appreciative Inquiry (AI) is applied. This early study shows that the existing safety culture in a highly reliable group is positively transformed by AI, making a more productive organization feasible. Research was conducted from the characteristic insider perspective: a small section of a highly reliable organization was sampled, and, using an ethnographic methodology, electronic surveys collected personal responses for discussion. While individual interviews proved difficult and the sample group was small, there was enough evidence to acknowledge the influence of positive revolution. The study had two major findings: (a) using AI methodology stimulates positive, resilient feelings in members, and (b) members readily used these positive experiences to envision a more productive organization. This study can potentially reduce the overemphasis on problem-solving methods to explain and change the human factors associated with failure; cultural factors are better studied and modified by positive influence. The study makes way for more persuasive academic discussion of resilience from constructivist perspectives. High reliability organizations are more sustainably designed on positive principles.


2016 ◽  
Vol 73 (6) ◽  
pp. 694-702 ◽  
Author(s):  
Stephen M. Shortell

This commentary highlights the key arguments and contributions of institutional theory, transaction cost economics (TCE) theory, high reliability theory, and organizational learning theory to understanding the development and evolution of Accountable Care Organizations (ACOs). Institutional theory and TCE theory primarily emphasize the external influences shaping ACOs, while high reliability theory and organizational learning theory underscore the internal factors influencing ACO performance. A framework based on Implementation Science is proposed to consider the multiple perspectives on ACOs and, in particular, their ability to innovate to achieve desired cost, quality, and population health goals.


2009 ◽  
Vol 62 (9) ◽  
pp. 1395-1398 ◽  
Author(s):  
Samir Shrivastava ◽  
Karan Sonpar ◽  
Federica Pazzaglia

In his brief commentary, Perrow raises four issues. First, he alludes to how the misuse of bureaucratic power could explain some accidents. Second, he reiterates that normal accidents occur owing to characteristics inherent in a system, and that such accidents are inevitable irrespective of whether high reliability practices are followed. Third, Perrow asserts that complexity and coupling are independent of time of operation; the time dimension's irrelevance, he claims, ought to be apparent from his analysis of normal accidents in systems such as the air transport and chemical industries (see Perrow, 1984). Fourth, Perrow implies that High Reliability Theory (HRT) cannot explain the sub-class of accidents that Normal Accident Theory (NAT) concerns itself with. He thus makes a case for retaining NAT alongside other theories and finds little value in our reconciliation; indeed, he finds the reconciliation inappropriate because we supposedly err in implicating time. We respond to the four issues in turn.

