high reliability theory
Recently Published Documents

TOTAL DOCUMENTS: 16 (FIVE YEARS: 3)
H-INDEX: 6 (FIVE YEARS: 0)

Philosophies, 2021, Vol 6 (3), pp. 53
Author(s): Robert Williams, Roman Yampolskiy

As AI technologies increase in capability and ubiquity, AI accidents are becoming more common. Based on normal accident theory, high reliability theory, and open systems theory, we create a framework for understanding the risks associated with AI applications. This framework is designed to direct attention to pertinent system properties without requiring unwieldy amounts of accuracy. In addition, we use AI safety principles to quantify the unique risks of increased intelligence and human-like qualities in AI. Together, these two fields give a more complete picture of the risks of contemporary AI. By focusing on system properties near accidents instead of seeking a root cause of accidents, we identify where attention should be paid to safety for current-generation AI systems.


Author(s): Manikam Pillay, Andrew Enya, Emmanuel Bannor Boateng

A growing body of peer-reviewed studies demonstrates the importance of high-reliability organisations and collective mindfulness in improving healthcare safety. However, limited attention has been devoted to developing a common set of characteristics, dimensions, indicators and instruments for measuring collective mindfulness. This can limit its operationalisation and the ability to benchmark. This protocol outlines the key procedures that will be used to conduct a scoping literature review, in order to summarise key definitions; identify theoretical underpinnings, dimensions, measures and instruments; and develop a theoretical model to advance research and practice. Specifically, a five-step process and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) will be used to search, screen and select literature published in five electronic databases. Keywords will include a combination of ‘high-reliability organisations’ and ‘high-reliability theory’ with ‘health care’, ‘patient safety’, ‘medical errors’, ‘medical mistakes’ and ‘medication error’. A double-blind process will be used for searching, screening and selection of abstracts and full articles, and inter-observer agreement will be assessed using Cohen’s kappa.


2017, Vol 19 (5), pp. 244-251
Author(s): Evonne T Curran

This outbreak column uses the Health Protection Scotland (HPS) Outbreak Process and Algorithm to examine and reflect on a published outbreak report. The report involved an extensively drug-resistant Acinetobacter baumannii outbreak in an oncology unit. High-reliability theory is then used to reflect on how the outbreak was managed and to consider how best to improve local outbreak prevention, preparedness, detection and management. The conclusion of this exercise is that if the possibility of an era of untreatable infections caused by antibiotic-resistant organisms is to be significantly postponed, Infection Prevention and Control Teams must improve their ability to get others to prevent cross-transmission in the absence of recognised risks.


2016, Vol 73 (6), pp. 694-702
Author(s): Stephen M. Shortell

This commentary highlights the key arguments and contributions of institutional theory, transaction cost economics (TCE) theory, high reliability theory, and organizational learning theory to understanding the development and evolution of Accountable Care Organizations (ACOs). Institutional theory and TCE theory primarily emphasize the external influences shaping ACOs, while high reliability theory and organizational learning theory underscore the internal factors influencing ACO performance. A framework based on Implementation Science is proposed to consider the multiple perspectives on ACOs and, in particular, their ability to innovate to achieve desired cost, quality, and population health goals.


2014
Author(s): J. Wattie

Abstract This study represents ongoing academic research into the folds of perception, organizational culture and high reliability. In the shadow of persistent industrial failures, it is probable that problems with operational safety reside in abnormalities of culture. Such cultural apparitions regularly fuel failure in high-risk technologies, making innovation rather unreliable. As innovation grows, it is worth the effort to investigate further how resilience in the face of eternal socio-technical biases can be improved. Problem-solving approaches offer regressive ideas that increase the chances of deviation and the appearance of disasters. The assumption is that resilience can be improved in critical operations using High Reliability Theory (HRT). Moreover, HRT is more robust when the new constructive method of Appreciative Inquiry (AI) is applied. This early study shows that existing safety culture in a highly reliable group is positively transformed by AI, making a more productive organization feasible. Research was conducted from the characteristic insider perspective. A small section of a highly reliable organization was sampled. Using ethnographic methodology, feedback from electronic surveying collected personal responses for discussion. While individual interviews proved difficult and the sample group was small, there was enough evidence to acknowledge the influence of positive revolution. This study had two major findings: (a) using AI methodology stimulates positive, resilient feelings in members, and (b) members readily used these positive experiences to envision a more productive organization. This study can potentially reduce overemphasis on problem-solving methods to explain and change the human factors associated with failure. Cultural factors are better studied and modified by positive influence. The study here makes way for more persuasive academic discussion on resilience from constructivist perspectives. High reliability organizations are more sustainably designed on positive principles.


2009, Vol 62 (9), pp. 1357-1390
Author(s): Samir Shrivastava, Karan Sonpar, Federica Pazzaglia

We resolve the longstanding debate between Normal Accident Theory (NAT) and High-Reliability Theory (HRT) by introducing a temporal dimension. Specifically, we explain that the two theories appear to diverge because they look at the accident phenomenon at different points in time. We note, however, that the debate’s resolution does not address the non-falsifiability problem that both NAT and HRT suffer from. Applying insights from the open systems perspective, we reframe NAT in a manner that helps the theory address its non-falsifiability problem and factor in the role of humans in accidents. Finally, arguing that open systems theory can account for the conclusions reached by NAT and HRT, we proceed to offer pointers for future research to theoretically and empirically develop an open systems view of accidents.

