Are Public Bureaucracies Supposed to Be High Reliability Organizations?

2020 ◽  
Vol 1 (1) ◽  
Author(s):  
Wolfgang Seibel

This article addresses the question of to what extent conventional high reliability organization theory and normal accident theory are applicable to public bureaucracy. Empirical evidence suggests that they are. Relevant cases include buildings and bridges that collapse because the relevant authorities failed to supervise the engineering adequately, infants who die at the hands of their own parents because child protection agencies misperceive or neglect the warning signs, serial killings that continue unchecked because police services fail to coordinate, and improper planning and risk assessment in the preparation of mass events such as soccer games or street parades. The basic argument is that conceptualizing distinct and differentiated causal mechanisms is useful for developing more fine-grained variants of both normal accident theory and high reliability organization theory, variants that take into account the standard pathologies of public bureaucracies and the inevitable trade-offs connected to their political embeddedness in democratic, rule-of-law-based systems, among them the tensions between responsiveness and responsibility and between goal attainment and system maintenance. This, the article argues, makes it possible to identify distinct points of intervention at which permissive conditions with the potential to trigger risk-generating human action can be neutralized, and at which the threshold separating risk-generating human action from actual disaster can be raised, making disastrous outcomes less probable.
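The intervention logic of that closing argument can be read as a simple probability chain. The sketch below is an editorial illustration only, not a model from the article, and every probability in it is hypothetical:

```python
# Editorial illustration only: a disaster requires permissive conditions,
# a risk-generating human action, and that action crossing the disaster
# threshold. All probabilities below are hypothetical.

def p_disaster(p_permissive: float,
               p_risky_action_given_permissive: float,
               p_disaster_given_risky_action: float) -> float:
    """Chain the three stages into an overall disaster probability."""
    return (p_permissive
            * p_risky_action_given_permissive
            * p_disaster_given_risky_action)

baseline = p_disaster(0.30, 0.20, 0.50)          # no intervention
neutralized = p_disaster(0.10, 0.20, 0.50)       # neutralize permissive conditions
raised_threshold = p_disaster(0.30, 0.20, 0.10)  # raise the disaster threshold

print(f"baseline:            {baseline:.3f}")
print(f"fewer permissive:    {neutralized:.3f}")
print(f"higher threshold:    {raised_threshold:.3f}")
```

Either lever, neutralizing permissive conditions or raising the threshold, lowers the product, which is the sense in which the article's distinct points of intervention make disastrous outcomes less probable.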

Author(s):  
Holly M. Smith

Consequentialists have long debated (as deontologists should) how to define an agent’s alternatives, given that (a) at any particular time an agent performs numerous “versions” of actions, (b) an agent may perform several independent co-temporal actions, and (c) an agent may perform sequences of actions. We need a robust theory of human action to provide an account of alternatives that avoids previously debated problems. After outlining Alvin Goldman’s action theory (which takes a fine-grained approach to act individuation) and showing that the agent’s alternatives must remain invariant across different normative theories, I address issue (a) by arguing that an alternative for an agent at a time is an entire “act tree” performable by her, rather than any individual act token. I argue further that both tokens and trees must possess moral properties, and I suggest principles governing how these are inherited among trees and tokens. These proposals open a path for future work addressing issues (b) and (c).
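As a rough editorial illustration of Goldman's fine-grained picture (the class names and the switch-flipping example here are a sketch, not drawn verbatim from Smith's article), an act tree can be represented as a basic act token that level-generates further tokens, with moral properties attachable both to individual tokens and, on Smith's proposal, to the tree as a whole:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ActToken:
    """A single act token under fine-grained act individuation."""
    description: str
    moral_properties: List[str] = field(default_factory=list)
    generated: List["ActToken"] = field(default_factory=list)  # level-generated act tokens

def tree_tokens(root: ActToken) -> List[ActToken]:
    """Collect every act token on the act tree rooted at a basic act."""
    tokens = [root]
    for child in root.generated:
        tokens.extend(tree_tokens(child))
    return tokens

# Hypothetical example: one basic bodily movement level-generates further tokens.
flip = ActToken("flipping the switch", moral_properties=["permissible"])
flip.generated = [
    ActToken("turning on the light"),
    ActToken("alerting the prowler", moral_properties=["harm-risking"]),
]
move = ActToken("moving one's finger", generated=[flip])

# On Smith's proposal, the agent's alternative is the whole act tree, not any single token.
print([t.description for t in tree_tokens(move)])
```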


Author(s):  
Michèle Rieth ◽  
Vera Hagemann

Abstract: Based on an examination of the field of air traffic control in Austria and Switzerland, this article in the journal Gruppe. Interaktion. Organisation. (GIO) provides an overview of automation-driven changes and the resulting new competence requirements for employees in high-responsibility settings. Existing task structures and work roles are changing fundamentally as automation increases, so that organizations face new challenges and new competence requirements for employees emerge. Drawing on 9 problem-centered interviews with air traffic controllers and 4 problem-centered interviews with pilots, the article presents the changes brought about by increasing automation and the resulting new competence requirements for employees in a high reliability organization. This organizational context has so far been largely neglected in the scientific debate on new competencies arising from automation. The results indicate that technology can relieve and support humans in high reliability organizations but cannot replace them. The human role becomes more passive, in the sense of a system monitor, which creates the risk of skill loss and reduces employees' own influence. At the same time, the demands employees face as a result of increasing automation appear to grow, which seems to stand in tension with this more passive role. The findings are discussed, and practical implications for competence management and work design are derived in order to minimize the identified restrictive working conditions.


Author(s):  
Christopher Nemeth ◽  
Richard Cook

System performance in healthcare pivots on the ability to match demand for care with the resources that are needed to provide it. High reliability is desirable in organizations that perform inherently hazardous, highly technical tasks. However, healthcare's high variability, diversity, partition between workers and managers, and production pressure make it difficult to employ essential aspects of high reliability organizations (HROs) such as redundancy and extensive training. A different approach is needed to understand the nature of healthcare systems and their ability to perform and survive under duress; in other words, to be resilient. The recent evolution of resilience engineering affords the opportunity to configure healthcare systems so that they are adaptable and can foresee challenges that threaten their mission. Information technology (IT) in particular can enable healthcare, as a service sector, to adapt successfully, as long as it is based on cognitive systems engineering approaches to achieve resilient performance.


Risk Analysis ◽  
2006 ◽  
Vol 26 (5) ◽  
pp. 1123-1138 ◽  
Author(s):  
Sue Cox ◽  
Bethan Jones ◽  
David Collinson

2017 ◽  
Author(s):  
Etienne P. LeBel ◽  
Derek Michael Berger ◽  
Lorne Campbell ◽  
Timothy Loving

Finkel, Eastwick, and Reis (2016; FER2016) argued that the post-2011 methodological reform movement has focused narrowly on replicability and neglected other essential goals of research. We agree that multiple scientific goals are essential, but argue that a more fine-grained language, conceptualization, and approach to replication is needed to accomplish these goals. Replication is the general empirical mechanism for testing and falsifying theory. Sufficiently methodologically similar replications, also known as direct replications, test the basic existence of phenomena and ensure that cumulative progress is possible a priori. In contrast, increasingly methodologically dissimilar replications, also known as conceptual replications, test the relevance of auxiliary hypotheses (e.g., manipulation and measurement issues, contextual factors) required to productively investigate validity and generalizability. Without prioritizing replicability, a field is not empirically falsifiable. We also disagree with FER2016’s position that “bigger samples are generally better, but … that very large samples could have the downside of commandeering resources that would have been better invested in other studies” (abstract). We identify problematic assumptions in FER2016’s modifications of our original research-economic model and present an improved model that quantifies when (and whether) it is reasonable to worry that increasing statistical power will engender trade-offs. Sufficiently powering studies (i.e., to >80% power) maximizes both research efficiency and confidence in the literature (research quality). Given that we agree with FER2016 on all key open science points, we are eager to see the accelerated, cumulative development of knowledge about social psychological phenomena that such a sufficiently transparent, powered, and falsifiable approach will generate.
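To make the power threshold concrete, here is a minimal power-analysis sketch for an independent-samples t-test using statsmodels; the effect size (d = 0.4) and alpha level are illustrative assumptions, not values taken from the commentary:

```python
# Minimal power-analysis sketch; effect size and alpha are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size per group needed for 80% power at d = 0.4, alpha = .05, two-sided.
n_per_group = analysis.solve_power(effect_size=0.4, alpha=0.05, power=0.80,
                                   alternative='two-sided')
print(f"n per group for 80% power: {n_per_group:.0f}")

# Achieved power of a smaller study with 50 participants per group.
power_small = analysis.power(effect_size=0.4, nobs1=50, alpha=0.05)
print(f"power with n = 50 per group: {power_small:.2f}")
```

The same calculation can be rerun with whatever effect size a given literature supports, which is the sense in which the improved research-economic model asks when larger samples stop paying for themselves.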

