Requisite Variety
Recently Published Documents

TOTAL DOCUMENTS: 99 (five years: 17)
H-INDEX: 14 (five years: 2)

Author(s): Mikhail V. Chester, Braden Allenby

Abstract: Infrastructure systems must change to match the growing complexity of the environments they operate in. Yet the models of governance and the core technologies they rely on are structured around assumptions of relative long-term stability that appear increasingly insufficient and even problematic. As the environments in which infrastructure systems function become more complex, those systems must adapt to develop a repertoire of responses sufficient for the increasing variety of conditions and challenges. Whereas in the past infrastructure leadership and system design have emphasized organizational strategies that primarily focus on exploitation (e.g., efficiency and production, amenable to conditions of stability), in the future they must create space for exploration: the innovation of what the organization is and does. They will need to build the ability to maintain themselves in the face of growing complexity by creating the knowledge, processes, and technologies necessary to engage environmental complexity. We refer to this capacity as infrastructure autopoiesis. In doing so, infrastructure organizations should focus on four key tenets. First, a shift to sustained adaptation – perpetual change in the face of destabilizing conditions often marked by uncertainty – and away from rigid processes and technologies is necessary. Second, infrastructure organizations should restructure their bureaucracies to distribute more resources and decision-making capacity horizontally, across the organization’s hierarchy. Third, they should build capacity for horizon scanning, the process of systematically searching the environment for opportunities and threats. Fourth, they should emphasize loose-fit design, the flexibility of assets to pivot function as the environment changes. The inability to engage with complexity can be expected to result in a decoupling between what our infrastructure systems can do and what we need them to do; autopoietic capabilities may help close this gap by creating the conditions for a sufficient repertoire to emerge.


2021, pp. 026839622098853
Author(s): Jacob L. Cybulski, Rens Scheepers

The field of data science emerged in recent years, building on advances in computational statistics, machine learning, artificial intelligence, and big data. Modern organizations are immersed in data and are turning toward data science to address a variety of business problems. While numerous complex problems in science have become solvable through data science, not all scientific solutions are equally applicable to business. Many data-intensive business problems are situated in complex socio-political and behavioral contexts that still elude commonly used scientific methods. To what extent can such problems be addressed through data science? Does data science have any inherent blind spots in this regard? What types of business problems are likely to be addressed by data science in the near future, which will not, and why? We develop a conceptual framework to inform the application of data science in business. The framework draws on an extensive review of data science literature across four domains: data, method, interfaces, and cognition. We draw on Ashby’s Law of Requisite Variety as a theoretical principle. We conclude that data-scientific advances across the four domains, in aggregate, could constitute requisite variety for particular types of business problems. This explains why such problems can be fully or only partially addressed, solved, or automated through data science. We distinguish between situations that can be improved due to cross-domain compensatory effects and problems where data science, at best, contributes merely to a better understanding of complex phenomena.
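
For readers new to the law invoked here, Ashby's Law of Requisite Variety has a standard information-theoretic form (this is the textbook formulation, not notation taken from the paper above):

    H(E) \ge H(D) - H(R) - K

where H(D) is the entropy (variety) of the disturbances a system faces, H(R) is the variety of responses its regulator can deploy, K is any passive buffering capacity, and H(E) is the residual uncertainty left in the essential variables. Read against the framework above, a business problem is fully addressable by data science only when the combined variety of the four domains (data, method, interfaces, cognition) matches the variety of the problem's context; where it falls short, only partial solution or improved understanding is possible.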


2021, pp. 009539972098543
Author(s): Ahmed S. Alojairi

This study examines conflict that can co-determine the performance of nonprofit organizations. Based on Ashby’s law of requisite variety, interorganizational conflict is defined in terms of a lack of fit between input variety and variety-handling capabilities. The calculated organizational interaction effectiveness (IE) ratio of 2.04 is used to determine the quality of interactions. “Flexibility” is the dominant category among helpful incidents (49.03%), whereas “Unreliability” is the dominant category among non-helpful incidents (45.67%). This major source of conflict commonly produces an imbalance between flexibility and reliability, manifested as a mismatch between input variety and variety-handling capabilities.
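
The abstract does not spell out how the IE ratio is computed. One plausible reading, offered purely as a hypothetical sketch (the function and the example counts below are illustrative, not taken from the study), is a ratio of helpful to non-helpful incident counts:

    def interaction_effectiveness(helpful: int, non_helpful: int) -> float:
        """Hypothetical IE ratio: helpful incidents per non-helpful incident.
        Values above 1.0 would indicate interactions that help more often
        than they hinder. Illustrative only; not the study's definition."""
        if non_helpful == 0:
            raise ValueError("IE ratio undefined without non-helpful incidents")
        return helpful / non_helpful

    # Illustrative counts chosen to reproduce the reported ratio of ~2.04.
    print(round(interaction_effectiveness(102, 50), 2))  # 2.04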


2021, Vol 21 (1), pp. 87-119
Author(s): Jeffrey Ford, Laurie Ford, Beth Polin

2020
Author(s): Ernie Chang, Kenneth A. Moselle, Ashlin Richardson

ABSTRACT: The agent-based model CovidSIMVL (github.com/ecsendmail/MultiverseContagion) is employed in this paper to delineate different network structures of transmission chains in simulated COVID-19 epidemics, where initial parameters are set to approximate spread from a single transmission source, and R0 ranges between 1.5 and 2.5.

The resulting Transmission Trees are characterized by breadth, depth, and the generations needed to reach a target of 50% infected from a starting population of 100, or by self-extinction prior to reaching that target. Metrics reflecting the efficiency of an epidemic relate closely to the topology of the trees.

It can be shown that the notion of superspreading individuals may be a statistical artefact of Transmission Tree growth, while superspreader events can be readily simulated with appropriate parameter settings. The potential use of contact tracing data to identify chain length and shared paths is explored as a measure of epidemic progression. This characterization of epidemics in terms of topological characteristics of Transmission Trees may complement equation-based models that work from rates of infection. By constructing measures of efficiency of spread based on Transmission Tree topology and distribution, rather than rates of infection over time, the agent-based approach may provide a method to characterize and project risks associated with collections of transmission events, most notably at relatively early epidemic stages, when rates are low and equation-based approaches are challenged in their capacity to describe or predict.

MOTIVATION – MODELS KEYED TO CONTEMPLATED DECISIONS

Outcomes are altered by changing the processes that determine them. If we wish to alter contagion-based spread of infection as reflected in curves that characterize changes in transmission rates over time, we must intervene at the level of the processes that are directly involved in preventing viral spread. If we are going to employ models to evaluate different candidate arrays of localized preventive policies, those models must be posed at the same level of granularity as the entities (people enacting processes) to which preventive measures will be applied. As well, the models must be able to represent the transmission-relevant dynamics of the systems to which policies could be applied. Further, the parameters that govern dynamics within the models must embody the actions that are prescribed or proscribed by the preventive measures that are contemplated. If all of those conditions are met, then at a formal or structural level, the models are conformant with the provisions of the Law of Requisite Variety [1], or the restated version of that law, the good regulator theorem [2].

On a more logistical or practical level, the models must yield summary measures that are responsive to changes in key parameters, highlight the dynamics, quantify outcomes associated with the dynamics, and communicate that information in a form that can be understood correctly by parties who are adjudicating on policy options. If the models meet formal/structural requirements regarding requisite variety, the parameters have a plausible interpretation in relationship to real-world situations, and the metrics do not overly distort the data contents that they summarize, then the models provide information that is directly relevant to decision-making processes. Models that meet these requirements will minimize the gap that separates models from decisions, a gap that will otherwise be filled by considerations other than the data used to create the models (for equation-based models) or the data generated by the simulations.

In this work, we present an agent-based model that targets the information requirements of decision-makers who are setting policy at a local level, or translating population-level directives for local entities and operations. We employ an agent-based modeling approach, which enables us to generate simulations that respond directly to the requirements of the good regulator theorem. Transmission events take place within a spatio-temporal frame of reference in this model, and rates are not conditioned by a reproduction rate (R0) that is specified a priori. Events are a function of movement and proximity. To summarize the dynamics and associated outcomes of simulated epidemics, we employ metrics reflecting the topological structure of transmission chains, and the distributions of those structures. These measures point directly to dynamic features of simulated outbreaks, they operationalize the “efficiency” construct, and they are responsive to changes in parameters that govern the dynamics of the simulations.
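
The tree metrics named above (breadth, depth, generations) are straightforward to compute. The following is a minimal sketch, not code from the CovidSIMVL repository, assuming a transmission tree given as (infector, infectee) pairs plus a known index case:

    from collections import defaultdict

    def tree_metrics(edges, root):
        """Compute depth, maximum breadth, and per-generation counts of a
        transmission tree from (infector, infectee) pairs and the index
        case `root`. Hypothetical helper, not CovidSIMVL code."""
        children = defaultdict(list)
        for infector, infectee in edges:
            children[infector].append(infectee)

        generations = []          # generations[g] = number infected at depth g
        frontier = [root]
        while frontier:
            generations.append(len(frontier))
            frontier = [c for node in frontier for c in children[node]]

        return {
            "depth": len(generations) - 1,   # longest chain of transmissions
            "breadth": max(generations),     # widest single generation
            "generations": generations,      # per-generation counts
            "total_infected": sum(generations),
        }

    # Example: index case A infects B and C; C infects D and E.
    print(tree_metrics([("A", "B"), ("A", "C"), ("C", "D"), ("C", "E")], "A"))
    # {'depth': 2, 'breadth': 2, 'generations': [1, 2, 2], 'total_infected': 5}

A "superspreader event" would appear here as one very wide generation (high breadth at a single depth), whereas an "efficient" epidemic reaches its infection target in few generations.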


Systems, 2020, Vol 8 (4), pp. 53
Author(s): Mick Ashby

This paper combines the good regulator theorem with the law of requisite variety and seven other requisites that are necessary and sufficient for a cybernetic regulator to be effective and ethical. The ethical regulator theorem provides a basis for systematically evaluating and improving the adequacy of existing or proposed designs for systems that make decisions that can have ethical consequences, regardless of whether the regulators are humans, machines, cyberanthropic hybrids, organizations, or government institutions. The theorem is used to define an ethical design process that has potentially far-reaching implications for society. A six-level framework is proposed for classifying cybernetic and superintelligent systems, which highlights the existence of a possibility-space bifurcation in our future timeline. The implementation of “super-ethical” systems is identified as an urgent imperative for humanity to avoid the danger that superintelligent machines might lead to a technological dystopia. It is proposed to define third-order cybernetics as the cybernetics of ethical systems. Concrete actions, a grand challenge, and a vision of a super-ethical society are proposed to help steer the future of the human race and our wonderful planet towards a realistically achievable minimum viable cyberanthropic utopia.
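
One way to picture the theorem's use as an audit instrument is a pass/fail checklist over the nine requisites. The sketch below is a toy illustration, not the paper's procedure; the requisite names and their split into "effectiveness" versus "ethics" groups are assumptions as commonly recounted for the theorem, and should be verified against the paper itself:

    # Toy audit of a regulator design against nine requisites.
    # Names and grouping are assumed, not quoted from the paper.
    EFFECTIVENESS_REQUISITES = [
        "truth", "variety", "predictability",
        "purpose", "intelligence", "influence",
    ]
    ETHICS_REQUISITES = ["ethics", "integrity", "transparency"]

    def audit(scores: dict[str, bool]) -> str:
        """Classify a design from per-requisite pass/fail scores."""
        effective = all(scores.get(r, False) for r in EFFECTIVENESS_REQUISITES)
        ethical = effective and all(scores.get(r, False) for r in ETHICS_REQUISITES)
        if ethical:
            return "effective and ethical"
        if effective:
            return "effective but not demonstrably ethical"
        return "inadequate as a regulator"

    all_pass = {r: True for r in EFFECTIVENESS_REQUISITES + ETHICS_REQUISITES}
    print(audit(all_pass))  # effective and ethical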


2020, pp. 017084062094455
Author(s): Konstantinos Poulis, Efthimios Poulis, Paul Jackson

Alignment of organizations with external imperatives is seen as a sine qua non of proper organizing and strategizing by many fit and complexity scholars. Any deviation from this management mantra engenders organizational decline and, ultimately, mortality. We put this axiomatic principle under empirical scrutiny and use the law of requisite variety as our organizing principle to do so. The law is an iconic cornerstone of this matching contingency logic and it has served to legitimize a wide range of fit decisions in, e.g., leadership, organizational learning and corporate governance. Inspired by organizational vignettes inhabiting antithetical complexity regimes, we introduce a novel concept, which we label ‘agentic misfit’. In this way, we deconstruct deterministic assumptions related to environmental fittingness, we challenge teleological orientations in the fit literature, and we flesh out the viability of non-matching human agency amid complexity.


2020, Vol 10 (13), pp. 4442
Author(s): Susana Suarez-Fernandez de Miranda, Francisco Aguayo-González, Jorge Salguero-Gómez, María Jesús Ávila-Gutiérrez

Engineering 4.0 environments are characterised by the digitisation, virtualisation, and connectivity of products, processes, and facilities composed of reconfigurable and adaptive socio-technical cyber-physical manufacturing systems (SCMS), in which Operator 4.0 works in real time in VUCA (volatile, uncertain, complex, and ambiguous) contexts and markets. This situation gives rise to the interest in developing a framework for the conception of SCMS that allows the integration of the human factor, management, training, and development of the competencies of Operator 4.0 as fundamental aspects of the aforementioned system. The present paper is focused on answering how to conceive the adaptive manufacturing systems of Industry 4.0 through the operation, growth, and development of human talent in VUCA contexts. With this objective, exploratory research is carried out, whose contribution is specified in a framework called Design for the Human Factor in Industry 4.0 (DfHFinI4.0). Among the conceptual frameworks employed therein, the connectivist paradigm, Ashby’s law of requisite variety, and Vygotsky’s activity theory are taken into consideration, in order to enable the affective-cognitive and timeless integration of the human factor within the SCMS. DfHFinI4.0 can be integrated into the life cycle engineering of the enterprise reference architectures, thereby obtaining manufacturing systems for Industry 4.0 focused on the human factor. The suggested framework is illustrated as a case study for the Purdue Enterprise Reference Architecture (PERA) methodology, transforming it into PERA 4.0.


Author(s): Henry Linger, Helen Hasan

The exponential growth of the Internet since the mid-1990s has greatly expanded the capacity of people everywhere to interconnect and engage through digital technologies. As a complex adaptive system of systems, the Internet has extended the range and complexity of phenomena of interest to Information Systems (IS) scholars. This is both an exciting opportunity and a challenge, which we explore in this paper by revisiting the Intellectual Structures Framework (Hirschheim et al. 1996), which attempted to make sense of the fragmented adhocracy of IS before the expansion and penetration of the Internet. We suggest that the IS adhocracy, with its multi-disciplinary and systems-oriented nature, gives IS researchers the requisite variety to contend with the increasingly diverse digital ecologies of IS-enabled human activities that have emerged in the ensuing two decades. Based on relevant research over these two decades, we present a revised framework that (1) reflects the complexities of contemporary IS phenomena and (2) can act as an instrument for analysing such phenomena across a spectrum of human activities. We justify the form and content of the Revised Intellectual Structures Framework, providing examples of its application in IS research using appropriate research methods and techniques. We argue that our revisions to the original framework provide individuals, organisations, and societies with a conceptual lens that is necessary to better address the challenges and opportunities posed by the complexities of contemporary digital ecologies.

