Correlated failures
Recently Published Documents


TOTAL DOCUMENTS: 115 (last five years: 14)
H-INDEX: 17 (last five years: 1)

2021, Vol. 197, pp. 107280
Author(s): Javiera Barrera, Pauline Beaupuits, Eduardo Moreno, Rodrigo Moreno, Francisco D. Muñoz

Author(s): Caroline A. Johnson, Allison C. Reilly, Roger Flage, Seth D. Guikema

Knowing the ability of networked infrastructure to maintain operability following a spatially distributed hazard (e.g., an earthquake or a hurricane) is paramount to managing risk and planning for recovery. Leveraging topological properties of the network, along with characteristics of the hazard field, may be an expedient way of predicting network robustness compared to more computationally intensive simulation methods. Prior work has shown that topological properties are informative for predicting robustness, measured here as the relative size of the largest connected subgraph after failures, especially for networks experiencing random failures. While this measure does not equate to full engineering-based performance, it does provide an indication of the robustness of the network. In this work, we consider the effect that spatially correlated failures have on network robustness, using only spatial properties of the hazard and topological properties of the networks. The results show that the spatial properties of the hazard, together with the mean nodal degree, mean clustering coefficient, clustering coefficient standard deviation, and path length standard deviation, are the most influential factors in characterizing network robustness. Based on these results, recommendations are made for infrastructure managers and owners to consider when improving existing systems or designing new infrastructure, including examining the known locations of potential hazards in relation to the system and considering the level of redundancy within the system.
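The robustness measure described in this abstract is straightforward to prototype. Below is a minimal sketch in Python (using the networkx library, which is our assumption, not the paper's tooling): nodes inside a hazard footprint fail with some probability, and robustness is the relative size of the largest connected subgraph that survives. The geometric test network, hazard radius, and failure probability are illustrative choices only.

```python
import random
import statistics
import networkx as nx

def euclid(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def robustness_under_hazard(G, pos, center, radius, p_fail, rng):
    """Fail each node inside the hazard footprint with probability p_fail,
    then return the relative size of the largest connected subgraph
    (the robustness measure used in the abstract above)."""
    survivors = [
        n for n in G.nodes
        if not (euclid(pos[n], center) <= radius and rng.random() < p_fail)
    ]
    H = G.subgraph(survivors)
    if H.number_of_nodes() == 0:
        return 0.0
    largest = max(nx.connected_components(H), key=len)
    return len(largest) / G.number_of_nodes()

rng = random.Random(1)
# A random geometric graph as a stand-in for a spatial infrastructure network.
G = nx.random_geometric_graph(200, 0.12, seed=1)
pos = nx.get_node_attributes(G, "pos")

# Topological predictors highlighted in the abstract.
degrees = [d for _, d in G.degree()]
clustering = list(nx.clustering(G).values())
print("mean nodal degree:   ", statistics.mean(degrees))
print("mean clustering:     ", statistics.mean(clustering))
print("clustering std. dev.:", statistics.stdev(clustering))

# Spatially correlated failures: a hazard of radius 0.3 centered in the
# unit square, failing affected nodes with probability 0.8.
scores = [robustness_under_hazard(G, pos, (0.5, 0.5), 0.3, 0.8, rng)
          for _ in range(200)]
print("mean relative size of largest component:", statistics.mean(scores))
```

Because the hazard footprint removes nodes that are close together, the damage is concentrated in one region, which is exactly why spatially correlated failures can fragment a network more severely than the same number of random failures.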


2020, Vol. 13 (8), pp. 183
Author(s): Viral V. Acharya, Aaditya M. Iyer, Rangarajan K. Sundaram

We address the paradox that financial innovations aimed at risk-sharing appear to have made the world riskier. Financial innovations facilitate the hedging of idiosyncratic risks among agents; aggregate risks, however, can be hedged only with liquid assets. When risk-sharing is primitive, agents self-hedge and hold more liquid assets; this buffers aggregate risks, resulting in fewer correlated failures than when risk-sharing is more developed. We apply this insight to build a model of a clearinghouse and show that as risk-sharing improves, aggregate liquidity falls but correlated failures rise. Public liquidity injections, for example in the form of a lender of last resort, can reduce this systemic risk ex post, but they induce lower ex-ante levels of private liquidity, which can in turn aggravate the welfare costs of such injections.
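As a toy numerical illustration of the mechanism (not the authors' model): proxy the degree of risk-sharing by the size of agents' private liquidity buffers, expose all agents to a common shock plus idiosyncratic noise, and count how often more than half fail at once. The Gaussian shocks, the 50% mass-failure threshold, and all parameters below are arbitrary assumptions.

```python
import random

def prob_mass_failure(buffer_size, n_agents=200, n_trials=2000, rng=None):
    """Fraction of trials in which more than half of the agents fail
    simultaneously (a 'correlated failure' event in this toy setup)."""
    rng = rng or random.Random(0)
    mass_failures = 0
    for _ in range(n_trials):
        shock = rng.gauss(0.0, 1.0)  # common aggregate shock
        failed = sum(
            1 for _ in range(n_agents)
            if shock + rng.gauss(0.0, 0.3) > buffer_size  # plus idiosyncratic noise
        )
        if failed > n_agents // 2:
            mass_failures += 1
    return mass_failures / n_trials

# Larger buffers stand in for more self-hedging (primitive risk-sharing);
# smaller buffers stand in for improved risk-sharing and less private liquidity.
for b in (0.5, 1.0, 1.5, 2.0):
    print(f"liquidity buffer {b:.1f}: P(mass failure) = {prob_mass_failure(b):.3f}")
```

Because the aggregate shock hits every agent, thinner buffers convert the same shock into many simultaneous failures, mirroring the abstract's claim that improved risk-sharing lowers aggregate liquidity while raising correlated failures.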


Author(s): Bentolhoda Jafary, Lance Fiondella, Ping-Chen Chang

Checkpointing is a technique for backing up work at periodic intervals so that if a computation fails, it can restart from the latest checkpoint rather than from the beginning. Checkpointing operations themselves take time, so it is necessary to consider the tradeoff between the time spent performing them and the time saved when computation restarts at a checkpoint. This article presents a method to model the impact of correlated failures on an application that performs a specified amount of computation and checkpoints at equidistant intervals during that computation. We develop a Markov model and superimpose a correlated life distribution. Two cases are considered: the first assumes that reaching a checkpoint resets the failure distribution; the second allows the failure probability to continue progressing across checkpoints. We illustrate the approach through a series of examples. The results indicate that correlation can negatively impact checkpointing, necessitating more frequent checkpoints and increasing the total time required, but that the approach can still identify the optimal number of equidistant checkpoints despite this correlation.
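For intuition on the tradeoff this article optimizes, here is a minimal sketch of the classical independent-failure baseline (memoryless exponential failures, no recovery cost), not the paper's correlated Markov model. It uses the standard result that a task of length L that must run failure-free, restarting from scratch on each failure, takes (exp(lam*L) - 1)/lam time in expectation; all parameter values are illustrative.

```python
import math

def expected_total_time(W, C, lam, k):
    """Expected completion time for work W split by k equidistant
    checkpoints (cost C each) under memoryless failures at rate lam."""
    seg = W / (k + 1)   # k checkpoints divide the work into k+1 segments
    unit = seg + C      # each segment plus its checkpoint must run failure-free
                        # (charging a checkpoint after the last segment is a
                        # harmless simplification)
    return (k + 1) * (math.exp(lam * unit) - 1) / lam

W, C, lam = 100.0, 0.5, 0.05   # illustrative work, checkpoint cost, failure rate
best_k = min(range(60), key=lambda k: expected_total_time(W, C, lam, k))
print("optimal number of checkpoints:", best_k)
print("expected completion time:", round(expected_total_time(W, C, lam, best_k), 2))
```

Too few checkpoints means expensive recomputation after each failure; too many means the checkpoint overhead dominates, so the expected time is minimized at an interior k. Per the abstract, correlated failures shift this optimum toward more frequent checkpoints and a higher total time.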

