Reliability-and-Availability Sensitivity Analysis on Convergent Network Infrastructures: Methodology and Case Study

Author(s):  
Kadna Camboim ◽  
Erica Sousa ◽  
Almir Guimaraes ◽  
Jean Araujo ◽  
Paulo Maciel


2021 ◽
Vol 21 (1) ◽  
Author(s):  
Markus J. Ankenbrand ◽  
Liliia Shainberg ◽  
Michael Hock ◽  
David Lohr ◽  
Laura M. Schreiber

Abstract
Background: Image segmentation is a common task in medical imaging, e.g., for volumetry analysis in cardiac MRI. Artificial neural networks are used to automate this task with performance similar to manual operators. However, this performance is only achieved on the narrow tasks the networks are trained on. Performance drops dramatically when data characteristics differ from the training set properties. Moreover, neural networks are commonly considered black boxes, because it is hard to understand how they make decisions and why they fail. Therefore, it is also hard to predict whether they will generalize and work well with new data. Here we present a generic method for segmentation model interpretation. Sensitivity analysis is an approach in which the model input is modified in a controlled manner and the effect of these modifications on the model output is evaluated. This method yields insights into the sensitivity of the model to these alterations and therefore to the importance of certain features for segmentation performance.
Results: We present an open-source Python library (misas) that facilitates the use of sensitivity analysis with arbitrary data and models. We show that this method is a suitable approach to answer practical questions regarding the use and functionality of segmentation models. We demonstrate this in two case studies on cardiac magnetic resonance imaging. The first case study explores the suitability of a published network for use on a public dataset the network has not been trained on. The second case study demonstrates how sensitivity analysis can be used to evaluate the robustness of a newly trained model.
Conclusions: Sensitivity analysis is a useful tool for deep learning developers as well as users such as clinicians. It extends their toolbox, enabling and improving the interpretability of segmentation models. Enhancing our understanding of neural networks through sensitivity analysis also assists in decision making. Although demonstrated only on cardiac magnetic resonance images, this approach and software are much more broadly applicable.
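As a rough illustration of the perturbation-and-compare idea described in the abstract, the sketch below rotates an input image in controlled steps, re-runs a segmentation model, and scores each result against a reference mask. It uses PyTorch/torchvision rather than the misas API itself, and `model`, `image`, and `reference_mask` are hypothetical placeholders.

```python
# Minimal sketch of input-perturbation sensitivity analysis for a segmentation
# model. Names (model, image, reference_mask) are placeholders, not the misas
# API; misas wraps this general pattern for arbitrary models and data.
import torch
import torchvision.transforms.functional as TF

def dice(pred, target, eps=1e-6):
    """Dice overlap between two binary masks."""
    inter = (pred & target).sum().item()
    return (2 * inter + eps) / (pred.sum().item() + target.sum().item() + eps)

def rotation_sensitivity(model, image, reference_mask, angles=range(0, 360, 15)):
    """Rotate the input in controlled steps and record segmentation quality."""
    scores = {}
    model.eval()
    with torch.no_grad():
        for angle in angles:
            rotated = TF.rotate(image, angle)             # perturb the input
            pred = model(rotated.unsqueeze(0)).argmax(1)  # re-run the model
            # rotate the prediction back so it can be compared to the reference;
            # any non-background class is treated as foreground for the Dice score
            pred_back = TF.rotate(pred.float(), -angle).squeeze(0).bool()
            scores[angle] = dice(pred_back, reference_mask.bool())
    return scores  # e.g. plot Dice vs. angle to see where the model breaks down
```

Plotting the returned scores against the perturbation strength (here, the rotation angle) shows at which point the model's performance degrades, which is the kind of practical question the case studies address.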


2018 ◽  
Vol 225 ◽  
pp. 05002
Author(s):  
Freselam Mulubrhan ◽  
Ainul Akmar Mokhtar ◽  
Masdi Muhammad

A sensitivity analysis is typically conducted to identify how sensitive the output is to changes in the input. In this paper, the use of sensitivity analysis in fuzzy activity-based life cycle costing (LCC) is shown. LCC is the most frequently used economic model for decision making that considers all costs in the life of a system or piece of equipment. The sensitivity analysis is done by varying the interest rate and the time by 15% and 45%, respectively, to the left and right, and by varying the maintenance and operation costs by 25%. It is found that the operation cost and the interest rate have a high impact on the final output of the LCC. A case study of pumps is used in this study.
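A minimal sketch of the one-at-a-time variation described above, using a crisp (non-fuzzy) present-value LCC model; the cost items, discount rate, and percentages are invented pump figures for illustration, not the paper's fuzzy model.

```python
# Illustrative one-at-a-time sensitivity on a simple (crisp, non-fuzzy)
# life cycle cost model; all figures are made up for the example.
def lcc(acquisition, annual_operation, annual_maintenance, rate, years):
    """Present value of life cycle cost with a constant discount rate."""
    pv_factor = sum(1.0 / (1.0 + rate) ** t for t in range(1, years + 1))
    return acquisition + (annual_operation + annual_maintenance) * pv_factor

base = dict(acquisition=50_000, annual_operation=12_000,
            annual_maintenance=4_000, rate=0.08, years=15)
base_lcc = lcc(**base)

# Vary one input at a time (e.g. +/-15% on the rate, +/-25% on the cost items)
for name, delta in [("rate", 0.15), ("annual_operation", 0.25),
                    ("annual_maintenance", 0.25)]:
    for sign in (-1, +1):
        changed = dict(base, **{name: base[name] * (1 + sign * delta)})
        print(f"{name} {sign * delta:+.0%}: LCC changes by "
              f"{(lcc(**changed) - base_lcc) / base_lcc:+.1%}")
```

Comparing the relative change in LCC across inputs shows which parameters dominate the final cost, mirroring the finding that operation cost and interest rate have the highest impact.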


2011 ◽  
Vol 693 ◽  
pp. 3-9 ◽  
Author(s):  
Bruce Gunn ◽  
Yakov Frayman

The scheduling of metal to different casters in a casthouse is a complicated problem, attempting to find the balance between pot-line, crucible carrier, furnace and casting machine capacity. In this paper, a description will be given of a casthouse modelling system designed to test different scenarios for casthouse design and operation. Using discrete-event simulation, the casthouse model incorporates variable arrival times of metal carriers, crucible movements, caster operation and furnace conditions. Each part of the system is individually modelled and synchronised using a series of signals or semaphores. In addition, an easy-to-operate user interface allows for the modification of key parameters and analysis of model output. Results from the model will be presented for a case study, which highlights the effect different parameters have on overall casthouse performance. The case study uses past production data from a casthouse to validate the model outputs, with the aim of performing a sensitivity analysis on the overall system. Along with metal preparation times and caster strip-down/setup, the temperature evolution within the furnaces is one key parameter in determining casthouse performance.
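The flow described above (variable carrier arrivals, furnace preparation, casting) could be sketched with a generic discrete-event simulation library such as SimPy; the toy model below is not the authors' modelling system, and all capacities and process times are invented.

```python
# Toy SimPy sketch of a crucible-arrival / furnace / caster flow.
# Not the authors' casthouse model; all times and capacities are invented.
import random
import simpy

RANDOM_SEED = 42
SIM_HOURS = 24 * 7

def crucible(env, furnaces, caster, log):
    with furnaces.request() as req:                     # wait for a free furnace
        yield req
        yield env.timeout(random.uniform(1.0, 2.5))     # metal preparation time
    with caster.request() as req:                       # wait for the casting machine
        yield req
        yield env.timeout(random.uniform(0.5, 1.0))     # casting time
    log.append(env.now)

def carrier_arrivals(env, furnaces, caster, log):
    while True:
        yield env.timeout(random.expovariate(1 / 1.5))  # variable arrival times
        env.process(crucible(env, furnaces, caster, log))

random.seed(RANDOM_SEED)
env = simpy.Environment()
furnaces = simpy.Resource(env, capacity=2)
caster = simpy.Resource(env, capacity=1)
completed = []
env.process(carrier_arrivals(env, furnaces, caster, completed))
env.run(until=SIM_HOURS)
print(f"crucibles cast in {SIM_HOURS} h: {len(completed)}")
```

Re-running the simulation while varying one parameter at a time (arrival rate, furnace capacity, preparation time) is the kind of sensitivity analysis the case study performs on the overall system.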


2021 ◽  
Author(s):  
Andrés Martínez

A METHODOLOGY FOR OPTIMIZING MODELING CONFIGURATION IN THE NUMERICAL MODELING OF OIL CONCENTRATIONS IN UNDERWATER BLOWOUTS: A NORTH SEA CASE STUDY

Andrés Martínez*, Ana J. Abascal, Andrés García, Beatriz Pérez-Díaz, Germán Aragón, Raúl Medina
IHCantabria - Instituto de Hidráulica Ambiental de la Universidad de Cantabria, Avda. Isabel Torres, 15, 39011 Santander, Spain
* Corresponding author: [email protected]

Underwater oil and gas blowouts are not easy to repair. It may take months before the well is finally capped, releasing large amounts of oil into the marine environment. In addition, persistent oils (crude oil, fuel oil, etc.) break up and dissipate slowly, so they often reach the shore before the cleanup is completed, affecting vast extensions of sea and ocean and posing a major threat to marine organisms.

On account of the above, numerical modeling of underwater blowouts demands great computing power. High-resolution, long-term databases of wind and ocean currents are needed to properly model the trajectory of the spill at both the regional (open sea) and local (coastline) level, as well as to account for temporal variability. Moreover, a large number of particles and a high-resolution grid are unavoidable in order to ensure accurate modeling of oil concentrations, which is of utmost importance in risk assessment so that threshold concentrations can be established (threshold concentrations indicate what level of exposure to a compound could harm marine organisms).

In this study, an innovative methodology has been developed for the purpose of optimizing the modeling configuration (number of particles and grid resolution) in the modeling of an underwater blowout, with a view to accurately representing oil concentrations, especially when threshold concentrations are considered. To this end, statistical analyses (dimensionality reduction and clustering techniques) and numerical modeling have been applied.

The methodology comprises the following steps: (i) classification of i representative clusters of forcing patterns (based on PCA and K-means algorithms) from long-term wind and ocean current hindcast databases, so that forcing variability in the study area is accounted for; (ii) definition of j modeling scenarios, based on key blowout parameters (oil type, flow rate, etc.) and modeling configuration (number of particles and grid resolution); (iii) Lagrangian trajectory modeling of the combination of the i clusters of forcing patterns and the j modeling scenarios; (iv) sensitivity analysis of the Lagrangian trajectory model output (oil concentrations) to the modeling configuration; (v) finally, as a result, the optimal modeling configuration for a given underwater blowout (its key parameters) is provided. A sketch of step (i) is given after this abstract.

The methodology has been applied to a hypothetical underwater blowout in the North Sea, one of the world's most active seas in terms of offshore oil and gas exploration and production. An oil spill with a flow rate of 5,000 cubic meters per day, flowing from the well over a 15-day period, has been modeled (assuming a 31-day period of subsequent drift, for a 46-day simulation). Moreover, threshold concentrations of 0.1, 0.25, 1 and 10 grams per square meter have been applied in the sensitivity analysis.
The findings of this study stress the importance of the modeling configuration for accurate modeling of oil concentrations, in particular if lower threshold concentrations are considered.
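Step (i), classifying representative forcing patterns with PCA and K-means, could be sketched with scikit-learn as follows; the shape and content of the `hindcast` array and all parameter values are assumptions made for illustration, not the study's actual configuration.

```python
# Sketch of step (i): representative forcing clusters via PCA + K-means.
# `hindcast` is a hypothetical array of shape (n_timesteps, n_features),
# e.g. wind and current components at every grid node, stacked per time step.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def forcing_clusters(hindcast, n_components=0.95, n_clusters=12, seed=0):
    """Return cluster labels and the hindcast time step closest to each centroid."""
    scaled = StandardScaler().fit_transform(hindcast)
    scores = PCA(n_components=n_components).fit_transform(scaled)  # keep 95% variance
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(scores)
    # pick the real time step nearest each centroid as the representative pattern
    representatives = [
        int(np.argmin(np.linalg.norm(scores - c, axis=1))) for c in km.cluster_centers_
    ]
    return km.labels_, representatives

# labels, reps = forcing_clusters(hindcast, n_clusters=12)
# Each representative time step could then drive one Lagrangian simulation (step iii).
```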


2018 ◽  
Vol 18 (11) ◽  
pp. 3089-3108 ◽  
Author(s):  
Ayse Duha Metin ◽  
Nguyen Viet Dung ◽  
Kai Schröter ◽  
Björn Guse ◽  
Heiko Apel ◽  
...  

Abstract. Flood risk is impacted by a range of physical and socio-economic processes. Hence, the quantification of flood risk ideally considers the complete flood risk chain, from atmospheric processes through catchment and river system processes to damage mechanisms in the affected areas. Although it is generally accepted that a multitude of changes along the risk chain can occur and impact flood risk, there is a lack of knowledge of how and to what extent changes in influencing factors propagate through the chain and finally affect flood risk. To fill this gap, we present a comprehensive sensitivity analysis which considers changes in all risk components, i.e. changes in climate, catchment, river system, land use, assets, and vulnerability. The application of this framework to the mesoscale Mulde catchment in Germany shows that flood risk can vary dramatically as a consequence of plausible change scenarios. It further reveals that components that have not received much attention, such as changes in dike systems or in vulnerability, may outweigh changes in often investigated components, such as climate. Although the specific results are conditional on the case study area and the selected assumptions, they emphasize the need for a broader consideration of potential drivers of change in a comprehensive way. Hence, our approach contributes to a better understanding of how the different risk components influence the overall flood risk.
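As a toy illustration of how plausible changes in individual risk components could be combined and propagated to an overall risk figure, the sketch below enumerates change factors per component; the multiplicative risk chain, the components varied, and all factors are invented for illustration and are not the model or scenarios used in the study.

```python
# Toy illustration of propagating change scenarios along a simplified risk chain.
# The chain structure and all numbers are invented for illustration only.
from itertools import product

def flood_risk(climate=1.0, catchment=1.0, river_dikes=1.0,
               land_use=1.0, assets=1.0, vulnerability=1.0, base=100.0):
    """Expected annual damage as a product of multiplicative change factors."""
    return base * climate * catchment * river_dikes * land_use * assets * vulnerability

scenarios = {                      # plausible change factors per risk component
    "climate":       [0.9, 1.0, 1.2],
    "river_dikes":   [0.6, 1.0, 1.5],   # e.g. dike heightening vs. degradation
    "vulnerability": [0.7, 1.0, 1.3],
}

base_risk = flood_risk()
for combo in product(*scenarios.values()):
    kwargs = dict(zip(scenarios.keys(), combo))
    change = flood_risk(**kwargs) / base_risk - 1
    print(", ".join(f"{k}={v}" for k, v in kwargs.items()), f"-> risk {change:+.0%}")
```

Even in this simplified form, the enumeration shows how a large change in a rarely examined component (e.g. dikes or vulnerability) can outweigh a moderate change in climate, which is the qualitative point the study makes.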

