Constant-time discontinuity map for forward sensitivity analysis to initial conditions: Spurs detection in fractional-N PLL as a case study

Author(s):  
Federico Bizzarri ◽  
Angelo Brambilla ◽  
Alessandro Colombo ◽  
Sergio Callegari
2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Markus J. Ankenbrand ◽  
Liliia Shainberg ◽  
Michael Hock ◽  
David Lohr ◽  
Laura M. Schreiber

Abstract
Background: Image segmentation is a common task in medical imaging, e.g., for volumetry analysis in cardiac MRI. Artificial neural networks are used to automate this task with performance similar to manual operators. However, this performance is only achieved on the narrow tasks the networks are trained on. Performance drops dramatically when data characteristics differ from the training set properties. Moreover, neural networks are commonly considered black boxes, because it is hard to understand how they make decisions and why they fail. Therefore, it is also hard to predict whether they will generalize and work well with new data. Here we present a generic method for segmentation model interpretation. Sensitivity analysis is an approach where the model input is modified in a controlled manner and the effect of these modifications on the model output is evaluated. This yields insights into the sensitivity of the model to these alterations and therefore into the importance of certain features for segmentation performance.
Results: We present an open-source Python library (misas) that facilitates the use of sensitivity analysis with arbitrary data and models. We show that this method is a suitable approach for answering practical questions regarding the use and functionality of segmentation models. We demonstrate this in two case studies on cardiac magnetic resonance imaging. The first case study explores the suitability of a published network for use on a public dataset the network has not been trained on. The second case study demonstrates how sensitivity analysis can be used to evaluate the robustness of a newly trained model.
Conclusions: Sensitivity analysis is a useful tool for deep learning developers as well as users such as clinicians. It extends their toolbox, enabling and improving the interpretability of segmentation models. Enhancing our understanding of neural networks through sensitivity analysis also assists in decision making. Although demonstrated only on cardiac magnetic resonance images, this approach and software are much more broadly applicable.
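As a rough illustration of the sensitivity-analysis idea described above (not the misas API itself), the following Python sketch applies a controlled transformation to an input image, re-runs a stand-in segmentation model, and measures how the Dice overlap with the baseline prediction changes. The model, the transformation, and all numerical values are hypothetical placeholders.

```python
# Minimal sketch of segmentation sensitivity analysis, independent of the
# misas API: apply controlled transformations to an input image, re-run a
# (hypothetical) segmentation model, and track how the Dice overlap with the
# baseline prediction degrades.
import numpy as np


def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap between two binary masks."""
    inter = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * inter / total if total > 0 else 1.0


def sensitivity_curve(model, image, transform, levels):
    """Evaluate model robustness to one transformation at several strengths.

    `model(image) -> binary mask` and `transform(image, level) -> image`
    are assumed interfaces, not part of any specific library.
    """
    baseline = model(image)
    return [(lvl, dice(baseline, model(transform(image, lvl)))) for lvl in levels]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((128, 128))

    # Toy "model": threshold segmentation, standing in for a neural network.
    model = lambda x: x > 0.5
    # Toy transformation: additive brightness shift of a given level.
    brighten = lambda x, lvl: np.clip(x + lvl, 0.0, 1.0)

    for level, score in sensitivity_curve(model, img, brighten, [0.0, 0.1, 0.2, 0.4]):
        print(f"brightness +{level:.1f}: Dice vs. baseline = {score:.3f}")
```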


2018 ◽  
Vol 225 ◽  
pp. 05002
Author(s):  
Freselam Mulubrhan ◽  
Ainul Akmar Mokhtar ◽  
Masdi Muhammad

A sensitivity analysis is typically conducted to identify how sensitive the output is to changes in the input. In this paper, the use of sensitivity analysis in fuzzy activity-based life cycle costing (LCC) is shown. LCC is the most frequently used economic model for decision making that considers all costs in the life of a system or piece of equipment. The sensitivity analysis is performed by varying the interest rate and the time by 15% and 45%, respectively, in both directions, and by varying the maintenance and operation costs by 25%. It is found that the operation cost and the interest rate have a strong impact on the final output of the LCC. A case study of pumps is used in this study.
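A minimal, purely illustrative sketch of this kind of one-at-a-time sensitivity sweep on a deterministic (non-fuzzy) LCC model is given below; the cost figures, interest rate, and horizon are invented placeholders, not values from the paper.

```python
# Illustrative sensitivity sweep for a deterministic life cycle cost (LCC)
# model; the cost figures and the variation ranges are hypothetical and only
# mirror the kind of perturbations described in the abstract.
def lcc(acquisition, operation, maintenance, interest, years):
    """Present-value LCC: acquisition plus discounted annual O&M costs."""
    pv_om = sum((operation + maintenance) / (1.0 + interest) ** t
                for t in range(1, years + 1))
    return acquisition + pv_om


base = dict(acquisition=50_000.0, operation=8_000.0,
            maintenance=3_000.0, interest=0.08, years=15)
base_lcc = lcc(**base)

# Perturb one input at a time and report the relative change in LCC.
scenarios = {
    "interest -15%": dict(base, interest=base["interest"] * 0.85),
    "interest +15%": dict(base, interest=base["interest"] * 1.15),
    "interest -45%": dict(base, interest=base["interest"] * 0.55),
    "interest +45%": dict(base, interest=base["interest"] * 1.45),
    "operation -25%": dict(base, operation=base["operation"] * 0.75),
    "operation +25%": dict(base, operation=base["operation"] * 1.25),
}
for name, params in scenarios.items():
    delta = (lcc(**params) - base_lcc) / base_lcc * 100.0
    print(f"{name:15s}: LCC change = {delta:+.1f}%")
```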


2010 ◽  
Vol 2010 ◽  
pp. 1-14 ◽  
Author(s):  
Mohammad Ali Badamchizadeh ◽  
Iraj Hassanzadeh ◽  
Mehdi Abedinpour Fallah

Robust nonlinear control of flexible-joint robots requires that the link position, velocity, acceleration, and jerk be available. In this paper, we derive the dynamic model of a nonlinear flexible-joint robot based on the governing Euler-Lagrange equations and propose extended and unscented Kalman filters to estimate the link acceleration and jerk from position and velocity measurements. Both observers are designed for the same model and run with the same covariance matrices under the same initial conditions. A five-bar linkage robot with revolute flexible joints is considered as a case study. Simulation results verify the effectiveness of the proposed filters.
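As a simplified illustration of the estimation idea (not the paper's EKF/UKF applied to the full flexible-joint dynamics), the sketch below runs a plain linear Kalman filter on a constant-jerk kinematic model to recover acceleration and jerk from noisy position and velocity measurements; the noise levels and test trajectory are assumed.

```python
# Minimal sketch: a linear Kalman filter on a constant-jerk kinematic model,
# estimating acceleration and jerk from noisy position/velocity measurements.
# All numerical values are hypothetical.
import numpy as np

dt = 0.01
# Constant-jerk state transition for state x = [pos, vel, acc, jerk].
F = np.array([[1, dt, dt**2 / 2, dt**3 / 6],
              [0, 1, dt, dt**2 / 2],
              [0, 0, 1, dt],
              [0, 0, 0, 1]])
H = np.array([[1.0, 0, 0, 0],        # position is measured
              [0, 1.0, 0, 0]])       # velocity is measured
Q = 1e-6 * np.eye(4)                 # process noise (assumed)
R = np.diag([1e-4, 1e-3])            # measurement noise (assumed)

x = np.zeros(4)                      # state estimate
P = np.eye(4)                        # estimate covariance

rng = np.random.default_rng(1)
t = np.arange(0.0, 2.0, dt)
true_pos = np.sin(2 * np.pi * t)     # synthetic link trajectory
true_vel = 2 * np.pi * np.cos(2 * np.pi * t)

for k in range(len(t)):
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with noisy position/velocity measurements.
    z = np.array([true_pos[k], true_vel[k]]) + rng.normal(0, [1e-2, 3e-2])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P

print(f"estimated acceleration = {x[2]:.2f}, jerk = {x[3]:.2f}")
```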


2011 ◽  
Vol 693 ◽  
pp. 3-9 ◽  
Author(s):  
Bruce Gunn ◽  
Yakov Frayman

The scheduling of metal to different casters in a casthouse is a complicated problem, requiring a balance between pot-line, crucible carrier, furnace, and casting machine capacity. In this paper, a description is given of a casthouse modelling system designed to test different scenarios for casthouse design and operation. Using discrete-event simulation, the casthouse model incorporates variable arrival times of metal carriers, crucible movements, caster operation, and furnace conditions. Each part of the system is individually modelled and synchronised using a series of signals or semaphores. In addition, an easy-to-operate user interface allows for the modification of key parameters and analysis of model output. Results from the model are presented for a case study, which highlights the effect different parameters have on overall casthouse performance. The case study uses past production data from a casthouse to validate the model outputs, with the aim of performing a sensitivity analysis on the overall system. Along with metal preparation times and caster strip-down/setup, the temperature evolution within the furnaces is a key parameter in determining casthouse performance.
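A toy discrete-event sketch of this kind of casthouse flow, written with the SimPy library rather than the authors' modelling system, is shown below; the arrival rates, processing times, and resource capacities are hypothetical placeholders.

```python
# A highly simplified discrete-event sketch of the casthouse flow described
# above, using SimPy: crucible carriers arrive at variable intervals, metal is
# prepared in a shared furnace, then cast. All timings and capacities are
# hypothetical placeholders, not taken from the paper.
import random
import simpy


def crucible(env, name, furnaces, caster):
    arrive = env.now
    with furnaces.request() as req:          # wait for a free furnace
        yield req
        yield env.timeout(random.uniform(30, 60))   # metal preparation time
    with caster.request() as req:            # wait for the casting machine
        yield req
        yield env.timeout(random.uniform(20, 40))   # casting time
    print(f"{name}: done at t={env.now:6.1f} min "
          f"(total {env.now - arrive:5.1f} min)")


def carrier_arrivals(env, furnaces, caster):
    """Generate crucible carriers with variable inter-arrival times."""
    i = 0
    while True:
        yield env.timeout(random.expovariate(1 / 25.0))  # ~25 min mean gap
        i += 1
        env.process(crucible(env, f"crucible-{i}", furnaces, caster))


random.seed(42)
env = simpy.Environment()
furnaces = simpy.Resource(env, capacity=2)   # two holding furnaces
caster = simpy.Resource(env, capacity=1)     # one casting machine
env.process(carrier_arrivals(env, furnaces, caster))
env.run(until=480)                           # one 8-hour shift
```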


2015 ◽  
Vol 2015 ◽  
pp. 1-21 ◽  
Author(s):  
Kese Pontes Freitas Alberton ◽  
André Luís Alberton ◽  
Jimena Andrea Di Maggio ◽  
Vanina Gisela Estrada ◽  
María Soledad Díaz ◽  
...  

This work proposes a procedure for simultaneous parameter identifiability analysis and estimation in metabolic networks in order to overcome difficulties associated with a lack of experimental data and a large number of parameters, a common scenario in the modeling of such systems. As a case study, the complex real problem of parameter identifiability in the Escherichia coli K-12 W3110 dynamic model was investigated; the model comprises 18 ordinary differential equations and 35 kinetic rates, containing 125 parameters. With the procedure, the model fit was improved for most of the measured metabolites, with 58 parameters estimated, including 5 unknown initial conditions. The results indicate that the simultaneous parameter identifiability and estimation approach in metabolic networks is appealing, since a good model fit to most of the measured metabolites was possible even when important measurements of intracellular metabolites and good initial estimates of the parameters were not available.
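A toy sketch of the general idea, i.e. ranking parameters by output sensitivity and then estimating only the most identifiable ones, is shown below on a deliberately small two-state kinetic model (not the E. coli network); parameter names and values are invented for illustration.

```python
# Toy sketch of identifiability ranking followed by estimation of the selected
# parameters, in the spirit of the procedure described above but on a tiny
# two-state model (not the E. coli network itself).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares


def model(t, y, p):
    """Two-state toy kinetic model with parameters p = [vmax, km, kdeg]."""
    s, x = y
    vmax, km, kdeg = p
    rate = vmax * s / (km + s)
    return [-rate, rate - kdeg * x]


def simulate(p, t_eval, y0=(1.0, 0.0)):
    sol = solve_ivp(model, (t_eval[0], t_eval[-1]), y0, args=(p,), t_eval=t_eval)
    return sol.y.ravel()


t = np.linspace(0.0, 10.0, 25)
p_true = np.array([1.0, 0.5, 0.2])
rng = np.random.default_rng(3)
data = simulate(p_true, t) + rng.normal(0.0, 0.01, size=2 * len(t))

# 1) Rank parameters by the norm of their (finite-difference) output sensitivity.
p0 = np.array([0.8, 0.8, 0.3])          # rough initial guess
base = simulate(p0, t)
rank = []
for j, name in enumerate(["vmax", "km", "kdeg"]):
    dp = p0.copy()
    dp[j] *= 1.01
    sens = (simulate(dp, t) - base) / (0.01 * p0[j])
    rank.append((np.linalg.norm(sens), name, j))
rank.sort(reverse=True)
print("identifiability ranking:", [name for _, name, _ in rank])

# 2) Estimate only the two most identifiable parameters, fixing the rest.
idx = [j for _, _, j in rank[:2]]


def residuals(theta):
    p = p0.copy()
    p[idx] = theta
    return simulate(p, t) - data


fit = least_squares(residuals, p0[idx])
print("estimated values:", dict(zip([rank[0][1], rank[1][1]], fit.x.round(3))))
```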


2021 ◽  
Author(s):  
Andrés Martínez

A methodology for optimizing modeling configuration in the numerical modeling of oil concentrations in underwater blowouts: A North Sea case study

Andrés Martínez (*), Ana J. Abascal, Andrés García, Beatriz Pérez-Díaz, Germán Aragón, Raúl Medina
IHCantabria - Instituto de Hidráulica Ambiental de la Universidad de Cantabria, Avda. Isabel Torres, 15, 39011 Santander, Spain
(*) Corresponding author: [email protected]

Underwater oil and gas blowouts are not easy to repair. It may take months before the well is finally capped, releasing large amounts of oil into the marine environment. In addition, persistent oils (crude oil, fuel oil, etc.) break up and dissipate slowly, so they often reach the shore before the cleanup is completed, affecting vast extents of sea and posing a major threat to marine organisms.

Consequently, numerical modeling of underwater blowouts demands great computing power. High-resolution, long-term databases of winds and ocean currents are needed to properly model the trajectory of the spill at both the regional (open sea) and local (coastline) level and to account for temporal variability. Moreover, a large number of particles and a high-resolution grid are unavoidable in order to model oil concentrations accurately, which is of utmost importance in risk assessment, where threshold concentrations must be established (a threshold concentration indicates the level of exposure to a compound that could harm marine organisms).

In this study, a methodology has been developed to optimize the modeling configuration (number of particles and grid resolution) for the modeling of an underwater blowout, with a view to accurately representing oil concentrations, especially when threshold concentrations are considered. To this end, statistical analyses (dimensionality reduction and clustering techniques) and numerical modeling have been applied.

The methodology comprises the following steps: (i) classification of i representative clusters of forcing patterns (based on PCA and K-means algorithms) from long-term wind and ocean-current hindcast databases, so that forcing variability in the study area is accounted for; (ii) definition of j modeling scenarios, based on key blowout parameters (oil type, flow rate, etc.) and modeling configuration (number of particles and grid resolution); (iii) Lagrangian trajectory modeling of the combination of the i clusters of forcing patterns and the j modeling scenarios; (iv) sensitivity analysis of the Lagrangian trajectory model output (oil concentrations) to the modeling configuration; and (v) as a result, provision of the optimal modeling configuration for a given underwater blowout and its key parameters.

The methodology has been applied to a hypothetical underwater blowout in the North Sea, one of the world’s most active seas in terms of offshore oil and gas exploration and production. An oil spill with a flow rate of 5,000 cubic meters per day, flowing from the well over a 15-day period, has been modeled (assuming a 31-day period of subsequent drift, for a 46-day simulation). Threshold concentrations of 0.1, 0.25, 1, and 10 grams per square meter have been applied in the sensitivity analysis.
The findings of this study stress the importance of the modeling configuration for accurate modeling of oil concentrations, in particular when lower threshold concentrations are considered.
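For step (i), a minimal sketch of the PCA plus K-means classification of forcing patterns is given below, using scikit-learn on synthetic placeholder data standing in for the wind and current hindcast fields.

```python
# Sketch of step (i): reduce long-term wind/current hindcast fields with PCA
# and group them into representative forcing clusters with K-means. The data
# here are synthetic placeholders; in practice each row would be one time
# snapshot of the gridded wind and current components.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
n_snapshots, n_grid_values = 2000, 500      # hypothetical hindcast dimensions
hindcast = rng.normal(size=(n_snapshots, n_grid_values))

# Keep enough principal components to explain ~95% of the variance.
pca = PCA(n_components=0.95)
scores = pca.fit_transform(hindcast)
print(f"retained {pca.n_components_} components")

# Classify the snapshots into i representative forcing patterns.
i_clusters = 8
km = KMeans(n_clusters=i_clusters, n_init=10, random_state=0).fit(scores)

# Each cluster centroid (back-projected to the original space) is one
# representative forcing pattern used to drive the Lagrangian simulations.
patterns = pca.inverse_transform(km.cluster_centers_)
print("representative patterns shape:", patterns.shape)
```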


2018 ◽  
Vol 20 (6) ◽  
pp. 1387-1400
Author(s):  
Yiqun Sun ◽  
Weimin Bao ◽  
Peng Jiang ◽  
Xuying Wang ◽  
Chengmin He ◽  
...  

Abstract The dynamic system response curve (DSRC) has its origin in correcting model variables of hydrologic models to improve the accuracy of flood prediction. The DSRC method can show unstable performance because the least squares (LS) method, employed by DSRC to estimate the errors, often breaks down for ill-posed problems. A previous study has shown that, under certain assumptions, the DSRC method can be regarded as a specific form of the numerical solution of the Fredholm equation of the first kind, which is a typical ill-posed problem. This paper introduces the truncated singular value decomposition (TSVD) to propose an improved version of the DSRC method (TSVD-DSRC). The proposed method is extended to correct the initial conditions of a conceptual hydrological model. The usefulness of the proposed method is first demonstrated via a synthetic case study where the perturbed initial conditions, the true initial conditions, and the corrected initial conditions are all precisely known. Then the proposed method is applied in two real basins. The results, measured by two different criteria, clearly demonstrate that correcting the initial conditions of hydrological models significantly improves model performance. Similarly good results are obtained for the real case studies.
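To illustrate why truncation helps, the following sketch contrasts an ordinary least-squares solution of an ill-conditioned synthetic linear system with a TSVD solution that discards the smallest singular values; this is a generic example, not the DSRC formulation or the hydrological model itself.

```python
# Illustrative sketch of the truncated SVD (TSVD) idea behind TSVD-DSRC:
# an ill-conditioned linear system is solved by ordinary least squares and by
# TSVD, which drops the smallest singular values to stabilize the solution.
# The system below is a generic synthetic example, not the hydrological model.
import numpy as np

rng = np.random.default_rng(5)
n = 20
# Build an ill-conditioned matrix by shrinking its trailing singular values.
U, _ = np.linalg.qr(rng.normal(size=(n, n)))
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
s = np.logspace(0, -12, n)                    # singular values from 1 to 1e-12
A = U @ np.diag(s) @ V.T

x_true = rng.normal(size=n)
b = A @ x_true + rng.normal(scale=1e-8, size=n)   # slightly noisy data

# Ordinary least squares: noise is amplified by the tiny singular values.
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]

# TSVD: keep only the k largest singular values when inverting.
k = 10
Uk, sk, Vtk = np.linalg.svd(A)
x_tsvd = Vtk[:k].T @ ((Uk[:, :k].T @ b) / sk[:k])

print("LS error:  ", np.linalg.norm(x_ls - x_true))
print("TSVD error:", np.linalg.norm(x_tsvd - x_true))
```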

