Operational Risk Reverse Stress Testing: Optimal Solutions

2021 ◽  
Vol 26 (2) ◽  
pp. 38
Author(s):  
Peter Mitic

Selecting a suitable method to solve a black-box optimization problem that uses noisy data was considered. A targeted stop condition for the function to be optimized, implemented as a stochastic algorithm, makes established Bayesian methods inadmissible. A simple modification was proposed and shown to improve the optimization efficiency considerably. The optimization effectiveness was measured in terms of the mean and standard deviation of the number of function evaluations required to achieve the target. Comparisons with alternative methods showed that the modified Bayesian method and binary search were both performant, but in different ways. In a sequence of identical runs, the former had a lower expected value for the number of runs needed to find an optimal value. The latter had a lower standard deviation for the same sequence of runs. Additionally, we suggested a way to find an approximate solution to the same problem using symbolic computation. Faster results could be obtained at the expense of some impaired accuracy and increased memory requirements.
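The binary-search alternative compared above can be illustrated with a minimal sketch, assuming a monotone objective and a targeted stop condition; the function, bounds, and tolerance below are illustrative, not those used in the paper.

```python
# Hypothetical sketch: binary search for a parameter x such that f(x)
# lies within a tolerance of a target value, counting function
# evaluations (the efficiency measure used in the abstract).
# f is assumed monotone increasing on [lo, hi].

def targeted_binary_search(f, target, lo, hi, tol=1e-3, max_evals=100):
    evals = 0
    mid = (lo + hi) / 2.0
    while evals < max_evals:
        mid = (lo + hi) / 2.0
        value = f(mid)
        evals += 1
        if abs(value - target) <= tol:      # targeted stop condition
            return mid, evals
        if value < target:
            lo = mid
        else:
            hi = mid
    return mid, evals

# Deterministic stand-in for the slow stochastic objective:
x, n = targeted_binary_search(lambda x: x * x, target=2.0, lo=0.0, hi=2.0)
```

Because the interval halves at every step, the number of evaluations in a run is essentially fixed, which matches the low standard deviation reported for binary search.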

2008 ◽  
Vol 21 (24) ◽  
pp. 6710-6723 ◽  
Author(s):  
Jason E. Smerdon ◽  
Alexey Kaplan ◽  
Diana Chang

Abstract: The regularized expectation maximization (RegEM) method has been used in recent studies to derive climate field reconstructions of Northern Hemisphere temperatures during the last millennium. Original pseudoproxy experiments that tested RegEM [with ridge regression regularization (RegEM-Ridge)] standardized the input data in a way that improved the performance of the reconstruction method, but included data from the reconstruction interval for estimates of the mean and standard deviation of the climate field—information that is not available in real-world reconstruction problems. When standardizations are confined to the calibration interval only, pseudoproxy reconstructions performed with RegEM-Ridge suffer from warm biases and variance losses. Only cursory explanations of this so-called standardization sensitivity of RegEM-Ridge have been published, but they have suggested that the selection of the regularization (ridge) parameter by means of minimizing the generalized cross validation (GCV) function is the source of the effect. The origin of the standardization sensitivity is more thoroughly investigated herein and is shown not to be associated with the selection of the ridge parameter; sets of derived reconstructions reveal that GCV-selected ridge parameters are minimally different for reconstructions standardized either over both the reconstruction and calibration interval or over the calibration interval only. While GCV may select ridge parameters that are different from those that precisely minimize the error in pseudoproxy reconstructions, RegEM reconstructions performed with truly optimized ridge parameters are not significantly different from those that use GCV-selected ridge parameters. The true source of the standardization sensitivity is attributable to the inclusion or exclusion of additional information provided by the reconstruction interval, namely, the mean and standard deviation fields computed for the complete modeled dataset.
These fields are significantly different from those for the calibration period alone because of the violation of a standard EM assumption that missing values are missing at random in typical paleoreconstruction problems; climate data are predominantly missing in the preinstrumental period when the mean climate was significantly colder than the mean of the instrumental period. The origin of the standardization sensitivity therefore is not associated specifically with RegEM-Ridge, and more recent attempts to regularize the EM algorithm using truncated total least squares could theoretically also be susceptible to the problems affecting RegEM-Ridge. Nevertheless, the principal failure of RegEM-Ridge arises because of a poor initial estimate of the mean field, and therefore leaves open the possibility that alternative methods may perform better.


Author(s):  
Peter Mitic

A black-box optimization problem is considered, in which the function to be optimized can only be expressed in terms of a complicated stochastic algorithm that takes a long time to evaluate. The value returned is required to be sufficiently near to a target value, and uses data that has a significant noise component. Bayesian Optimization with an underlying Gaussian Process is used as an optimization solution, and its effectiveness is measured in terms of the number of function evaluations required to attain the target. To improve results, a simple modification of the Gaussian Process ‘Lower Confidence Bound’ (LCB) acquisition function is proposed. The expression used for the confidence bound is squared in order to better comply with the target requirement. With this modification, much improved results compared to random selection methods and to other commonly used acquisition functions are obtained.
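One plausible reading of the squared confidence-bound modification can be sketched as follows; the exact expression, the candidates, and the kappa value are assumptions for illustration, not the paper's definition.

```python
# Hedged sketch of a target-seeking LCB-style acquisition function.
# mu and sigma are the GP posterior mean and standard deviation at a
# candidate point, and target is the required function value. The
# standard LCB (mu - kappa * sigma) prefers small predicted values;
# forming the bound on the squared distance to the target instead
# (one plausible reading of the modification) rewards candidates whose
# prediction is close to the target, while sigma still drives exploration.

def squared_lcb(mu, sigma, target, kappa=2.0):
    return (mu - target) ** 2 - kappa * sigma ** 2

# The next evaluation point is the candidate minimising the acquisition;
# (mu, sigma) pairs below are illustrative posterior summaries.
candidates = [(1.0, 0.1), (2.1, 0.1), (3.0, 0.1)]
best = min(candidates, key=lambda ms: squared_lcb(*ms, target=2.0))
```

With equal uncertainties, the candidate whose posterior mean is nearest the target wins, which is the behaviour the target requirement calls for.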


1969 ◽  
Vol 14 (9) ◽  
pp. 470-471
Author(s):  
M. DAVID MERRILL

1972 ◽  
Vol 28 (03) ◽  
pp. 447-456 ◽  
Author(s):  
E. A Murphy ◽  
M. E Francis ◽  
J. F Mustard

Summary: The characteristics of experimental error in measurement of platelet radioactivity have been explored by blind replicate determinations on specimens taken on several days from each of three Walker hounds. Analysis suggests that it is not unreasonable to suppose that the error for each sample is normally distributed; and while there is evidence that the variance is heterogeneous, no systematic relationship has been discovered between the mean and the standard deviation of the determinations on individual samples. Thus, since it would be impracticable for investigators to do replicate determinations as a routine, no improvement over simple unweighted least squares estimation on untransformed data suggests itself.


Author(s):  
A.V. Alekseev

The concept, properties, and features of heterogeneous redundancy in modern complex ergatic systems, including those incorporated into situation centers (SCs), are analyzed. On the basis of the qualimetric paradigm, a generalized analytical model of quality, and of quality optimization by particular, group, summary, and aggregated quality indicators, is justified. Practical ways of implementing the model, and methods for optimizing both the objects that form part of an SC and the SC as a whole, by reducing structural, functional, and other types of redundancy under the obligatory condition that the required level of quality is not reduced, are given. Using the example of the generalized sampling theorem applied to choosing the optimal sampling frequency for a real bandpass signal, the criticality of data redundancy and its significant influence on subsequent data processing in the SC are shown.


2020 ◽  
Vol 1 (2) ◽  
pp. 56-66
Author(s):  
Irma Linda

Background: Early marriages are at high risk of marital failure, poor family quality, young pregnancies at risk of maternal death, and mental unreadiness to sustain a marriage and be responsible parents. Objective: To determine the effect of reproductive health education delivered through peer groups on adolescents' knowledge and perceptions of the appropriate age for marriage. Method: This research used a quasi-experimental method with a one-group pre- and post-test design, conducted from May to September 2018. The statistical analysis used in this study was a paired t-test with a confidence level of 95% (α = 0.05). Results: The mean difference in adolescent knowledge scores between the first and second measurements was 0.50 with a standard deviation of 1.922. The mean difference in adolescent perception scores between the first and second measurements was 4.42 with a standard deviation of 9.611. Conclusion: There was a significant difference between adolescent knowledge on the pretest and posttest measurements (p = 0.002), and a significant difference between adolescent perceptions on the pretest and posttest measurements (p = 0.001). Expanding the facilities for reproductive health education by peer groups among adolescents should be pursued on an ongoing basis at school, in collaboration with local health workers, to help prevent risky pregnancies.
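The paired t-test used in the study can be sketched with its textbook definition: the mean of the pre/post differences divided by its standard error. The score values below are made-up illustrations, not the study's data.

```python
# Illustrative paired t statistic for pre/post measurements on the
# same subjects: t = mean(d) / (sd(d) / sqrt(n)), with n - 1 degrees
# of freedom, where d are the per-subject differences.
import math
import statistics

def paired_t_statistic(pre, post):
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)          # sample standard deviation
    return mean_d / (sd_d / math.sqrt(n))

pre  = [60, 55, 70, 65, 58]   # hypothetical pretest knowledge scores
post = [68, 57, 75, 70, 66]   # hypothetical posttest scores
t = paired_t_statistic(pre, post)
```

In practice one would compare t against the t distribution with n - 1 degrees of freedom (or use a library routine such as SciPy's `ttest_rel`) to obtain the p-value reported in the abstract.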


1988 ◽  
Vol 60 (1) ◽  
pp. 1-29 ◽  
Author(s):  
E. D. Young ◽  
J. M. Robert ◽  
W. P. Shofner

1. The responses of neurons in the ventral cochlear nucleus (VCN) of decerebrate cats are described with regard to their regularity of discharge and latency. Regularity is measured by estimating the mean and standard deviation of interspike intervals as a function of time during responses to short tone bursts (25 ms). This method extends the usual interspike-interval analysis based on interval histograms by allowing the study of temporal changes in regularity during transient responses. The coefficient of variation (CV), equal to the ratio of standard deviation to mean interspike interval, is used as a measure of irregularity. Latency is measured as the mean and standard deviation of the latency of the first spike in response to short tone bursts, with 1.6-ms rise times. 2. The regularity and latency properties of the usual PST histogram response types are shown. Five major PST response type classes are used: chopper, primary-like, onset, onset-C, and unusual. The presence of a prepotential in a unit's action potentials is also noted; a prepotential implies that the unit is recorded from a bushy cell. 3. Units with chopper PST histograms give the most regular discharge. Three varieties of choppers are found. Chop-S units (regular choppers) have CVs less than 0.35 that are approximately constant during the response; chop-S units show no adaptation of instantaneous rate, as measured by the inverse of the mean interspike interval. Chop-T units have CVs greater than 0.35, show an increase in irregularity during the response and show substantial rate adaptation. Chop-U units have CVs greater than 0.35, show a decrease in irregularity during the response, and show a variety of rate adaptation behaviors, including negative adaptation (an increase in rate during a short-tone response). Irregular choppers (chop-T and chop-U units) rarely have CVs greater than 0.5. 
Choppers have the longest latencies of VCN units; all three groups have mean latencies at least 1 ms longer than the shortest auditory nerve (AN) fiber mean latencies. 4. Chopper units are recorded from stellate cells in VCN (35, 42). Our results for chopper units suggest a model for stellate cells in which a regularly firing action potential generator is driven by the summation of the AN inputs to the cell, where the summation is low-pass filtered by the membrane capacitance of the cell.(ABSTRACT TRUNCATED AT 400 WORDS)
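The regularity measure defined above, the coefficient of variation of interspike intervals, can be sketched directly; the interval values below are illustrative, not recorded data.

```python
# Coefficient of variation (CV) of interspike intervals, CV = sd / mean,
# as used above to classify VCN units: chop-S units have CV < 0.35,
# while irregular choppers exceed that bound.
import statistics

def coefficient_of_variation(intervals_ms):
    mean = statistics.mean(intervals_ms)
    sd = statistics.stdev(intervals_ms)
    return sd / mean

regular_isis   = [4.0, 4.1, 3.9, 4.0, 4.2]   # nearly constant intervals (ms)
irregular_isis = [2.0, 6.5, 3.0, 9.0, 4.0]   # widely varying intervals (ms)

cv_reg = coefficient_of_variation(regular_isis)    # well below 0.35
cv_irr = coefficient_of_variation(irregular_isis)  # above 0.35
```

The paper's time-resolved version applies the same ratio within short time bins across repeated tone bursts, so regularity can be tracked during the transient response rather than over the whole spike train.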


Cancers ◽  
2021 ◽  
Vol 13 (10) ◽  
pp. 2421
Author(s):  
Roberta Fusco ◽  
Vincenza Granata ◽  
Mauro Mattace Raso ◽  
Paolo Vallone ◽  
Alessandro Pasquale De Rosa ◽  
...  

Purpose. To combine blood oxygenation level dependent magnetic resonance imaging (BOLD-MRI), dynamic contrast enhanced MRI (DCE-MRI), and diffusion weighted MRI (DW-MRI) in the differentiation of benign and malignant breast lesions. Methods. Thirty-seven pathologically proven breast lesions (11 benign and 21 malignant lesions) were included in this retrospective preliminary study. Pharmacokinetic parameters including Ktrans, kep, ve, and vp were extracted by DCE-MRI; BOLD parameters were estimated from the basal signal S0 and the relaxation rate R2*; and diffusion and perfusion parameters were derived by DW-MRI (pseudo-diffusion coefficient (Dp), perfusion fraction (fp), and tissue diffusivity (Dt)). Correlation coefficients, the Wilcoxon-Mann-Whitney U-test, and receiver operating characteristic (ROC) analysis were calculated, and the area under the ROC curve (AUC) was obtained. Moreover, pattern recognition approaches (linear discriminant analysis and decision trees) with a balancing technique and a leave-one-out cross-validation approach were considered. Results. R2* and D had a significant negative correlation (−0.57). The mean, standard deviation, Skewness, and Kurtosis values of R2* did not differ significantly between benign and malignant lesions (p > 0.05), as confirmed by the 'poor' diagnostic value in the ROC analysis. Among the DW-MRI derived parameters, in the univariate analysis the standard deviation of D and the Skewness and Kurtosis values of D* discriminated significantly between benign and malignant lesions, and the best univariate result was obtained by the Skewness of D*, with an AUC of 82.9% (p-value = 0.02). Significant results were also obtained for the mean value of Ktrans; the mean, standard deviation, and Skewness of kep; and the mean, Skewness, and Kurtosis of ve. The best AUC among the DCE-MRI extracted parameters was reached by the mean value of kep and was equal to 80.0%.
The best diagnostic performance in the discrimination of benign and malignant lesions was obtained in the multivariate analysis considering the DCE-MRI parameters alone, with an AUC = 0.91 when the balancing technique was considered. Conclusions. Our results suggest that the combined use of DCE-MRI, DW-MRI, and/or BOLD-MRI does not provide a dramatic improvement compared to the use of DCE-MRI features alone in the classification of breast lesions. However, an interesting result was the negative correlation between R2* and D.
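The AUC values quoted above can be illustrated with the rank (Mann-Whitney) formulation of the area under the ROC curve: the probability that a randomly chosen malignant lesion scores higher than a randomly chosen benign one. The score lists below are illustrative, not the study's measurements.

```python
# AUC via the Mann-Whitney formulation: fraction of benign/malignant
# pairs in which the malignant lesion's score is higher, counting
# ties as half a win.

def auc(benign_scores, malignant_scores):
    pairs = 0
    wins = 0.0
    for b in benign_scores:
        for m in malignant_scores:
            pairs += 1
            if m > b:
                wins += 1.0
            elif m == b:
                wins += 0.5
    return wins / pairs

# Hypothetical per-lesion parameter values (e.g. a kep summary statistic):
benign    = [0.1, 0.2, 0.4]
malignant = [0.3, 0.5, 0.6]
score = auc(benign, malignant)
```

An AUC of 0.5 corresponds to chance-level discrimination, which is why the R2* histogram features with AUCs near that level were labelled 'poor'.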


2021 ◽  
Vol 9 (6) ◽  
pp. 585
Author(s):  
Minghao Wu ◽  
Leen De Vos ◽  
Carlos Emilio Arboleda Chavez ◽  
Vasiliki Stratigaki ◽  
Maximilian Streicher ◽  
...  

The present work introduces an analysis of the measurement and model effects that exist in monopile scour protection experiments with repeated small-scale tests. The erosion damage is calculated using the three-dimensional global damage number S3D and the subarea damage number S3D,i. Results show that the standard deviation of the global damage number is σ(S3D) = 0.257, approximately 20% of the mean S3D, and that the standard deviation of the subarea damage number is σ(S3D,i) = 0.42, which can be up to 33% of the mean S3D. The irreproducible maximum wave height, chaotic flow field, and non-repeatable armour layer construction are regarded as the main reasons for the occurrence of strong model effects. The measurement effects are limited to σ(S3D) = 0.039 and σ(S3D,i) = 0.083, which are minor compared to the model effects.
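The repeatability figures above amount to the standard deviation of the damage number across repeated tests expressed relative to its mean, which can be sketched as follows; the S3D values are illustrative, not data from the experiments.

```python
# Relative spread of a damage number across repeated tests:
# stdev / mean, comparable to the ~20% of mean S3D reported above.
import statistics

def relative_std(values):
    return statistics.stdev(values) / statistics.mean(values)

s3d_runs = [1.0, 1.3, 1.5, 1.1, 1.5]   # hypothetical repeated-test S3D values
spread = relative_std(s3d_runs)
```

Computing the same ratio for the measurement-only repeats versus the full repeated tests is what separates the (small) measurement effects from the (dominant) model effects.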

