benchmark test
Recently Published Documents

TOTAL DOCUMENTS: 284 (last five years: 61)
H-INDEX: 23 (last five years: 5)

2021 ◽ Vol 27 (12) ◽ pp. 2719-2745
Author(s): Mikhail V. POMAZANOV

Subject. This article addresses the validation of the consistency of forecasts produced by rating-based models. Objectives. The article aims to provide developers and validators of rating-based models with a practical, fundamental test for benchmarking the default probability estimates produced by the models used in a rating system. Methods. The study relies on the classical interval approach to statistical hypothesis testing, applied to the calibration of rating systems. Results. In addition to the generally accepted tests of the correspondence between the predicted default probabilities of credit risk objects and the historically realized values, the article proposes a new statistical test that corrects the shortcomings of the accepted ones and focuses on "diagnosing" the consistency of the discrimination of objects achieved by the rating model. Examples are given of identifying the reasons for a negative test result and the negative consequences for lending if the current settings of the rating model are retained. Beyond the bias in the estimate of the overall default frequency in the loan portfolio, the proposed method makes it possible to objectively reveal inadequate discrimination between borrowers by a calibrated rating model, that is, to diagnose the "disease" of the rating model. Conclusions and Relevance. The new practical benchmark test makes it possible to reject, at a given confidence level and with the available historical data, the hypothesis that the rating model's default probability estimates are consistent. The test has the advantage of practical interpretability: from its results one can infer the direction in which the model should be corrected. The proposed test can be used in a bank's internal validation of its own rating models, which the Bank of Russia requires for internal ratings-based approaches.
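To make the starting point concrete, the sketch below illustrates one of the "generally accepted" calibration checks the article builds on: a per-grade, two-sided binomial test of realized defaults against the predicted default probability. It is not the new test proposed in the article, and the grades, PD values, and counts are hypothetical.

```python
# Illustrative calibration check (not the article's proposed test):
# two-sided binomial test of the realized default count in each rating
# grade against the default probability predicted by the rating model.
from scipy.stats import binomtest

# Hypothetical per-grade data: predicted PD, number of borrowers, observed defaults.
grades = {
    "A": {"pd": 0.005, "n": 1200, "defaults": 4},
    "B": {"pd": 0.020, "n": 800,  "defaults": 25},
    "C": {"pd": 0.080, "n": 300,  "defaults": 35},
}

alpha = 0.05  # significance level of the test
for grade, g in grades.items():
    result = binomtest(g["defaults"], g["n"], g["pd"], alternative="two-sided")
    verdict = "reject calibration" if result.pvalue < alpha else "consistent"
    print(f"grade {grade}: p-value = {result.pvalue:.4f} -> {verdict}")
```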


Author(s): Olav Schiemann ◽ Caspar A. Heubach ◽ Dinar Abdullin ◽ Katrin Ackermann ◽ Mykhailo Azarkh ◽ ...
Keyword(s):

2021 ◽ Vol 155 (11) ◽ pp. 114102
Author(s): Subrata Jana ◽ Hemanadhan Myneni ◽ Szymon Śmiga ◽ Lucian A. Constantin ◽ Prasanjit Samal

2021
Author(s): John D. Bartlett ◽ Duane Storti

Abstract The rapid development of parallelization technology over recent decades has provided a promising avenue for accelerating meshfree simulation methods. One such method, peridynamics, is particularly well suited for parallelization because of the simplicity of the operations that must occur at each material point. However, while MPI-based parallelization (Message Passing Interface, a method for CPU-based parallelization) of peridynamic problems is commonplace, GPU parallelization of peridynamics has received far less attention. Although GPU technology may once have been an inferior option to MPI parallelization for peridynamics, modern GPU cards are more than capable of handling substantially sized peridynamic problems. This paper presents the parallelization of the peridynamic method for single-card GPU computing, providing a schematic for a compact parallel approach. The resulting method is tested with CUDA on an NVIDIA Tesla P100 card with 16 GB of memory. The per-node memory requirements of each data structure used are evaluated, as are the per-node execution times of each operation in a million-node benchmark test. This setup is shown to provide speedup factors of over 200 for problems of up to several million nodes, indicating that such a GPU is more than adequate for single-card parallelization of the peridynamic method.
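To illustrate why the per-node operations map so naturally onto one GPU thread per node, the serial Python sketch below evaluates a bond-based peridynamic internal force node by node; each node's force depends only on its neighbors within the horizon. The 1D discretization, bond constant, and displacement field are illustrative assumptions, not values from the paper, whose actual implementation is in CUDA.

```python
# Minimal serial sketch of the per-node operation in bond-based peridynamics.
# On a GPU, the outer loop over nodes maps to one thread per node.
import numpy as np

n_nodes = 1000
dx      = 1.0e-3          # grid spacing [m]
horizon = 3.015 * dx      # peridynamic horizon
c       = 1.0e18          # bond micro-modulus (assumed value)
volume  = dx              # nodal "volume" in 1D

x = np.arange(n_nodes) * dx                   # reference positions
u = 1.0e-6 * np.sin(2 * np.pi * x / x[-1])    # assumed displacement field

def internal_force(i):
    """Force density at node i from all bonds inside its horizon."""
    f = 0.0
    for j in range(n_nodes):                  # each i is independent of the others
        xi = x[j] - x[i]
        if i == j or abs(xi) > horizon:
            continue
        eta = u[j] - u[i]
        stretch = (abs(xi + eta) - abs(xi)) / abs(xi)
        f += c * stretch * np.sign(xi + eta) * volume
    return f

forces = np.array([internal_force(i) for i in range(n_nodes)])
print(forces[:5])
```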


Processes ◽ 2021 ◽ Vol 9 (8) ◽ pp. 1418
Author(s): Olympia Roeva ◽ Dafina Zoteva ◽ Velislava Lyubenova

In this paper, the artificial bee colony (ABC) algorithm is hybridized with the genetic algorithm (GA) for a model parameter identification problem. When dealing with real-world and large-scale problems, it becomes evident that relying on a single metaheuristic algorithm is somewhat restrictive. A skilled combination of metaheuristics or other optimization techniques, a so-called hybrid metaheuristic, can provide more efficient behavior and greater flexibility. Hybrid metaheuristics combine the advantages of one algorithm with the strengths of another. ABC, based on the foraging behavior of honey bees, and GA, based on the mechanics of natural selection, are among the most efficient biologically inspired population-based algorithms. The performance of the proposed ABC-GA hybrid algorithm is examined on a set of classic benchmark test functions. To demonstrate the effectiveness of ABC-GA on a real-world problem, parameter identification of an Escherichia coli MC4110 fed-batch cultivation process model is considered. The computational results of the designed algorithm are compared with those of hybrid algorithms built from other biologically inspired techniques, ant colony optimization (ACO) and the firefly algorithm (FA), namely ACO-GA, GA-ACO, and ACO-FA. The algorithms are applied to the same problems: the set of benchmark test functions and the real nonlinear optimization problem. Taking into account both overall search capability and computational efficiency, the results clearly show that the proposed ABC-GA algorithm outperforms the considered hybrid algorithms.
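A minimal sketch of this kind of ABC-GA hybridization is shown below on the sphere benchmark test function; the exact hybridization scheme, parameter settings, and the E. coli model identification problem of the paper are not reproduced here, so this is only an illustration of alternating ABC-style neighbourhood search with GA-style crossover and mutation.

```python
# Sketch of an ABC-GA hybrid on a classic benchmark test function (sphere).
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                      # classic benchmark test function
    return float(np.sum(x ** 2))

dim, pop_size, iters = 10, 30, 200
lb, ub = -5.0, 5.0
pop = rng.uniform(lb, ub, (pop_size, dim))
fit = np.array([sphere(p) for p in pop])

for it in range(iters):
    # ABC-style employed-bee phase: perturb one coordinate toward a random partner.
    for i in range(pop_size):
        k = rng.integers(pop_size)
        j = rng.integers(dim)
        cand = pop[i].copy()
        cand[j] += rng.uniform(-1, 1) * (pop[i][j] - pop[k][j])
        cand = np.clip(cand, lb, ub)
        if sphere(cand) < fit[i]:           # greedy selection
            pop[i], fit[i] = cand, sphere(cand)

    # GA phase: uniform crossover of the two best individuals plus mutation.
    order = np.argsort(fit)
    p1, p2 = pop[order[0]], pop[order[1]]
    mask = rng.random(dim) < 0.5
    child = np.where(mask, p1, p2)
    child += rng.normal(0.0, 0.1, dim) * (rng.random(dim) < 0.1)  # sparse mutation
    child = np.clip(child, lb, ub)
    worst = order[-1]
    if sphere(child) < fit[worst]:          # replace the worst individual
        pop[worst], fit[worst] = child, sphere(child)

print("best objective:", fit.min())
```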


Crystals ◽ 2021 ◽ Vol 11 (8) ◽ pp. 916
Author(s): Dili Shen ◽ Wuyi Ming ◽ Xinggui Ren ◽ Zhuobin Xie ◽ Yong Zhang ◽ ...

The Lévy flight random walk is one of the key components used to update individuals in the cuckoo search (CS) algorithm. The standard CS algorithm adopts a constant scale factor for this random walk. This paper proposes an improved beta distribution cuckoo search (IBCS) algorithm that modifies this factor. Locally, the proposed algorithm lets the scale factor of the Lévy flight step size follow a beta distribution during the evolutionary process; globally, the scale factor follows an exponential decay trend. The proposed algorithm thus makes full use of the advantages of both improvement strategies. The test results show that the proposed strategy is better than the standard CS algorithm and variants based on a single improvement strategy, such as the improved CS (ICS) and beta distribution CS (BCS) algorithms. For the six benchmark test functions of 30 dimensions, the average rankings of the CS, ICS, BCS, and IBCS algorithms are 3.67, 2.67, 1.5, and 1.17, respectively. For the six benchmark test functions of 50 dimensions, the average rankings of the CS, ICS, BCS, and IBCS algorithms are 2.83, 2.5, 1.67, and 1.0, respectively. As confirmed by our case study of an electrical discharge machining (EDM) process, the performance of the IBCS algorithm was better than that of the standard CS, ICS, or BCS algorithms. For example, under single-objective optimization of MRR, the number of iterations required by the CS algorithm for the input process parameters, such as discharge current, pulse-on time, pulse-off time, and servo voltage, was roughly twice that of the IBCS algorithm (13 versus 6 iterations). Similarly, the number of iterations required by the BCS algorithm for these parameters was roughly twice that of the IBCS algorithm (17 versus 8 iterations) under single-objective optimization of Ra. The proposed strategy therefore strengthens the CS algorithm's accuracy and convergence speed.
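The step-size idea can be sketched as follows: in the CS update, the otherwise constant scale factor is replaced by a beta-distributed sample damped by an exponential decay over the iterations. The beta shape parameters, decay rate, and Mantegna-style Lévy step below are illustrative assumptions rather than the paper's exact formulation.

```python
# Sketch of a beta-distributed, exponentially decaying scale factor in the
# cuckoo search update x_new = x + alpha * levy_step * (x - x_best).
import numpy as np
from math import gamma, sin, pi

rng = np.random.default_rng(1)

def levy_step(dim, beta=1.5):
    """Lévy-distributed step via Mantegna's algorithm."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def scale_factor(t, t_max, alpha0=0.1, a=2.0, b=5.0, decay=3.0):
    """Beta-distributed scale factor (local) with an exponential decay trend (global)."""
    return alpha0 * np.exp(-decay * t / t_max) * rng.beta(a, b)

# One cuckoo update against the current best nest (values assumed for illustration).
dim, t, t_max = 5, 10, 100
x_best = np.zeros(dim)
x = rng.uniform(-5, 5, dim)
x_new = x + scale_factor(t, t_max) * levy_step(dim) * (x - x_best)
print(x_new)
```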


2021 ◽ Vol 15
Author(s): Weishi Li ◽ Kuanting Wang ◽ Shiaofen Fang

Background: Selective laser melting is the best-established additive manufacturing technology for producing high-quality metal parts. However, the technology has yet to achieve widespread acceptance, especially in critical applications, because a thorough understanding of the process is still lacking, even though several benchmark test artifacts have been developed to characterize the performance of selective laser melting machines. Objective: The objective of this paper is to inspire new designs of benchmark test artifacts that improve understanding of the selective laser melting process and promote acceptance of the technology. Method: The existing benchmark test artifacts for selective laser melting are analyzed comparatively, and design guidelines are discussed. Results: The modular approach should still be adopted when designing new benchmark test artifacts, and task-specific test artifacts may need further consideration to validate machine performance for critical applications. For applications requiring high dimensional accuracy and high surface quality, what should be evaluated after the artifact is measured is whether the design model is contained within the manufactured artifact, rather than conformance to the design specifications. Conclusion: The benchmark test artifact for selective laser melting is still under development, and a breakthrough in measuring technology for internal and/or inaccessible features would be beneficial for understanding the technology.


Mathematics ◽ 2021 ◽ Vol 9 (13) ◽ pp. 1477
Author(s): Chun-Yao Lee ◽ Guang-Lin Zhuo

This paper proposes a hybrid whale optimization algorithm (WOA), the genetic and thermal exchange optimization-based whale optimization algorithm (GWOA-TEO), to enhance global optimization capability. First, a high-quality initial population is generated to improve the performance of GWOA-TEO. Then, thermal exchange optimization (TEO) is applied to improve exploitation performance. Next, a memory is introduced to store historical best-so-far solutions, achieving higher performance without adding computational cost. Finally, a memory-based crossover operator and a memory-based position update mechanism for the leading solution are proposed to improve exploration performance. The GWOA-TEO algorithm is compared with five state-of-the-art optimization algorithms on the CEC 2017 benchmark test functions and 8 UCI repository datasets. The statistical results on the CEC 2017 benchmark test functions show that the GWOA-TEO algorithm achieves good accuracy for global optimization. The classification results on the 8 UCI repository datasets also show that the GWOA-TEO algorithm is competitive with the comparison algorithms in recognition rate. The proposed algorithm thus demonstrates excellent performance in solving optimization problems.
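A highly condensed sketch of two of the ingredients listed above, a bounded memory of best-so-far solutions and a memory-based crossover, wrapped around a basic WOA position update, is shown below. The TEO exploitation step and the initialization scheme are omitted, and all parameter values are assumptions, so this is only an illustration of the idea rather than the paper's algorithm.

```python
# Sketch: basic WOA update combined with a best-so-far memory and
# a memory-based crossover (illustrative parameters throughout).
import numpy as np

rng = np.random.default_rng(2)
def sphere(x): return float(np.sum(x ** 2))

dim, pop_size, iters, mem_size = 10, 20, 100, 5
lb, ub = -10.0, 10.0
pop = rng.uniform(lb, ub, (pop_size, dim))
memory = []                                   # best-so-far solutions

for t in range(iters):
    fits = np.array([sphere(p) for p in pop])
    best = pop[int(np.argmin(fits))].copy()
    memory = sorted(memory + [best], key=sphere)[:mem_size]

    a = 2.0 * (1 - t / iters)                 # linearly decreasing WOA parameter
    for i in range(pop_size):
        r = rng.random(dim)
        A, C = 2 * a * r - a, 2 * rng.random(dim)
        if rng.random() < 0.5:                # encircling the current best
            pop[i] = best - A * np.abs(C * best - pop[i])
        else:                                 # spiral update around the best
            l = rng.uniform(-1, 1)
            pop[i] = np.abs(best - pop[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
        # Memory-based crossover: mix coordinates with a stored elite solution.
        elite = memory[rng.integers(len(memory))]
        mask = rng.random(dim) < 0.3
        pop[i] = np.clip(np.where(mask, elite, pop[i]), lb, ub)

print("best objective:", min(sphere(p) for p in pop))
```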

