stopping criterion
Recently Published Documents

TOTAL DOCUMENTS: 300 (FIVE YEARS: 67)
H-INDEX: 16 (FIVE YEARS: 3)

2022 ◽  
Author(s):  
Lázaro Lugo ◽  
Carlos Segura ◽  
Gara Miranda

Abstract The Linear Ordering Problem (LOP) is a popular NP-hard combinatorial optimization problem with many practical applications that may require the use of large instances. The Linear Ordering Library (LOLIB) gathers a set of standard benchmarks widely used to validate LOP solvers. Among them, xLOLIB2 collects some of the largest and most challenging instances in the current literature. In this work, we present new best-known solutions for each of the 200 complex instances that comprise xLOLIB2. Moreover, the proposal devised in this research matches all current best-known solutions in the remaining LOLIB instances and improves on them in another 93 of the 485 cases, meaning that important advances in quality and robustness are attained. This advance in the field of the LOP has been made possible by the development of a novel Memetic Algorithm (MA) designed with some of the weaknesses of state-of-the-art LOP solvers in mind. One of the keys to its success is that the proposal allows a gradual shift from exploration to exploitation, achieved by taking the stopping criterion and the elapsed execution time into account to alter the internal decisions taken by the optimizer. The novel diversity-aware proposal is called the Memetic Algorithm with Explicit Diversity Management (MA-EDM), and extensive comparisons against state-of-the-art techniques provide insights into the reasons for its superiority.
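The abstract does not give implementation details, but the idea of tying diversity management to the stopping criterion can be sketched roughly as follows. This is a minimal illustration only: the linear distance-threshold schedule and the function names are assumptions, not the authors' actual MA-EDM design.

```python
import time

def distance_threshold(d_init, elapsed, time_limit):
    """Assumed linear schedule: the required distance between survivors shrinks
    from d_init to 0 as the run approaches its stopping criterion, moving the
    search gradually from exploration to exploitation."""
    return d_init * max(0.0, 1.0 - elapsed / time_limit)

def diversity_aware_replacement(population, offspring, distance, d_init,
                                start_time, time_limit, pop_size):
    """Hypothetical survivor selection: greedily keep the best individuals whose
    distance to already-selected survivors exceeds the current threshold.
    Individuals are (fitness, solution) pairs; higher fitness is better."""
    threshold = distance_threshold(d_init, time.time() - start_time, time_limit)
    candidates = sorted(population + offspring, key=lambda ind: ind[0], reverse=True)
    survivors = []
    for ind in candidates:
        if len(survivors) == pop_size:
            break
        if all(distance(ind[1], s[1]) >= threshold for s in survivors):
            survivors.append(ind)
    # If the diversity requirement leaves slots unfilled, pad with the best remaining.
    for ind in candidates:
        if len(survivors) == pop_size:
            break
        if ind not in survivors:
            survivors.append(ind)
    return survivors
```

The intent of the shrinking threshold is that survivors must be spread out early in the run, while near the time limit selection is driven almost purely by fitness.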


Author(s):  
Niels Koester ◽  
Oliver Koenig ◽  
Alexander Thaler ◽  
Oszkár Bíró

Purpose The Cauer ladder network (CLN) model order reduction (MOR) method is applied to an industrial inductor. This paper aims to analyse the influence of different meshes and of their parameters on the CLN method. Design/methodology/approach The industrial inductor is simulated with the CLN method for different meshes. Meshes that resolve the skin effect are compared with equidistant meshes. The inductor is also simulated with the eddy current finite element method (ECFEM) for frequencies from 1 kHz to 1 MHz. The solution of the CLN method is compared with the ECFEM solutions for the current density in the conductor and the total impedance. Findings The increase of resistance resulting from the skin effect can be modelled with the CLN method using a uniform mesh with elements much larger than the skin depth. Meshes that resolve the skin depth are only needed if the electromagnetic fields have to be reconstructed. Additionally, the convergence of the impedance is used to define a stopping criterion without the need for a benchmark solution. Originality/value The work shows that the CLN method can generate a network capable of mimicking the increase of resistance that accompanies the skin effect without using a mesh that resolves the skin depth. In addition, the proposed stopping criterion makes it possible to use the CLN method as an a priori MOR technique.
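The impedance-based stopping rule mentioned in the findings can be illustrated with a short, hedged sketch: ladder stages are added until the relative change of the terminal impedance falls below a tolerance, so no benchmark solution is required. The function `compute_impedance` is a hypothetical placeholder for the impedance of an n-stage Cauer ladder; the tolerance and the single reference frequency are illustrative choices, not the paper's exact criterion.

```python
def build_ladder_until_converged(compute_impedance, omega, tol=1e-4, max_stages=100):
    """Add Cauer ladder stages until the terminal impedance Z_n(j*omega)
    stops changing, without reference to any benchmark solution."""
    z_prev = compute_impedance(1, omega)
    for n in range(2, max_stages + 1):
        z = compute_impedance(n, omega)
        if abs(z - z_prev) <= tol * abs(z):   # relative change of the impedance
            return n, z                        # converged: n stages suffice
        z_prev = z
    return max_stages, z_prev                  # fall back to the largest model
```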


2021 ◽  
Author(s):  
Mirka Henninger ◽  
Rudolf Debelak ◽  
Carolin Strobl

To detect differential item functioning (DIF), Rasch trees search for optimal split points in covariates and identify subgroups of respondents in a data-driven way. To determine whether and in which covariate a split should be performed, Rasch trees use statistical significance tests. Consequently, Rasch trees are more likely to label small DIF effects as significant in larger samples. This leads to larger trees, which split the sample into more subgroups. What would be more desirable is an approach driven by effect size rather than sample size. To achieve this, we suggest implementing an additional stopping criterion: the popular ETS classification scheme based on the Mantel-Haenszel odds ratio. This criterion helps us to evaluate whether a split in a Rasch tree is based on a substantial or an ignorable difference in item parameters, and it allows the Rasch tree to stop growing when DIF between the identified subgroups is small. Furthermore, it supports identifying DIF items and quantifying DIF effect sizes in each split. Based on simulation results, we conclude that the Mantel-Haenszel effect size further reduces unnecessary splits in Rasch trees under the null hypothesis, or when the sample size is large but DIF effects are negligible. To make the stopping criterion easy to use for applied researchers, we have implemented the procedure in the statistical software R. Finally, we discuss how DIF effects between different nodes in a Rasch tree can be interpreted and emphasize the importance of purification strategies for the Mantel-Haenszel procedure for both tree stopping and DIF item classification.
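The authors implement their procedure in R; the Python sketch below only illustrates the underlying ETS classification rule, which transforms the Mantel-Haenszel common odds ratio to the delta metric, Delta_MH = -2.35 * ln(alpha_MH), and labels DIF as negligible (A), moderate (B), or large (C). The handling of the significance tests here is simplified and supplied by the caller; it is an assumption, not the authors' exact implementation.

```python
import math

def ets_dif_class(alpha_mh, sig_vs_zero=True, sig_vs_one=True):
    """Classify DIF for one item from the Mantel-Haenszel common odds ratio.

    alpha_mh    : MH common odds ratio for the item.
    sig_vs_zero : whether Delta_MH differs significantly from 0.
    sig_vs_one  : whether |Delta_MH| is significantly greater than 1.

    Returns 'A' (negligible), 'B' (moderate) or 'C' (large), following the
    usual ETS convention.
    """
    delta_mh = -2.35 * math.log(alpha_mh)      # transform to the ETS delta metric
    if abs(delta_mh) < 1.0 or not sig_vs_zero:
        return "A"                             # negligible DIF: no further split needed
    if abs(delta_mh) >= 1.5 and sig_vs_one:
        return "C"                             # large DIF
    return "B"                                 # moderate DIF
```

In a Rasch tree, a candidate split whose items all fall into category A would then be treated as ignorable, so the tree stops growing at that node.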


Symmetry ◽  
2021 ◽  
Vol 13 (11) ◽  
pp. 2175
Author(s):  
Miguel Martín ◽  
Antonio Jiménez-Martín ◽  
Alfonso Mateos ◽  
Josefa Z. Hernández

A/B testing is used in digital contexts both to offer a more personalized service and to optimize the e-commerce purchasing process. A personalized service provides customers with the fastest possible access to the contents that they are most likely to use. An optimized e-commerce purchasing process reduces customer effort during online purchasing and ensures that the largest possible number of customers place their order. The most widespread A/B testing method is to implement the equivalent of randomized controlled trials (RCTs). Recently, however, some companies and solutions have addressed this experimentation process as a multi-armed bandit (MAB), known in the A/B testing market as dynamic traffic distribution. A complementary technique used to optimize the performance of A/B testing is to improve the experiment stopping criterion. In this paper, we propose an adaptation of A/B testing to account for possibilistic reward (PR) methods, together with a new stopping criterion, also based on PR methods, to be used for both classical A/B testing and A/B testing based on MAB algorithms. A comparative numerical analysis based on the simulation of real scenarios is used to analyze the performance of the proposed adaptations in both Bernoulli and non-Bernoulli environments. This analysis shows that the possibilistic reward method PR3 produced the lowest mean cumulative regret in non-Bernoulli environments, with a high confidence level and high stability, as demonstrated by low standard deviations. PR3 behaves exactly like Thompson sampling in Bernoulli environments. The conclusion is that PR3 can be used efficiently in both environments, in combination with the value-remaining stopping criterion in Bernoulli environments and the PR3-bounds stopping criterion in non-Bernoulli environments.
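For readers unfamiliar with dynamic traffic distribution, the sketch below shows a Bernoulli A/B test run as a multi-armed bandit with Thompson sampling and a value-remaining style stopping check. It does not reproduce the PR3 method or the PR3-bounds criterion from the paper; the Beta posteriors, thresholds, and simulated conversion rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def thompson_choose(successes, failures):
    """Draw one sample from each arm's Beta posterior and pick the best arm."""
    samples = rng.beta(successes + 1, failures + 1)
    return int(np.argmax(samples))

def value_remaining_stop(successes, failures, n_draws=2000,
                         quantile=0.95, threshold=0.01):
    """Stop when the 95th percentile of the 'value remaining' (relative lift a
    competing arm might still have over the current leader) drops below 1%."""
    draws = rng.beta(successes + 1, failures + 1, size=(n_draws, len(successes)))
    leader = int(np.argmax(successes / np.maximum(successes + failures, 1)))
    remaining = (draws.max(axis=1) - draws[:, leader]) / draws[:, leader]
    return np.quantile(remaining, quantile) < threshold

# Hypothetical experiment loop with two variants and simulated conversions.
true_rates = np.array([0.10, 0.12])
s, f = np.zeros(2), np.zeros(2)
for visitor in range(100_000):
    arm = thompson_choose(s, f)
    reward = rng.random() < true_rates[arm]
    s[arm] += reward
    f[arm] += 1 - reward
    if visitor > 1000 and visitor % 500 == 0 and value_remaining_stop(s, f):
        break   # stopping criterion met: end the experiment early
```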


Entropy ◽  
2021 ◽  
Vol 23 (11) ◽  
pp. 1392
Author(s):  
Zhiping Xu ◽  
Lin Wang ◽  
Shaohua Hong

In this paper, a joint early stopping criterion based on cross entropy (CE), named the joint CE criterion, is presented for double-protograph low-density parity-check (DP-LDPC) code-based joint source-channel coding (JSCC) systems for image transmission, in order to reduce the decoding complexity and decrease the decoding delay. The proposed early stopping criterion uses the CE computed from the output log-likelihood ratios (LLRs) of the joint decoder. Moreover, a special phenomenon in the evolution of the CE, named asymmetry oscillation-like convergence (AOLC), is uncovered in both the source decoder and the channel decoder of this system, and the proposed joint CE criterion reduces the impact of the AOLC phenomenon. Compared with its counterparts, the joint CE criterion performs well in terms of decoding complexity and decoding latency in the low-to-moderate signal-to-noise ratio (SNR) region and achieves a performance improvement in the high SNR region with appropriate parameters, which also demonstrates that a system with the joint CE criterion is a low-latency and low-power system.
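The joint CE criterion itself is specific to the DP-LDPC joint decoder, but the underlying idea of a cross-entropy test between the LLRs of consecutive iterations can be sketched in the classical Hagenauer-style form below. The proxy T(i) = sum |L_i - L_{i-1}|^2 / exp(|L_i|) and the threshold relative to the first iteration follow the standard formulation and are not necessarily the exact quantities used in the paper.

```python
import numpy as np

def cross_entropy_proxy(llr_curr, llr_prev):
    """Approximate cross entropy between the output LLRs of two consecutive
    decoder iterations: T(i) = sum |L_i - L_{i-1}|^2 / exp(|L_i|)."""
    delta = llr_curr - llr_prev
    return float(np.sum(np.abs(delta) ** 2 / np.exp(np.abs(llr_curr))))

def decode_with_ce_stop(decode_iteration, llr_init, max_iter=50, eps=1e-3):
    """Run decoder iterations until the CE proxy falls below eps * T(1)."""
    llr_prev = llr_init
    t1 = None
    for it in range(1, max_iter + 1):
        llr_curr = decode_iteration(llr_prev)   # one decoding iteration (placeholder)
        t = cross_entropy_proxy(llr_curr, llr_prev)
        if t1 is None:
            t1 = t
        elif t < eps * t1:
            return llr_curr, it                 # early termination
        llr_prev = llr_curr
    return llr_prev, max_iter
```

A smaller eps trades extra iterations for a lower risk of stopping before the decoder has converged.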


Author(s):  
Shoji Itoh

Abstract In this paper, improved algorithms are proposed for preconditioned bi-Lanczos-type methods with residual norm minimization for the stable solution of systems of linear equations. In particular, preconditioned algorithms pertaining to the bi-conjugate gradient stabilized method (BiCGStab) and the generalized product-type method based on the BiCG (GPBiCG) have been improved. These algorithms are more stable compared to conventional alternatives. Further, a stopping criterion changeover is proposed for use with these improved algorithms. This results in higher accuracy (lower true relative error) compared to the case where no changeover is done. Numerical results confirm the improvements with respect to the preconditioned BiCGStab, the preconditioned GPBiCG, and stopping criterion changeover. These improvements could potentially be applied to other preconditioned algorithms based on bi-Lanczos-type methods.
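The stopping-criterion changeover can be illustrated on a plain, unpreconditioned BiCGStab loop: the cheap, recursively updated residual is tested first, and only once it meets the tolerance is the explicitly computed residual ||b - Ax|| checked before the iteration is allowed to stop. This is a minimal sketch of the general idea, not the improved preconditioned algorithms proposed in the paper.

```python
import numpy as np

def bicgstab_with_changeover(A, b, x0=None, tol=1e-10, max_iter=1000):
    """Unpreconditioned BiCGStab; stop only when the *true* relative residual
    ||b - A x|| / ||b|| meets tol, checked after the recursive residual does."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.copy()
    r = b - A @ x
    r_hat = r.copy()
    rho_old = alpha = omega = 1.0
    v = p = np.zeros(n)
    b_norm = np.linalg.norm(b)
    for k in range(max_iter):
        rho = r_hat @ r
        beta = (rho / rho_old) * (alpha / omega)
        p = r + beta * (p - omega * v)
        v = A @ p
        alpha = rho / (r_hat @ v)
        s = r - alpha * v
        t = A @ s
        omega = (t @ s) / (t @ t)
        x = x + alpha * p + omega * s
        r = s - omega * t
        rho_old = rho
        if np.linalg.norm(r) / b_norm <= tol:              # recursive residual test
            true_res = np.linalg.norm(b - A @ x) / b_norm  # changeover: explicit residual
            if true_res <= tol:
                return x, k + 1
    return x, max_iter
```

The changeover matters because, in finite precision, the recursively updated residual can drift away from the true residual, so stopping on it alone may leave a larger true relative error than the tolerance suggests.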


Author(s):  
Roslina Mohamad ◽  
Mohamad Yusuf Mat Nasir ◽  
Nuzli Mohamad Anas

One of the most often-used stopping criteria is the cross-entropy stopping criterion (CESC). The CESC can stop turbo decoder iterations early by measuring the improvement in mutual information while maintaining bit error rate (BER) performance. Most research on stopping criteria for iterative turbo decoding has used low-order modulation schemes, such as binary phase-shift keying. However, high-speed networks require higher-order modulation to transfer data at high rates. Hence, higher-order modulation needs to be combined with the CESC to support such data rates. Therefore, the present paper investigated and analysed the effects of the CESC and quadrature amplitude modulation (QAM) on iterative turbo decoding. Three thresholds were simulated and tested under four conditions: different code rates, different QAM formats, different code generators, and different frame sizes. The results revealed that, in most situations, the CESC is suitable only when the signal-to-noise ratio (SNR) is high, because only then does it significantly reduce the average iteration number (AIN) while maintaining the BER. At high SNR, the CESC can terminate early and save more than 40% of the AIN compared with a fixed stopping criterion, whereas at low SNR it fails to terminate early, resulting in the maximum AIN.


Author(s):  
Jiyeon Ki ◽  
Areum Lim ◽  
Daewon Paeng ◽  
Sangjoon Park
