local algorithms
Recently Published Documents

Total documents: 101 (five years: 16)
H-index: 15 (five years: 0)

2021 · Vol 11 (1) · Author(s): Anna Pietrenko-Dabrowska, Slawomir Koziel

Abstract Simulation-based optimization of geometry parameters is an inherent and important stage of the microwave design process. To ensure reliability, the optimization is normally carried out using full-wave electromagnetic (EM) simulation tools, which entails significant computational overhead. This becomes a serious bottleneck especially when global search is required (e.g., design of miniaturized structures, dimension scaling over broad ranges of operating frequencies, multi-modal problems, etc.). To mitigate this high cost, this paper proposes a novel algorithmic approach to rapid EM-driven global optimization of microwave components. Our methodology incorporates response feature technology and inverse regression metamodels to enable fast identification of promising parameter-space regions and to yield a good-quality initial design, which only needs to be tuned using local routines. The presented technique is illustrated using three microstrip circuits optimized under challenging scenarios and is demonstrated to exhibit global search capability while keeping the cost of the optimization process to only about one hundred EM simulations of the structure at hand on average. Its performance is shown to be superior in efficacy to both local algorithms and nature-inspired global methods.
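The abstract does not include an implementation, but the general workflow it describes (an inverse regression metamodel mapping response features to geometry, whose prediction seeds a local refinement) can be sketched roughly as below. All names here (em_simulate, extract_features) are hypothetical stand-ins for a full-wave EM solver and a feature extractor, and the linear inverse model is only a minimal assumption, not the authors' metamodel.

```python
# Minimal sketch of the general idea (not the authors' exact method):
# 1) sample a few designs and extract response features from EM simulations,
# 2) fit an inverse regression metamodel mapping target features -> geometry,
# 3) use its prediction as the starting point for a cheap local refinement.
import numpy as np
from sklearn.linear_model import LinearRegression
from scipy.optimize import minimize

def em_simulate(x):
    """Placeholder for a full-wave EM solver; here a cheap analytic stand-in."""
    # "feature": resonant frequency of a fictitious structure (GHz)
    return 2.0 + 0.8 * x[0] - 0.3 * x[1]

def extract_features(x):
    return np.array([em_simulate(x)])

# Step 1: coarse sampling of the geometry space (two parameters)
rng = np.random.default_rng(0)
X = rng.uniform([0.5, 0.2], [2.5, 1.5], size=(20, 2))
F = np.array([extract_features(x) for x in X])

# Step 2: inverse regression metamodel, features -> geometry
inverse_model = LinearRegression().fit(F, X)

# Step 3: predict a good initial design for the target feature, then refine locally
f_target = np.array([[3.1]])                      # target resonant frequency
x0 = inverse_model.predict(f_target)[0]
res = minimize(lambda x: (em_simulate(x) - 3.1) ** 2, x0, method="Nelder-Mead")
print("initial guess:", x0, "refined design:", res.x)
```

In the real workflow the local stage would call the EM solver directly, so the quality of the initial design predicted by the inverse model is what keeps the total simulation count low.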


Author(s): A. V. Tikhonravov, Iu. S. Lagutin, A. A. Lagutina, D. V. Lukyanenko, ...

The reverse engineering problem of determining the layer thicknesses of deposited optical coatings from on-line monochromatic measurements is considered. To solve this inverse problem, non-local algorithms are proposed that use all the data accumulated during the deposition process. The accuracy of the proposed algorithms in solving the inverse problem is compared in the presence of random and systematic errors. It is shown that when the measured data contain only random errors, the best accuracy is provided by the algorithm based on minimizing the discrepancy functional. In the case of systematic errors, the advantage of one of the algorithms based on minimizing the variance functionals is demonstrated. Keywords: inverse problems, reverse engineering, optical coatings, thin films.
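The discrepancy-functional idea can be illustrated as a nonlinear least-squares fit of the layer thicknesses against all accumulated monitoring data. The sketch below uses a toy forward model (model_signal is a hypothetical stand-in, not real thin-film optics) only to show the structure of the minimization.

```python
# Minimal sketch of the discrepancy-functional approach: find layer thicknesses d
# minimizing the sum of squared differences between modeled and measured
# monochromatic signals accumulated over the whole deposition run.
import numpy as np
from scipy.optimize import least_squares

def model_signal(d, t_grid):
    """Hypothetical stand-in for the monochromatic monitoring signal."""
    total = np.cumsum(d)                      # accumulated optical thickness
    return np.array([np.cos(0.5 * total[min(int(t), len(d) - 1)]) for t in t_grid])

# Synthetic "measured" data contaminated by random errors
d_true = np.array([1.2, 0.8, 1.5])
t_grid = np.linspace(0, 2.999, 30)
measured = model_signal(d_true, t_grid) + 0.01 * np.random.default_rng(1).normal(size=30)

# Discrepancy functional: sum of squared residuals over all accumulated data
def residuals(d):
    return model_signal(d, t_grid) - measured

sol = least_squares(residuals, x0=np.ones(3), bounds=(0.1, 3.0))
print("recovered thicknesses:", sol.x)
```

The variance-functional algorithms mentioned in the abstract replace this objective with a different functional of the residuals, which is what gives them an advantage under systematic errors.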


Author(s): Chris Sherlock, Anthony Lee

Abstract A delayed-acceptance version of a Metropolis–Hastings algorithm can be useful for Bayesian inference when it is computationally expensive to calculate the true posterior but a computationally cheap approximation is available; the delayed-acceptance kernel targets the same posterior as its associated “parent” Metropolis–Hastings kernel. Although the asymptotic variance of the ergodic average of any functional of the delayed-acceptance chain cannot be less than that obtained using its parent, the average computational time per iteration can be much smaller, so for a given computational budget the delayed-acceptance kernel can be more efficient. When the asymptotic variances of the ergodic averages of all $L^2$ functionals of the chain are finite, the kernel is said to be variance bounding. It has recently been noted that a delayed-acceptance kernel need not be variance bounding even when its parent is. We provide sufficient conditions for inheritance: for non-local algorithms, such as the independence sampler, the discrepancy between the log density of the approximation and that of the truth should be bounded; for local algorithms, two alternative sets of conditions are provided. As a by-product of our initial, general result we also supply sufficient conditions on any pair of proposals such that, for any shared target distribution, if a Metropolis–Hastings kernel using one of the proposals is variance bounding then so is the Metropolis–Hastings kernel using the other proposal.
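For readers unfamiliar with the construction, a minimal sketch of one delayed-acceptance step with a symmetric random-walk proposal is given below; the two log-densities are toy stand-ins, not anything from the paper.

```python
# Minimal sketch of a delayed-acceptance Metropolis-Hastings step: the proposal
# is screened first with a cheap log-density approximation, and the expensive
# true log-density is evaluated only if the first stage accepts. The two-stage
# acceptance probability is constructed so the chain still targets the true posterior.
import numpy as np

rng = np.random.default_rng(0)

def log_pi_cheap(x):   # cheap approximation (assumed available)
    return -0.5 * x**2

def log_pi_true(x):    # expensive "true" log posterior (toy stand-in)
    return -0.5 * x**2 - 0.1 * np.cos(3 * x)

def da_mh_step(x, step=1.0):
    y = x + step * rng.normal()                      # symmetric random-walk proposal
    # Stage 1: screen with the cheap approximation
    log_a1 = log_pi_cheap(y) - log_pi_cheap(x)
    if np.log(rng.uniform()) >= log_a1:
        return x                                     # early rejection, no expensive call
    # Stage 2: correct with the true density
    log_a2 = (log_pi_true(y) - log_pi_true(x)) - (log_pi_cheap(y) - log_pi_cheap(x))
    return y if np.log(rng.uniform()) < log_a2 else x

x = 0.0
samples = []
for _ in range(5000):
    x = da_mh_step(x)
    samples.append(x)
print("posterior mean estimate:", np.mean(samples))
```

The savings come from the early rejection in stage 1: the expensive density is evaluated only for proposals the cheap approximation already favours.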


2021 · Vol 3 (3) · pp. 525-541 · Author(s): Muhammad Rehman Zafar, Naimul Khan

Local Interpretable Model-Agnostic Explanations (LIME) is a popular technique used to increase the interpretability and explainability of black-box Machine Learning (ML) algorithms. LIME typically explains a single prediction of an ML model by learning a simpler interpretable model (e.g., a linear classifier) around that prediction: it generates simulated data around the instance by random perturbation and obtains feature importances by applying some form of feature selection. While LIME and similar local algorithms have gained popularity due to their simplicity, the random perturbation results in shifts in the data and instability in the generated explanations, where different explanations can be generated for the same prediction. These are critical issues that can prevent deployment of LIME in sensitive domains. We propose a deterministic version of LIME. Instead of random perturbation, we utilize Agglomerative Hierarchical Clustering (AHC) to group the training data and K-Nearest Neighbours (KNN) to select the cluster relevant to the new instance being explained. After finding the relevant cluster, a simple model (i.e., a linear model or decision tree) is trained over the selected cluster to generate the explanations. Experimental results on six public (three binary and three multi-class) and six synthetic datasets demonstrate the superiority of Deterministic Local Interpretable Model-Agnostic Explanations (DLIME), where we quantitatively evaluate the stability and faithfulness of DLIME compared to LIME.
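A simplified version of the pipeline described above can be assembled from scikit-learn building blocks, as sketched below. This is a stand-in illustration under our own choices (breast cancer data, eight clusters, a Ridge surrogate on predicted probabilities), not the authors' code.

```python
# Deterministic local explanation sketch: cluster the training data with
# agglomerative hierarchical clustering, use KNN to pick the cluster of the
# instance being explained, then fit a linear surrogate on that cluster and
# read its coefficients as the explanation.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True)
black_box = RandomForestClassifier(random_state=0).fit(X, y)   # model to explain

# 1) Hierarchical clustering of the training data (no random perturbation)
cluster_labels = AgglomerativeClustering(n_clusters=8).fit_predict(X)

# 2) KNN assigns the instance being explained to one of the clusters
knn = KNeighborsClassifier(n_neighbors=5).fit(X, cluster_labels)
instance = X[0:1]
cluster_id = knn.predict(instance)[0]
members = X[cluster_labels == cluster_id]

# 3) Fit an interpretable linear surrogate on the selected cluster
surrogate = Ridge().fit(members, black_box.predict_proba(members)[:, 1])
explanation = sorted(enumerate(surrogate.coef_), key=lambda c: -abs(c[1]))[:5]
print("top features (index, weight):", explanation)
```

Because the cluster assignment and the surrogate fit involve no random sampling, repeated runs on the same instance produce the same explanation, which is the stability property the abstract emphasizes.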


2021 · Vol 2021 (3) · Author(s): Claudio Bonanno, Claudio Bonati, Massimo D’Elia

Abstract We simulate 4d SU(N) pure-gauge theories at large N using a parallel tempering scheme that combines simulations with open and periodic boundary conditions, implementing the algorithm originally proposed by Martin Hasenbusch for 2d CP^(N-1) models. This allows us to dramatically suppress the topological freezing suffered by standard local algorithms, reducing the autocorrelation time of Q^2 by up to two orders of magnitude. Using this algorithm in combination with simulations at non-zero imaginary θ, we are able to refine state-of-the-art results for the large-N behavior of b_2, the quartic coefficient of the θ-dependence of the vacuum energy, reaching an accuracy comparable with that of the large-N limit of the topological susceptibility.
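The core of such a tempering scheme is the replica-swap step between simulations that differ only in their boundary conditions. The following schematic sketch uses a toy action and interpolating "boundary condition" parameter purely to show the Metropolis swap logic; it is not the actual lattice gauge code.

```python
# Schematic sketch of the replica-swap step in a boundary-condition tempering
# scheme: neighbouring replicas propose to exchange configurations and the
# swap is accepted with the usual Metropolis probability exp(-dS).
import numpy as np

rng = np.random.default_rng(0)

def action(config, bc):
    """Placeholder for the gauge action evaluated with boundary condition bc (toy)."""
    return bc * np.sum(config**2)

def swap_step(configs, bcs):
    """Attempt swaps between all neighbouring replicas."""
    for r in range(len(configs) - 1):
        # Action change if the two replicas exchange their configurations
        dS = (action(configs[r], bcs[r + 1]) + action(configs[r + 1], bcs[r])
              - action(configs[r], bcs[r]) - action(configs[r + 1], bcs[r + 1]))
        if np.log(rng.uniform()) < -dS:
            configs[r], configs[r + 1] = configs[r + 1], configs[r]
    return configs

# Replicas interpolate between open (0) and periodic (1) boundary conditions
bcs = np.linspace(0.0, 1.0, 5)
configs = [rng.normal(size=(4, 4)) for _ in bcs]
configs = swap_step(configs, bcs)
```

Configurations that decorrelate quickly in the open-boundary replica migrate toward the periodic-boundary replica through these swaps, which is how the scheme defeats topological freezing.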


2021 · Vol 143 (8) · Author(s): Brian Chell, Steven Hoffenson, Cory J. G. Philippe, Mark R. Blackburn

Abstract Multifidelity optimization combines the fast run times of low-fidelity models with the accuracy of high-fidelity models (HFMs) in order to conserve computing resources while still reaching optimal solutions. This work focuses on the multifidelity multidisciplinary optimization of an aircraft system model with finite element analysis and computational fluid dynamics simulations in the loop. A two-step filtering method is used in which a lower-fidelity model is optimized first and its solution is then used as the starting point for a higher-fidelity optimization routine. By starting the high-fidelity routine in a nearly optimal region of the design space, the computing resources required for optimization are expected to decrease when using local algorithms. Results show that, when surrogates are used for the lower-fidelity models, the multifidelity workflows save statistically significant amounts of time over optimizing the original HFM alone. However, the impact on solution quality varies depending on the model behavior and optimization algorithm.
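The two-step filtering idea is straightforward to sketch with toy stand-ins for the two fidelities (the real workflow wraps FEA/CFD simulations or surrogates of them; the functions below are illustrative assumptions only).

```python
# Two-step multifidelity filtering sketch: step 1 optimizes the cheap model over
# the full design space, step 2 starts a local high-fidelity optimization from
# that solution.
import numpy as np
from scipy.optimize import minimize, differential_evolution

def low_fidelity(x):
    # cheap surrogate / coarse model (toy)
    return (x[0] - 1.0)**2 + (x[1] + 0.5)**2

def high_fidelity(x):
    # expensive model: same landscape plus small corrections (toy)
    return low_fidelity(x) + 0.05 * np.sin(5 * x[0]) * np.cos(5 * x[1])

# Step 1: optimize the low-fidelity model over the full design space
bounds = [(-3, 3), (-3, 3)]
lf_result = differential_evolution(low_fidelity, bounds, seed=0)

# Step 2: local high-fidelity refinement starting from the low-fidelity optimum
hf_result = minimize(high_fidelity, lf_result.x, method="Nelder-Mead")
print("LF optimum:", lf_result.x, "-> HF optimum:", hf_result.x)
```

The time savings reported in the abstract come from concentrating the expensive high-fidelity evaluations in step 2, where a local algorithm needs comparatively few iterations from a near-optimal start.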

