Parametric Optimization of Integrated Circuit Assembly Process: An Evolutionary Computing-Based Approach

2021 ◽  
Author(s):  
Tatjana Sibalija

Strict demands for very tight tolerances and increasing complexity in semiconductor assembly impose a need for accurate parametric design that deals with multiple conflicting requirements. This paper presents the application of an advanced optimization methodology, based on evolutionary algorithms (EAs), in two studies addressing parametric optimization of the wire bonding process in semiconductor assembly. The methodology involves statistical pre-processing of the experimental data, followed by accurate process modeling with artificial neural networks (ANNs). Using the neural model, the process parameters are optimized by four metaheuristics: the two most commonly used algorithms, the genetic algorithm (GA) and simulated annealing (SA), and two newer algorithms that have rarely been utilized in semiconductor assembly optimization, teaching-learning-based optimization (TLBO) and the Jaya algorithm. The performance of the four algorithms in the two wire bonding studies is benchmarked with respect to the accuracy of the obtained solutions and the convergence rate. In addition, the influence of the hyper-parameters on algorithm effectiveness is rigorously discussed, and directions for algorithm selection and settings are suggested. The results of both studies clearly indicate the superiority of TLBO and Jaya over GA and SA, especially in terms of solution accuracy and built-in robustness. Furthermore, the proposed evolutionary computing-based optimization methodology significantly outperforms four frequently used methods from the literature, explicitly demonstrating its effectiveness and accuracy in locating the global optimum for delicate optimization problems.
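Of the four metaheuristics, TLBO is perhaps the least familiar; the following is a minimal Python sketch of its teacher and learner phases. The `process_model` objective is a hypothetical quadratic stand-in for the ANN process model (not the actual wire bonding data), and the bounds, population size, and iteration count are illustrative defaults:

```python
import numpy as np

rng = np.random.default_rng(0)

def process_model(x):
    # hypothetical stand-in for the ANN process model, optimum at 0.5
    return np.sum((x - 0.5) ** 2, axis=-1)

def tlbo(f, dim=4, pop=20, iters=100, lo=0.0, hi=1.0):
    X = rng.uniform(lo, hi, (pop, dim))
    fx = f(X)
    for _ in range(iters):
        # teacher phase: move the class toward the best solution
        best = X[np.argmin(fx)]
        mean = X.mean(axis=0)
        Tf = rng.integers(1, 3)                # teaching factor, 1 or 2
        Xnew = np.clip(X + rng.random((pop, dim)) * (best - Tf * mean), lo, hi)
        fnew = f(Xnew)
        improved = fnew < fx
        X[improved], fx[improved] = Xnew[improved], fnew[improved]
        # learner phase: each learner moves toward a better random peer
        j = rng.permutation(pop)
        step = np.where((fx < fx[j])[:, None], X - X[j], X[j] - X)
        Xnew = np.clip(X + rng.random((pop, dim)) * step, lo, hi)
        fnew = f(Xnew)
        improved = fnew < fx
        X[improved], fx[improved] = Xnew[improved], fnew[improved]
    i = np.argmin(fx)
    return X[i], fx[i]
```

One attraction noted in the paper holds visibly here: apart from population size and iteration budget, TLBO has no algorithm-specific hyper-parameters to tune.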

2021 ◽  
Vol 6 (3) ◽  
pp. 917-933 ◽  
Author(s):  
Kenneth Loenbaek ◽  
Christian Bak ◽  
Michael McWilliam

Abstract. A novel wind turbine rotor optimization methodology is presented. Using an assumption of radial independence, it is possible to obtain an optimal relationship between the global power (CP) and load coefficients (CT, CFM) through the use of Karush–Kuhn–Tucker (KKT) multipliers, leaving an optimization problem that can be solved at each radial station independently. This allows solving load-constrained power and annual energy production (AEP) optimization problems in which the optimization variables are only the KKT multipliers (scalars), one for each constraint. In this paper, two constraints, namely the thrust and the blade root flap moment, are used, leading to two optimization variables. Applying the optimization methodology to maximize power (P) or AEP for a given thrust and blade root flap moment, but without a cost function, leads to the same overall result: the global optimum is unbounded in terms of rotor radius (R̃), lying at R̃→∞. The increase in power and AEP is in this case ΔP=50 % and ΔAEP=70 %, with the Betz optimum rotor as the baseline. With a simple cost function and the same setup of the problem, a power-per-cost (PpC) optimization resulted in a power-per-cost increase of ΔPpC=4.2 % with a radius increase of ΔR=7.9 % as well as a power increase of ΔP=9.1 %. This was obtained while keeping the same flap moment and reaching a lower thrust of ΔT=-3.8 %. The equivalent AEP-per-cost (AEPpC) optimization leads to an increased cost efficiency of ΔAEPpC=2.9 % with a radius increase of ΔR=17 % and an AEP increase of ΔAEP=13 %, again with the same, maximum flap moment, while the maximum thrust is 9.0 % lower than the baseline.
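The structure behind this result can be sketched schematically; the notation below (multipliers λ_T, λ_FM) is ours and need not match the paper's exactly. The constrained problem is

```latex
\max_{a(r)}\; C_P
\quad\text{s.t.}\quad
C_T \le C_{T,\max},\qquad
C_{FM} \le C_{FM,\max},
```

and since each global coefficient is a radial integral of a local contribution, stationarity of the Lagrangian under the radial-independence assumption reduces to a pointwise condition at every station,

```latex
\frac{\partial}{\partial a}\frac{dC_P}{dr}
= \lambda_T\,\frac{\partial}{\partial a}\frac{dC_T}{dr}
+ \lambda_{FM}\,\frac{\partial}{\partial a}\frac{dC_{FM}}{dr}.
```

Once the two scalars λ_T and λ_FM are fixed, every radial station can be solved independently, which is why the outer optimization needs only the multipliers as variables.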




Author(s):  
Patrick Mehlitz ◽  
Leonid I. Minchenko

Abstract. The presence of Lipschitzian properties for solution mappings associated with nonlinear parametric optimization problems is desirable in the context of, e.g., stability analysis or bilevel optimization. An example of such a Lipschitzian property for set-valued mappings, whose graph is the solution set of a system of nonlinear inequalities and equations, is R-regularity. Based on the so-called relaxed constant positive linear dependence constraint qualification, we provide a criterion ensuring the presence of the R-regularity property. In this regard, our analysis generalizes earlier results of that type which exploited the stronger Mangasarian–Fromovitz or constant rank constraint qualification. Afterwards, we apply our findings in order to derive new sufficient conditions which guarantee the presence of R-regularity for solution mappings in parametric optimization. Finally, our results are used to derive an existence criterion for solutions in pessimistic bilevel optimization and a sufficient condition for the presence of the so-called partial calmness property in optimistic bilevel optimization.
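For readers unfamiliar with the property, R-regularity is commonly stated as an error bound for the feasible-set mapping; the following sketch uses our notation, and the precise neighborhoods and index sets are those of the paper:

```latex
\Gamma(p) := \{x \in \mathbb{R}^n \;:\; g_i(x,p) \le 0\ (i \in I),\ h_j(x,p) = 0\ (j \in J)\}
```

is called R-regular at $(\bar x,\bar p) \in \operatorname{gph}\Gamma$ if there exist $C > 0$ and neighborhoods $U$ of $\bar x$ and $V$ of $\bar p$ such that

```latex
\operatorname{dist}(x,\Gamma(p))
\le C \max\bigl\{0,\ g_i(x,p)\ (i \in I),\ |h_j(x,p)|\ (j \in J)\bigr\}
\quad\text{for all } x \in U,\ p \in V.
```

Informally, the distance of a point to the (perturbed) solution set is controlled by its constraint violation, uniformly in the parameter, which is exactly the kind of Lipschitzian stability needed in the bilevel applications mentioned above.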


2020 ◽  
Author(s):  
Alberto Bemporad ◽  
Dario Piga

Abstract. This paper proposes a method for solving optimization problems in which the decision-maker cannot evaluate the objective function, but rather can only express a preference such as "this is better than that" between two candidate decision vectors. The algorithm described in this paper aims at reaching the global optimizer by iteratively proposing to the decision-maker a new comparison to make, based on actively learning a surrogate of the latent (unknown and perhaps unquantifiable) objective function from past sampled decision vectors and pairwise preferences. A radial-basis-function surrogate is fit via linear or quadratic programming, satisfying, if possible, the preferences expressed by the decision-maker on existing samples. The surrogate is used to propose a new sample of the decision vector for comparison with the current best candidate based on one of two criteria: minimize a combination of the surrogate and an inverse-distance weighting function to balance exploitation of the surrogate against exploration of the decision space, or maximize a function related to the probability that the new candidate will be preferred. Compared to active preference learning based on Bayesian optimization, we show that our approach is competitive: within the same number of comparisons, it usually approaches the global optimum more closely and is computationally lighter. Applications of the proposed algorithm to a set of benchmark global optimization problems, to multi-objective optimization, and to optimal tuning of a cost-sensitive neural network classifier for object recognition from images are described in the paper. MATLAB and Python implementations of the algorithms described in the paper are available at http://cse.lab.imtlucca.it/~bemporad/glis.
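To make the surrogate-fitting step concrete, here is a minimal sketch of fitting an RBF surrogate to pairwise preferences by linear programming. The inverse-quadratic RBF, the margin `sigma`, and the regularization weight are illustrative choices, not necessarily those of the paper:

```python
import numpy as np
from scipy.optimize import linprog

def rbf_matrix(X, centers, eps=1.0):
    # inverse-quadratic RBF; one column per center
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return 1.0 / (1.0 + eps * d2)

def fit_preference_surrogate(X, prefs, sigma=1.0, reg=1e-3):
    """Fit RBF weights beta so that f(x_i) <= f(x_j) - sigma + slack_k
    for every preference pair (i, j) meaning 'x_i preferred to x_j'.
    Solved as an LP over (beta+, beta-, slack)."""
    n, m = len(X), len(prefs)
    Phi = rbf_matrix(X, X)
    # objective: minimize sum of slacks plus an L1 penalty on beta
    c = np.concatenate([reg * np.ones(2 * n), np.ones(m)])
    A, b = [], []
    for k, (i, j) in enumerate(prefs):
        row = np.zeros(2 * n + m)
        diff = Phi[i] - Phi[j]            # f(x_i) - f(x_j) in terms of beta
        row[:n], row[n:2 * n] = diff, -diff
        row[2 * n + k] = -1.0             # minus the pair's slack variable
        A.append(row)
        b.append(-sigma)
    res = linprog(c, A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(0, None)] * (2 * n + m))
    beta = res.x[:n] - res.x[n:2 * n]
    return lambda Z: rbf_matrix(np.atleast_2d(np.asarray(Z, float)), X) @ beta
```

In the paper's loop, this surrogate would then be combined with an exploration term (or a preference-probability criterion) to pick the next candidate for comparison; only the fitting step is shown here.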


Author(s):  
S.L. Khoury ◽  
D.J. Burkhard ◽  
D.P. Galloway ◽  
T.A. Scharr

2021 ◽  
Vol 12 (4) ◽  
pp. 98-116
Author(s):  
Noureddine Boukhari ◽  
Fatima Debbat ◽  
Nicolas Monmarché ◽  
Mohamed Slimane

Evolution strategies (ES) are a family of robust stochastic methods for global optimization and have proved more capable of avoiding local optima than many other optimization methods. Many researchers have investigated different versions of the original evolution strategy, with good results on a variety of optimization problems. However, the convergence rate of the algorithm to the global optimum remains asymptotic. To accelerate convergence, a hybrid approach is proposed that combines ES with the nonlinear simplex method (Nelder-Mead) and an adaptive scheme controlling when the local search is applied, and the authors demonstrate that this combination yields significantly better convergence. The proposed method has been tested on 15 complex benchmark functions, applied to the bi-objective portfolio optimization problem, and compared with other state-of-the-art techniques. Experimental results show that the hybridization improves performance in terms of both solution quality and convergence speed.
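The hybrid idea can be sketched compactly: a simple (1+λ)-ES outer loop with periodic Nelder-Mead refinement of the incumbent. The step-size rule, the refinement schedule (`ls_every`), and the budgets below are illustrative stand-ins for the paper's adaptive scheme:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def rosenbrock(x):
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2))

def hybrid_es(f, x0, sigma=0.5, lam=10, gens=60, ls_every=10):
    x = np.asarray(x0, float)
    fx = f(x)
    for g in range(1, gens + 1):
        # (1+lambda)-ES: sample offspring around the incumbent
        kids = x + sigma * rng.standard_normal((lam, x.size))
        fk = [f(k) for k in kids]
        i = int(np.argmin(fk))
        if fk[i] < fx:
            x, fx = kids[i], fk[i]
            sigma *= 1.1          # simplified 1/5th-success step adaptation
        else:
            sigma *= 0.9
        if g % ls_every == 0:     # periodic Nelder-Mead local refinement
            res = minimize(f, x, method="Nelder-Mead",
                           options={"maxfev": 200, "xatol": 1e-8, "fatol": 1e-8})
            if res.fun < fx:
                x, fx = res.x, res.fun
    return x, fx
```

The ES loop supplies global exploration; the simplex method supplies the fast final convergence the plain ES lacks, which is the division of labor the abstract describes.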


2017 ◽  
Vol 2639 (1) ◽  
pp. 110-118 ◽  
Author(s):  
André V. Moreira ◽  
Tien F. Fwa ◽  
Joel R. M. Oliveira ◽  
Lino Costa

Pavement maintenance and rehabilitation programming requires the consideration of conflicting objectives to optimize life-cycle costs. While there are several approaches to solving multiobjective problems in pavement management systems, when user costs or environmental impacts are considered, the optimal solutions are often impractical for road agencies to accept, given the dominating share of user costs in total life-cycle costs. This paper presents a two-stage optimization methodology in which maximization of pavement quality and minimization of agency costs are the objectives at the pavement section level, while at the network level the objectives are to minimize agency and user costs. The main goal of this approach is to provide decision-makers with a range of optimal solutions from which the agency can select a practically implementable one. A sensitivity analysis and trade-off graphics illustrate the importance of balancing all the objectives to obtain solutions that are reasonable for highway agencies. The multiobjective optimization problems at both levels are solved using genetic algorithms. The results of a case study indicate the applicability of the methodology.
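The "range of optimal solutions" handed to decision-makers is a Pareto (non-dominated) set; a minimal generic sketch of the dominance filter that a multiobjective GA maintains (our illustration, not the paper's implementation):

```python
def pareto_front(points):
    """Return the non-dominated subset of a list of objective tuples,
    minimizing every objective (assumes distinct points).
    q dominates p when q <= p in all objectives and q != p."""
    front = []
    for p in points:
        dominated = any(
            q != p and all(q[k] <= p[k] for k in range(len(p)))
            for q in points
        )
        if not dominated:
            front.append(p)
    return front
```

For example, with (agency cost, user cost) pairs, a solution that is worse on both axes than some other solution is filtered out, and the agency chooses among the survivors according to its own priorities.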


2021 ◽  
Author(s):  
Zuanjia Xie ◽  
Chunliang Zhang ◽  
Haibin Ouyang ◽  
Steven Li ◽  
Liqun Gao

Abstract. The Jaya algorithm is an advanced optimization algorithm that has been applied to many real-world optimization problems and performs well in several optimization fields. However, its exploration capability is limited. To enhance the exploration capability of the Jaya algorithm, a self-adaptive commensal learning-based Jaya algorithm with multi-populations (Jaya-SCLMP) is presented in this paper. In Jaya-SCLMP, a commensal learning strategy is used to increase the probability of finding the global optimum, in which each individual's historical best and worst information is used to explore new solution areas. Moreover, a multi-population strategy based on a Gaussian distribution scheme and a learning dictionary is utilized to further enhance exploration: every sub-population employs three Gaussian distributions at each generation, and roulette wheel selection chooses among the schemes based on the learning dictionary. The performance of Jaya-SCLMP is evaluated on 28 CEC 2013 unconstrained benchmark problems and three reliability problems, i.e. the complex (bridge) system, the series system, and the series-parallel system. Compared with several Jaya variants and other state-of-the-art algorithms, the experimental results reveal that Jaya-SCLMP is effective.
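The baseline Jaya update that Jaya-SCLMP builds on is compact enough to sketch: each solution moves toward the current best and away from the current worst, with no algorithm-specific hyper-parameters. The sphere objective and the bounds below are placeholders for the benchmark problems:

```python
import numpy as np

rng = np.random.default_rng(2)

def sphere(x):
    # placeholder objective; optimum 0 at the origin
    return float(np.sum(x ** 2))

def jaya(f, dim=5, pop=20, iters=200, lo=-5.0, hi=5.0):
    X = rng.uniform(lo, hi, (pop, dim))
    fx = np.apply_along_axis(f, 1, X)
    for _ in range(iters):
        best = X[np.argmin(fx)]
        worst = X[np.argmax(fx)]
        r1, r2 = rng.random((2, pop, dim))
        # canonical Jaya move: toward the best, away from the worst
        Xnew = np.clip(X + r1 * (best - np.abs(X)) - r2 * (worst - np.abs(X)),
                       lo, hi)
        fnew = np.apply_along_axis(f, 1, Xnew)
        improved = fnew < fx
        X[improved], fx[improved] = Xnew[improved], fnew[improved]
    i = np.argmin(fx)
    return X[i], fx[i]
```

The abstract's criticism is visible here: every individual is pulled by the same best/worst pair, so the population can concentrate too quickly, which is what the commensal learning and multi-population strategies are designed to counteract.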


Author(s):  
Shailendra Aote ◽  
Mukesh M. Raghuwanshi

Various methods have been proposed in different domains to solve optimization problems, and evolutionary computing (EC) is one family of such methods. The most widely used EC techniques are Particle Swarm Optimization (PSO), the Genetic Algorithm (GA), and Differential Evolution (DE). Although these techniques differ on the surface, their inner working structure is the same: different names and formulae are attached to the individual steps, but all of them follow the same overall loop. Here we identify the similarities among these techniques and describe the working structure at each step. Every step is illustrated with a worked example and MATLAB code for better understanding. We begin with an introduction to optimization and to solving optimization problems with PSO, GA, and DE, and conclude with a brief comparison of the three.
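The claimed common structure can be made explicit as a single loop of initialize, vary, evaluate, select, where only the variation operator distinguishes GA, DE, and PSO. The chapter's examples are in MATLAB; the sketch below is our Python rendering of the same idea, with DE/rand/1 as the plugged-in operator (for brevity, the three donor indices are not forced distinct from the target index):

```python
import numpy as np

rng = np.random.default_rng(3)

def evolve(variation, f, dim=2, pop=30, gens=100, lo=-5.0, hi=5.0):
    """Generic EC loop shared by GA, DE and PSO-style methods:
    initialize -> vary -> evaluate -> select, repeated."""
    X = rng.uniform(lo, hi, (pop, dim))
    fx = np.array([f(x) for x in X])
    for _ in range(gens):
        Xnew = np.clip(variation(X, fx), lo, hi)
        fnew = np.array([f(x) for x in Xnew])
        keep = fnew < fx                    # greedy one-to-one selection
        X[keep], fx[keep] = Xnew[keep], fnew[keep]
    return X[np.argmin(fx)], fx.min()

def de_rand_1(X, fx, F=0.6, CR=0.9):
    """DE/rand/1 variation: mutation plus binomial crossover."""
    pop, dim = X.shape
    out = X.copy()
    for i in range(pop):
        a, b, c = X[rng.choice(pop, 3, replace=False)]
        mutant = a + F * (b - c)
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True     # at least one mutated coordinate
        out[i, cross] = mutant[cross]
    return out
```

Swapping `de_rand_1` for a crossover-and-mutation operator gives a GA, and for a velocity update gives PSO; the surrounding loop is unchanged, which is exactly the chapter's point.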


2016 ◽  
pp. 450-475
Author(s):  
Dipti Singh ◽  
Kusum Deep

Due to their wide applicability and easy implementation, genetic algorithms (GAs) are often preferred over other techniques for solving optimization problems. When a local search (LS) is included in a genetic algorithm, it is known as a memetic algorithm. In this chapter, a new variant of a single-meme memetic algorithm is proposed to improve the efficiency of GA. Although GAs are efficient at finding the global optimum of nonlinear optimization problems, they usually converge slowly and sometimes suffer premature convergence. LS algorithms, on the other hand, are fast but poor global searchers. To exploit the good qualities of both techniques, they are combined so that the maximum benefit of each approach is reaped: the population of individuals evolves using the GA, and LS is then applied to obtain the optimal solution. To validate these claims, the method is tested on five benchmark problems of dimension 10, 30, and 50, and a comparison between GA and MA is made.
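The combination described above can be sketched as a GA whose best individual is polished by a cheap local search. All operators below (tournament selection, blend crossover, a hill-climbing meme) are illustrative choices, not necessarily those of the chapter:

```python
import numpy as np

rng = np.random.default_rng(4)

def hill_climb(f, x, step=0.1, iters=200):
    """Cheap local search (the 'meme'): accept improving random steps,
    shrinking the step size on failure."""
    fx = f(x)
    for _ in range(iters):
        cand = x + step * rng.standard_normal(x.size)
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
        else:
            step *= 0.99
    return x, fx

def memetic_ga(f, dim=2, pop=30, gens=50, lo=-5.0, hi=5.0):
    X = rng.uniform(lo, hi, (pop, dim))
    fx = np.array([f(x) for x in X])
    for _ in range(gens):
        # tournament selection, blend crossover, Gaussian mutation
        i, j = rng.integers(pop, size=(2, pop))
        parents = np.where((fx[i] < fx[j])[:, None], X[i], X[j])
        mates = parents[rng.permutation(pop)]
        w = rng.random((pop, 1))
        kids = np.clip(w * parents + (1 - w) * mates
                       + 0.1 * rng.standard_normal((pop, dim)), lo, hi)
        fk = np.array([f(x) for x in kids])
        keep = fk < fx
        X[keep], fx[keep] = kids[keep], fk[keep]
    # the memetic step: local search polishes the GA's best individual
    return hill_climb(f, X[np.argmin(fx)])
```

Applying the local search only to the best individual keeps the extra cost small while still delivering the fast final convergence that the pure GA lacks.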

