Development of Discrete Adjoint Approach Based on the Lattice Boltzmann Method

2014 ◽  
Vol 6 ◽  
pp. 230854 ◽  
Author(s):  
Mohamad Hamed Hekmat ◽  
Masoud Mirzaei

The purpose of this research is to present a general, low-implementation-cost procedure for developing the discrete adjoint approach for solving optimization problems based on the LB method. Initially, the macroscopic and microscopic discrete adjoint equations and the cost function gradient vector are derived mathematically, in detail, from the discrete LB equation. For an elementary case, analytical evaluations of the macroscopic and microscopic adjoint variables and of the cost function gradients are also presented. Examination of the derivation shows that the simplicity of the Boltzmann equation, as an alternative to the Navier-Stokes (NS) equations, facilitates the extraction of the discrete adjoint equation; implementing the discrete adjoint equation based on the LB method therefore requires less effort than doing so for the NS equations. Finally, the approach is validated on a sample test case, and the results obtained from the macroscopic and microscopic discrete adjoint equations are compared in an inverse optimization problem. The results show that the convergence rate of the optimization algorithm is identical with both equations and that the evaluated gradients agree very well with each other.
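The economy of the discrete adjoint approach can be illustrated on a toy problem. The sketch below (Python; the recurrence, cost function, and all parameter values are invented for illustration and are not from the paper) applies a reverse adjoint sweep to a simple time-stepping scheme and checks the resulting gradient against central differences, mirroring the kind of validation the abstract describes:

```python
# Toy discrete adjoint: gradient of J(theta) = u_N**2 for the scheme
# u_{n+1} = a*u_n + theta, via one forward and one reverse sweep.

def forward(theta, a=0.5, u0=1.0, n_steps=20):
    """Run the forward scheme and return the full state history."""
    u = [u0]
    for _ in range(n_steps):
        u.append(a * u[-1] + theta)
    return u

def cost(theta):
    """Cost function J(theta) = u_N**2."""
    return forward(theta)[-1] ** 2

def adjoint_gradient(theta, a=0.5):
    """Reverse (adjoint) sweep: lam_n = a * lam_{n+1}, seeded by dJ/du_N."""
    u = forward(theta, a=a)
    lam = 2.0 * u[-1]            # dJ/du_N
    grad = 0.0
    for _ in range(len(u) - 1):
        grad += lam              # df/dtheta = 1 at every step
        lam *= a                 # df/du = a at every step
    return grad

# Validate against a central finite difference.
eps = 1e-6
fd = (cost(1.0 + eps) - cost(1.0 - eps)) / (2 * eps)
print(adjoint_gradient(1.0), fd)
```

The reverse sweep costs one extra pass regardless of the number of design variables, which is the economy any discrete adjoint formulation, including the LB-based one, exploits.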

2014 ◽  
Vol 6 (5) ◽  
pp. 570-589 ◽  
Author(s):  
Mohamad Hamed Hekmat ◽  
Masoud Mirzaei

Abstract The significance of flow optimization utilizing the lattice Boltzmann (LB) method becomes obvious given its advantages as a novel flow-field solution method compared with other conventional computational fluid dynamics techniques. These unique characteristics of the LB method form the main idea behind its application to optimization problems. In this research, for the first time, both continuous and discrete adjoint equations were extracted based on the LB method using a general procedure with low implementation cost. The proposed approach can be applied in the same way to any optimization problem with a corresponding cost function and vector of design variables. Moreover, the approach is not limited to flow fields and can be employed for steady as well as unsteady flows. Initially, the continuous and discrete adjoint LB equations and the cost function gradient vector were derived mathematically in detail using the continuous and discrete LB equations in space and time, respectively. Meanwhile, new adjoint concepts in lattice space were introduced. Finally, the analytical evaluation of the adjoint distribution functions and the cost function gradients was carried out.


2020 ◽  
Vol 30 (6) ◽  
pp. 1645-1663
Author(s):  
Ömer Deniz Akyildiz ◽  
Dan Crisan ◽  
Joaquín Míguez

Abstract We introduce and analyze a parallel sequential Monte Carlo methodology for the numerical solution of optimization problems that involve the minimization of a cost function that consists of the sum of many individual components. The proposed scheme is a stochastic zeroth-order optimization algorithm which demands only the capability to evaluate small subsets of components of the cost function. It can be depicted as a bank of samplers that generate particle approximations of several sequences of probability measures. These measures are constructed in such a way that they have associated probability density functions whose global maxima coincide with the global minima of the original cost function. The algorithm selects the best performing sampler and uses it to approximate a global minimum of the cost function. We prove analytically that the resulting estimator converges to a global minimum of the cost function almost surely and provide explicit convergence rates in terms of the number of generated Monte Carlo samples and the dimension of the search space. We show, by way of numerical examples, that the algorithm can tackle cost functions with multiple minima or with broad “flat” regions which are hard to minimize using gradient-based techniques.
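The core construction, turning minima of a cost function into maxima of probability densities that particles can track, can be sketched compactly. The toy below (Python; the cost function, parameter values, and single-sampler simplification are all invented for illustration, and the paper's parallel bank of samplers and selection step are omitted) weights and resamples particles under densities proportional to exp(-beta*cost):

```python
import math
import random

def cost(x):
    """Toy multimodal cost (invented for illustration)."""
    return (x - 2.0) ** 2 + 0.5 * math.sin(5.0 * x) + 0.5

def smc_minimize(n_particles=200, n_iters=50, beta=3.0, step=0.5, seed=1):
    """Particle approximation of densities proportional to exp(-beta*cost),
    whose global maxima sit at the global minima of the cost."""
    rng = random.Random(seed)
    particles = [rng.uniform(-5.0, 5.0) for _ in range(n_particles)]
    for _ in range(n_iters):
        # propagate: jitter the particles
        particles = [x + rng.gauss(0.0, step) for x in particles]
        # weight by the induced density and resample toward low-cost regions
        weights = [math.exp(-beta * cost(x)) for x in particles]
        particles = rng.choices(particles, weights=weights, k=n_particles)
        step *= 0.95  # gradually narrow the proposal
    return min(particles, key=cost)

x_best = smc_minimize()
```

Note that only cost evaluations are used, no gradients, which is why such schemes can cope with the flat or multimodal landscapes mentioned in the abstract.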


2005 ◽  
Vol 15 (09) ◽  
pp. 1349-1369 ◽  
Author(s):  
PIERLUIGI CONTUCCI ◽  
CRISTIAN GIARDINÀ ◽  
CLAUDIO GIBERTI ◽  
CECILIA VERNIA

We consider optimization problems for complex systems in which the cost function has a multivalleyed landscape. We introduce a new class of dynamical algorithms which, using a suitable annealing procedure coupled with a balanced greedy-reluctant strategy, drive the system towards the deepest minimum of the cost function. Results are presented for the Sherrington–Kirkpatrick model of spin glasses.
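A minimal sketch of the greedy-reluctant idea, assuming (this is my illustration, not the paper's algorithm) that "greedy" means taking the steepest improving single-spin flip and "reluctant" the shallowest, alternated on an SK-like instance with the annealing component omitted:

```python
import random

def sk_energy(spins, couplings):
    """Energy of a Sherrington-Kirkpatrick-like spin configuration."""
    n = len(spins)
    return -sum(couplings[i][j] * spins[i] * spins[j]
                for i in range(n) for j in range(i + 1, n))

def greedy_reluctant_descent(n=20, max_moves=200, seed=0):
    """Alternate greedy (steepest) and reluctant (shallowest) improving
    single-spin flips until a local minimum is reached."""
    rng = random.Random(seed)
    couplings = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            couplings[i][j] = rng.gauss(0.0, 1.0) / n ** 0.5
    spins = [rng.choice([-1, 1]) for _ in range(n)]
    energy = sk_energy(spins, couplings)
    for move in range(max_moves):
        deltas = []
        for k in range(n):
            spins[k] = -spins[k]
            deltas.append((sk_energy(spins, couplings) - energy, k))
            spins[k] = -spins[k]
        improving = [d for d in deltas if d[0] < 0.0]
        if not improving:
            break                      # local minimum reached
        # even moves: greedy (largest drop); odd moves: reluctant (smallest)
        delta, k = min(improving) if move % 2 == 0 else max(improving)
        spins[k] = -spins[k]
        energy += delta
    return energy

final_energy = greedy_reluctant_descent()
```

The reluctant moves deliberately descend slowly, which, combined with annealing in the full algorithm, is what helps the dynamics avoid committing too early to a shallow valley.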


Author(s):  
Tad Gonsalves ◽  
Shinichiro Baba ◽  
Kiyoshi Itoh

The “survival of the fittest” strategy of the Genetic Algorithm has been found to be robust and is widely used in solving combinatorial optimization problems like job scheduling, circuit design, antenna array design, etc. In this paper, we discuss the application of the Genetic Algorithm to the operational optimization of collaborative systems, illustrating our strategy with a practical example of a clinic system. Collaborative systems (also known as co-operative systems) are modeled as server-client systems in which a group of collaborators come together to provide service to end-users. The cost function to be optimized is the sum of the service cost and the waiting cost. Service cost is due to hiring professionals and/or renting equipment that provide service to customers in the collaborative system. Waiting cost is incurred when customers who are made to wait in long queues balk, renege, or do not return to the system for service a second time. The number of servers operating at each of the collaborative places and the average service time of each server are the decision variables, while server utilization is a constraint. The Genetic Algorithm tailored to collaborative systems finds the minimum value of the cost function under these operational constraints.
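The service-plus-waiting trade-off and the GA loop can be sketched as follows. Every number in the cost model below (server cost, waiting penalty, arrival and service rates) is invented for illustration, and the waiting term is only a crude queueing proxy, not the clinic model from the paper:

```python
import random

# All cost-model numbers below are invented for illustration.
SERVICE_COST = 50.0     # assumed cost per server per shift
WAIT_PENALTY = 8.0      # assumed cost per unit of mean queue delay
ARRIVALS = [10.0, 6.0]  # assumed arrival rates at two collaborative places
RATE = 4.0              # assumed service rate per server

def cost(servers):
    """Sum of service cost and a crude waiting-cost proxy per place."""
    total = 0.0
    for s, lam in zip(servers, ARRIVALS):
        if s * RATE <= lam:
            return 1e9            # unstable queue: forbid via huge penalty
        total += SERVICE_COST * s + WAIT_PENALTY * lam / (s * RATE - lam)
    return total

def genetic_minimize(pop_size=30, generations=60, seed=0):
    """Plain GA: truncation selection, uniform crossover, +/-1 mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(1, 10) for _ in ARRIVALS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[: pop_size // 2]    # survival of the fittest
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = [rng.choice(genes) for genes in zip(a, b)]
            if rng.random() < 0.3:
                i = rng.randrange(len(child))
                child[i] = max(1, child[i] + rng.choice([-1, 1]))
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)

best_servers = genetic_minimize()
```

The huge penalty on unstable queues plays the role of the server-utilization constraint: infeasible staffing levels are never selected as survivors.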


2021 ◽  
Vol 11 (21) ◽  
pp. 9828
Author(s):  
Vincent A. Cicirello

The runtime behavior of Simulated Annealing (SA), similar to other metaheuristics, is controlled by hyperparameters. For SA, hyperparameters affect how “temperature” varies over time, and “temperature” in turn affects SA’s decisions on whether or not to transition to neighboring states. It is typically necessary to tune the hyperparameters ahead of time. However, there are adaptive annealing schedules that use search feedback to evolve the “temperature” during the search. A classic and generally effective adaptive annealing schedule is the Modified Lam. Although effective, the Modified Lam can be sensitive to the scale of the cost function, and is sometimes slow to converge to its target behavior. In this paper, we present a novel variation of the Modified Lam that we call Self-Tuning Lam, which uses early search feedback to auto-adjust its self-adaptive behavior. Using a variety of discrete and continuous optimization problems, we demonstrate the ability of the Self-Tuning Lam to nearly instantaneously converge to its target behavior independent of the scale of the cost function, as well as its run length. Our implementation is integrated into Chips-n-Salsa, an open-source Java library for parallel and self-adaptive local search.
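The idea of evolving "temperature" from search feedback rather than a fixed schedule can be sketched as below. The piecewise target-rate curve is only inspired by the shape of the Modified Lam schedule (high acceptance early, 44% plateau, decay to zero); its constants and the self-tuning refinements of the paper are not reproduced, and the test objective is invented:

```python
import math
import random

def target_rate(frac):
    """Target acceptance rate versus elapsed fraction of the run.
    Piecewise shape inspired by the Modified Lam schedule (simplified;
    the published schedule's constants differ)."""
    if frac < 0.15:
        return 0.44 + 0.56 * 560.0 ** (-frac / 0.15)
    if frac < 0.65:
        return 0.44
    return 0.44 * 440.0 ** (-(frac - 0.65) / 0.35)

def adaptive_sa(cost, x0, steps=5000, seed=0):
    """SA whose temperature is steered so the observed acceptance rate
    tracks target_rate, instead of following a fixed cooling schedule."""
    rng = random.Random(seed)
    x, fx, temp = x0, cost(x0), 1.0
    best, fbest = x, fx
    accept_rate = 0.5
    for i in range(steps):
        y = x + rng.gauss(0.0, 0.2)
        fy = cost(y)
        accepted = fy <= fx or rng.random() < math.exp((fx - fy) / temp)
        if accepted:
            x, fx = y, fy
            if fy < fbest:
                best, fbest = y, fy
        # exponential moving average of the acceptance rate
        accept_rate = 0.998 * accept_rate + 0.002 * float(accepted)
        # raise or lower the temperature to chase the target rate
        temp *= 0.999 if accept_rate > target_rate(i / steps) else 1.001
    return best, fbest

objective = lambda x: (x * x - 4.0) ** 2 + 0.3 * math.sin(10.0 * x)
best_x, best_f = adaptive_sa(objective, 5.0)
```

Because the temperature chases an acceptance *rate*, the behavior is insensitive to the absolute scale of the cost function, which is exactly the sensitivity the Self-Tuning Lam is designed to remove.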


2015 ◽  
Vol 137 (8) ◽  
Author(s):  
Benjamin Walther ◽  
Siva Nadarajah

This paper proposes a framework for fully automatic gradient-based constrained aerodynamic shape optimization in a multirow turbomachinery environment. The concept of adjoint-based gradient calculation is discussed and the development of the discrete adjoint equations for a turbomachinery Reynolds-averaged Navier–Stokes (RANS) solver, particularly the derivation of flow-consistent adjoint boundary conditions as well as the implementation of a discrete adjoint mixing-plane formulation, are described in detail. A parallelized, automatic grid perturbation scheme utilizing radial basis functions (RBFs), which is accurate and robust as well as able to handle highly resolved complex multiblock turbomachinery grid configurations, is developed and employed to calculate the gradient from the adjoint solution. The adjoint solver is validated by comparing its sensitivities with finite-difference gradients obtained from the flow solver. A sequential quadratic programming (SQP) algorithm is then utilized to determine an improved blade shape based on the gradient information from the objective functional and the constraints. The developed optimization method is used to redesign a single-stage transonic flow compressor in both inviscid and viscous flow. The design objective is to maximize the isentropic efficiency while constraining the mass flow rate and the total pressure ratio.
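The validation step mentioned in the abstract, comparing adjoint sensitivities with finite-difference gradients from the flow solver, reduces to a generic pattern. The sketch below uses an invented two-variable functional in place of the RANS objective, with a hand-derived gradient playing the role of the adjoint gradient:

```python
def central_diff_grad(f, x, eps=1e-6):
    """Central-difference gradient of a scalar function at point x."""
    grad = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += eps
        xm[i] -= eps
        grad.append((f(xp) - f(xm)) / (2.0 * eps))
    return grad

# A toy functional standing in for the flow solver's objective.
def J(x):
    return (x[0] - 1.0) ** 2 + 3.0 * x[0] * x[1] + x[1] ** 2

def grad_J(x):
    """Hand-derived gradient, playing the role of the adjoint gradient."""
    return [2.0 * (x[0] - 1.0) + 3.0 * x[1], 3.0 * x[0] + 2.0 * x[1]]

point = [0.7, -1.2]
fd = central_diff_grad(J, point)
analytic = grad_J(point)
```

For a real solver each finite-difference component needs a full flow solution, so this check is run on a few design variables only; the adjoint then supplies the complete gradient at the cost of roughly one extra solve.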


2021 ◽  
Vol 11 (2) ◽  
pp. 850
Author(s):  
Dokkyun Yi ◽  
Sangmin Ji ◽  
Jieun Park

Artificial intelligence (AI) is achieved by optimizing a cost function constructed from learning data. Changing the parameters of the cost function is the AI learning process (or simply AI learning). If learning is performed well, the value of the cost function reaches the global minimum. For learning to be complete, the parameters should stop changing once the cost function attains its global minimum. One useful optimization method is the momentum method; however, the momentum method has difficulty stopping the parameter updates when the cost function reaches the global minimum (the non-stop problem). The proposed method is based on the momentum method. To solve the non-stop problem, we incorporate the value of the cost function into the update rule: as learning proceeds, this mechanism reduces the size of the parameter change according to the value of the cost function. We verify the method through a proof of convergence and through numerical comparisons with existing methods to ensure that learning works well.
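One way to realize "damping the update by the cost value" is sketched below. This is only my reading of the idea on a toy quadratic; the paper's exact update rule, constants, and convergence conditions may differ:

```python
def momentum_with_cost_damping(grad, cost, x0, lr=0.1, beta=0.9, steps=200):
    """Heavy-ball momentum whose step is scaled by min(1, cost(x)), so the
    parameter stops moving as the cost approaches its (zero) minimum.
    A sketch of the idea only; the paper's update rule may differ."""
    x, v = x0, 0.0
    for _ in range(steps):
        v = beta * v + lr * grad(x)
        x -= v * min(1.0, cost(x))   # damping vanishes with the cost
    return x

# Toy problem: minimum at x = 3 with zero cost there.
cost = lambda x: (x - 3.0) ** 2
grad = lambda x: 2.0 * (x - 3.0)
x_final = momentum_with_cost_damping(grad, cost, 0.0)
```

Far from the minimum the cost exceeds 1 and the update is plain momentum; near the minimum the shrinking cost throttles the accumulated velocity, which is the intended cure for the non-stop problem.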


2020 ◽  
Vol 18 (02) ◽  
pp. 2050006 ◽  
Author(s):  
Alexsandro Oliveira Alexandrino ◽  
Carla Negri Lintzmayer ◽  
Zanoni Dias

One of the main problems in Computational Biology is to find the evolutionary distance among species. In most approaches, such distance only involves rearrangements, which are mutations that alter large pieces of the species’ genome. When we represent genomes as permutations, the problem of transforming one genome into another is equivalent to the problem of Sorting Permutations by Rearrangement Operations. The traditional approach is to consider that any rearrangement has the same probability to happen, and so, the goal is to find a minimum sequence of operations which sorts the permutation. However, studies have shown that some rearrangements are more likely to happen than others, and so a weighted approach is more realistic. In a weighted approach, the goal is to find a sequence which sorts the permutations, such that the cost of that sequence is minimum. This work introduces a new type of cost function, which is related to the amount of fragmentation caused by a rearrangement. We present some results about the lower and upper bounds for the fragmentation-weighted problems and the relation between the unweighted and the fragmentation-weighted approach. Our main results are 2-approximation algorithms for five versions of this problem involving reversals and transpositions. We also give bounds for the diameters concerning these problems and provide an improved approximation factor for simple permutations considering transpositions.
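The objects involved, permutations, rearrangement operations, and cost bounds, are easy to make concrete. The sketch below computes the classic breakpoint count (an ingredient of lower bounds for rearrangement distances) and sorts a permutation with a naive reversal strategy; it is an illustration only, not the 2-approximation algorithms or the fragmentation-weighted cost from the paper:

```python
def breakpoints(perm):
    """Adjacent pairs that are not consecutive integers; breakpoint counts
    underlie the classic lower bounds for rearrangement distances."""
    ext = [0] + list(perm) + [len(perm) + 1]
    return sum(1 for a, b in zip(ext, ext[1:]) if abs(a - b) != 1)

def sort_by_reversals(perm):
    """Naive sorting by reversals: one reversal places value i+1 at
    position i, so at most n-1 reversals are used."""
    p = list(perm)
    ops = []
    for i in range(len(p)):
        j = p.index(i + 1)
        if j != i:
            p[i:j + 1] = reversed(p[i:j + 1])
            ops.append((i, j))
    return p, ops

permutation = [4, 2, 1, 3]
sorted_perm, reversals = sort_by_reversals(permutation)
```

In a fragmentation-weighted setting, each operation would be charged by how many such adjacencies it breaks rather than counted uniformly, which changes which sorting sequence is cheapest.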


2005 ◽  
Vol 133 (6) ◽  
pp. 1710-1726 ◽  
Author(s):  
Milija Zupanski

Abstract A new ensemble-based data assimilation method, named the maximum likelihood ensemble filter (MLEF), is presented. The analysis solution maximizes the likelihood of the posterior probability distribution, obtained by minimization of a cost function that depends on a general nonlinear observation operator. The MLEF belongs to the class of deterministic ensemble filters, since no perturbed observations are employed. As in variational and ensemble data assimilation methods, the cost function is derived using a Gaussian probability density function framework. Like other ensemble data assimilation algorithms, the MLEF produces an estimate of the analysis uncertainty (e.g., analysis error covariance). In addition to the common use of ensembles in the calculation of the forecast error covariance, the ensembles in the MLEF are exploited to efficiently calculate the Hessian preconditioning and the gradient of the cost function. Owing to the superior Hessian preconditioning, two to three iterative minimization steps are sufficient. The MLEF method is well suited for use with highly nonlinear observation operators, at a small additional computational cost for the minimization. The consistent treatment of nonlinear observation operators through optimization is an advantage of the MLEF over other ensemble data assimilation algorithms. The cost of the MLEF is comparable to that of existing ensemble Kalman filter algorithms. The method is directly applicable to most complex forecast models and observation operators. In this paper, the MLEF method is applied to data assimilation with the one-dimensional Korteweg–de Vries–Burgers equation. The tested observation operator is quadratic, in order to make the assimilation problem more challenging. The results illustrate the stability of the MLEF performance, as well as the benefit of the cost function minimization. The improvement is noted in terms of the rms error, as well as the analysis error covariance. The statistics of innovation vectors (observation minus forecast) also indicate a stable performance of the MLEF algorithm. Additional experiments suggest the amplified benefit of targeted observations in ensemble data assimilation.
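The structure of the cost function being minimized, a background term plus an observation term with a nonlinear (here quadratic) observation operator, can be shown in one dimension. All values below are invented, and plain gradient descent stands in for the MLEF's ensemble-preconditioned minimization; this sketches the cost function only, not the filter:

```python
def assimilation_cost(x, x_background=1.0, y_obs=4.2, b_var=1.0, r_var=0.1):
    """1-D variational cost: background term plus observation term with a
    quadratic observation operator h(x) = x**2 (toy values; this sketches
    the cost function's structure, not the MLEF algorithm itself)."""
    return (0.5 * (x - x_background) ** 2 / b_var
            + 0.5 * (x * x - y_obs) ** 2 / r_var)

def minimize_1d(cost, x0, lr=0.01, steps=500, eps=1e-6):
    """Plain gradient descent with a finite-difference gradient."""
    x = x0
    for _ in range(steps):
        g = (cost(x + eps) - cost(x - eps)) / (2.0 * eps)
        x -= lr * g
    return x

analysis = minimize_1d(assimilation_cost, 1.5)
```

The quadratic operator makes the observation term non-Gaussian in x, which is precisely why an optimization-based filter like the MLEF, rather than a linear Kalman update, handles it consistently.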

