Image Registration with Automatic Computation of Gradients

2008 ◽  
Author(s):  
Eli Kahn ◽  
Lawrence Staib

Many image registration algorithms are formulated as optimization problems with a gradient descent based solver. One difficulty with designing and implementing such methods is the implementation of the gradient computation, a process that can be time-consuming and error-prone. In addition, some objective functions do not have gradients that can be expressed in symbolic form. Automatic differentiation (AD) is useful for computing gradients of complicated objective functions: it moves the burden of computing gradients from the programmer to the computer. So far, AD has not been exploited for use in image registration. This paper describes a software library the authors have developed to automate the process of computing gradients of registration objective functions. This can ease the work of registration designers and potentially make it easier to design better registration algorithms.
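The authors' library predates today's deep-learning AD toolchains, but the core idea can be illustrated with one of them. The following sketch uses PyTorch autograd (a stand-in chosen here, not the authors' library) to obtain the gradient of a sum-of-squared-differences registration objective with respect to a translation parameter, on toy image data:

```python
import torch
import torch.nn.functional as F

# Toy fixed and moving images (hypothetical data), shape (N, C, H, W).
fixed = torch.rand(1, 1, 64, 64)
moving = torch.rand(1, 1, 64, 64)

# Translation parameters whose gradient we want.
t = torch.zeros(2, requires_grad=True)   # (tx, ty) in normalized coordinates

def ssd_objective(t):
    # 2x3 affine matrix encoding a pure translation of the moving image.
    theta = torch.stack([
        torch.cat([torch.tensor([1.0, 0.0]), t[:1]]),
        torch.cat([torch.tensor([0.0, 1.0]), t[1:]]),
    ]).unsqueeze(0)
    grid = F.affine_grid(theta, list(fixed.shape), align_corners=False)
    warped = F.grid_sample(moving, grid, align_corners=False)
    return ((warped - fixed) ** 2).sum()  # sum-of-squared-differences criterion

loss = ssd_objective(t)
loss.backward()    # autograd supplies d(loss)/dt; no hand-derived gradient needed
print(t.grad)
```

The gradient returned in `t.grad` can then drive any gradient descent based registration loop, which is exactly the burden the paper's library aims to lift from the programmer.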

2019 ◽  
Vol 485 (1) ◽  
pp. 15-18
Author(s):  
S. V. Guminov ◽  
Yu. E. Nesterov ◽  
P. E. Dvurechensky ◽  
A. V. Gasnikov

In this paper, a new variant of accelerated gradient descent is proposed. The proposed method does not require any information about the objective function, uses exact line search for practical acceleration of convergence, converges according to the well-known lower bounds for both convex and non-convex objective functions, and possesses primal-dual properties. We also provide a universal version of this method, which converges according to the known lower bounds for both smooth and non-smooth problems.
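The paper's pseudocode is not reproduced here, but the exact-line-search ingredient is easy to illustrate on a quadratic, where the search has a closed form. The sketch below is plain gradient descent with exact line search (the accelerated variant additionally mixes in a Nesterov-style extrapolation step, which is not reproduced); the matrix and dimensions are assumptions:

```python
import numpy as np

# Toy strongly convex quadratic: f(x) = 0.5 x^T A x - b^T x  (hypothetical data).
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 20))
A = M @ M.T + np.eye(20)          # symmetric positive definite
b = rng.standard_normal(20)

def grad(x):
    return A @ x - b

def exact_line_search(x, d):
    # For a quadratic, argmin_t f(x + t*d) has this closed form.
    return -(grad(x) @ d) / (d @ A @ d)

x = np.zeros(20)
for _ in range(200):
    d = -grad(x)                   # steepest-descent direction
    if np.linalg.norm(d) < 1e-10:
        break
    x = x + exact_line_search(x, d) * d

print("residual gradient norm:", np.linalg.norm(grad(x)))
```

Because the step size comes from the line search rather than from a Lipschitz constant, no parameter of the objective function needs to be known in advance, which is the property the abstract emphasizes.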


Author(s):  
Pengfei (Taylor) Li ◽  
Peirong (Slade) Wang ◽  
Farzana Chowdhury ◽  
Li Zhang

Traditional formulations for transportation optimization problems mostly build complicating attributes into constraints while keeping the objective functions succinct. A popular solution is Lagrangian decomposition, which relaxes the complicating constraints and then solves iteratively. Although this approach is effective for many problems, it renders other problems intractable. To address this issue, this paper presents an alternative formulation for transportation optimization problems in which the complicating attributes of the target problems are partially or entirely built into the objective function instead of into the constraints. Many complicating constraints in transportation problems, such as road or vehicle capacity constraints or "IF–THEN" type constraints, can be efficiently modeled in dynamic network loading (DNL) models based on the demand–supply equilibrium. After "pre-building" complicating constraints into the objective function, the objective function can be approximated well with customized high-fidelity DNL models. Three types of computing benefits can be achieved in the alternative formulation: (a) the original problem is kept the same; (b) the computing complexity of the new formulation may be significantly reduced because the hard constraints disappear; (c) the efficiency loss on the objective-function side can be mitigated via multiple high-performance computing techniques. Under this new framework, high-fidelity and problem-specific DNL models are critical for maintaining the attributes of the original problems. Therefore, the authors' recent efforts in enhancing the DNL models' fidelity and computing efficiency are also described in the second part of this paper. Finally, a demonstration case study is conducted to validate the new approach.
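The DNL models themselves are problem-specific and are not reproduced here. As a schematic of the general idea of moving a complicating constraint out of the constraint set and into the objective, the toy sketch below replaces a link-capacity constraint with a penalty term; the numbers, the penalty form, and the grid search are illustrative assumptions, not the authors' DNL-based evaluation:

```python
import numpy as np

# Toy assignment example (hypothetical numbers): split a fixed demand across
# two parallel links, with a capacity cap on link 0.
demand, capacity = 10.0, 6.0
free_flow_cost = np.array([1.0, 2.0])

def objective(x0, penalty_weight=100.0):
    x = np.array([x0, demand - x0])
    travel_cost = float(free_flow_cost @ x)
    # The complicating constraint x[0] <= capacity is "pre-built" into the
    # objective as a penalty instead of being kept as a hard constraint.
    violation = max(0.0, x[0] - capacity)
    return travel_cost + penalty_weight * violation ** 2

# With the hard constraint gone, any unconstrained search applies; here a
# crude grid search over the single remaining decision variable.
grid = np.linspace(0.0, demand, 1001)
best = min(grid, key=objective)
print("flow on link 0:", best)
```

In the paper, the role of this explicit penalty is instead played by high-fidelity DNL models that evaluate the objective while respecting the built-in constraints.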


2021 ◽  
Vol 26 (2) ◽  
pp. 27
Author(s):  
Alejandro Castellanos-Alvarez ◽  
Laura Cruz-Reyes ◽  
Eduardo Fernandez ◽  
Nelson Rangel-Valdez ◽  
Claudia Gómez-Santillán ◽  
...  

Most real-world problems require the simultaneous optimization of multiple objective functions, which can conflict with each other. The environment of these problems usually involves imprecise information derived from inaccurate measurements or from variability in decision-makers' (DMs') judgments and beliefs, which can lead to unsatisfactory solutions. Imperfect knowledge can be present in the objective functions, the constraints, or the decision-maker's preferences. These optimization problems have been solved using various techniques, such as multi-objective evolutionary algorithms (MOEAs). This paper proposes a new MOEA called NSGA-III-P (non-dominated sorting genetic algorithm III with preferences). The main characteristic of NSGA-III-P is an ordinal multi-criteria classification method for preference integration that guides the algorithm to the region of interest given by the decision-maker's preferences. In addition, the use of interval analysis allows preferences to be expressed with imprecision. The experiments contrasted several versions of the proposed method with the original NSGA-III to analyze the different selective pressures induced by the DM's preferences. In these experiments, the algorithms solved three-objective instances of the DTLZ problem. The results showed a better approximation to the DM's region of interest when the DM's preferences are considered.
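The ordinal classification method itself is specific to NSGA-III-P, but the interval-based expression of imprecise preferences can be illustrated generically. The sketch below uses a standard possibility-degree comparison of intervals to decide whether a solution can be credibly assigned to a preference category; the operator, thresholds, and numbers are assumptions, not the authors' exact procedure:

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def possibility_ge(self, other: "Interval") -> float:
        # Degree to which this interval is >= another (a common interval-order
        # measure; the exact operator used in NSGA-III-P may differ).
        width = (self.hi - self.lo) + (other.hi - other.lo)
        if width == 0.0:
            return 1.0 if self.lo >= other.lo else 0.0
        return min(1.0, max(0.0, (self.hi - other.lo) / width))

# Imprecise DM judgment: a solution's "satisfaction" versus a category threshold.
satisfaction = Interval(0.55, 0.70)
threshold_satisfactory = Interval(0.60, 0.65)

credibility = satisfaction.possibility_ge(threshold_satisfactory)
label = "satisfactory" if credibility >= 0.7 else "not classified"
print("credibility:", credibility, "->", label)
```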


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Yaoxin Li ◽  
Jing Liu ◽  
Guozheng Lin ◽  
Yueyuan Hou ◽  
Muyun Mou ◽  
...  

In computer science there exist a large number of optimization problems defined on graphs, in which the goal is to find a best node-state configuration or a network structure such that a designed objective function is optimized under some constraints. However, these problems are notoriously hard to solve because most of them are NP-hard or NP-complete. Although traditional general methods such as simulated annealing (SA) and genetic algorithms (GA) have been applied to these hard problems, their accuracy and time consumption are often unsatisfactory in practice. In this work, we propose a simple, fast, and general algorithm framework based on the automatic differentiation techniques provided by deep learning frameworks. By introducing the Gumbel-softmax technique, we can optimize the objective function directly by gradient descent regardless of the discrete nature of the variables. We also introduce an evolution strategy into a parallel version of our algorithm. We test our algorithm on four representative optimization problems on graphs: modularity optimization from network science, the Sherrington–Kirkpatrick (SK) model from statistical physics, the maximum independent set (MIS) and minimum vertex cover (MVC) problems from combinatorial optimization, and the influence maximization problem from computational social science. High-quality solutions can be obtained in much less time than with the traditional approaches.
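The released code is not reproduced here, but the central trick is compact enough to sketch. The toy example below applies the Gumbel-softmax relaxation in PyTorch to a maximum-independent-set instance on a 6-node cycle, optimizing a relaxed objective directly by gradient descent; the graph, the penalty weight, and the hyperparameters are assumptions:

```python
import torch
import torch.nn.functional as F

# Toy graph (hypothetical): edges of a 6-node cycle.
edges = torch.tensor([[0, 1], [1, 2], [2, 3], [3, 4], [4, 5], [5, 0]])
n_nodes, beta = 6, 2.0            # beta penalizes edges with both endpoints selected

# Learnable logits over the two node states {not selected, selected}.
logits = torch.zeros(n_nodes, 2, requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)

for step in range(500):
    # Gumbel-softmax yields a differentiable, near-one-hot sample per node, so the
    # discrete objective can be driven by ordinary gradient descent.
    sample = F.gumbel_softmax(logits, tau=1.0, hard=False)
    selected = sample[:, 1]                       # soft indicator of "in the set"
    conflict = (selected[edges[:, 0]] * selected[edges[:, 1]]).sum()
    loss = -selected.sum() + beta * conflict      # relaxed MIS objective (minimized)
    opt.zero_grad()
    loss.backward()
    opt.step()

solution = logits.argmax(dim=1)                   # discretize after training
print("independent-set indicator:", solution.tolist())
```

The same pattern extends to the other problems in the paper by swapping in the corresponding objective (modularity, SK energy, vertex-cover size, or influence spread) and, for the parallel version, running many such relaxations at once.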


Geophysics ◽  
2019 ◽  
Vol 84 (5) ◽  
pp. R767-R781 ◽  
Author(s):  
Mattia Aleardi ◽  
Silvio Pierini ◽  
Angelo Sajeva

We have compared the performance of six recently developed global optimization algorithms: the imperialist competitive algorithm, the firefly algorithm (FA), the water cycle algorithm (WCA), the whale optimization algorithm (WOA), the fireworks algorithm (FWA), and quantum particle swarm optimization (QPSO). These methods were introduced in the past few years and have found very limited or no application to geophysical exploration problems thus far. We benchmark the algorithms' results against particle swarm optimization (PSO), a popular and well-established global search method. In particular, we are interested in assessing the exploration and exploitation capabilities of each method as the dimension of the model space increases. First, we test the different algorithms on two multiminima and two convex analytic objective functions. Then, we compare them on residual statics corrections and 1D elastic full-waveform inversion, which are highly nonlinear geophysical optimization problems. Our results demonstrate that FA, FWA, and WOA have optimal exploration capabilities, because they outperform the other approaches on optimization problems with multiminima objective functions. In contrast, QPSO and PSO have good exploitation capabilities, because they easily solve ill-conditioned optimizations characterized by a nearly flat valley in the objective function. QPSO, PSO, and WCA offer a good compromise between exploitation and exploration.
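As context for the baseline, the following is a minimal, textbook particle swarm optimization loop on a convex test function; the swarm size, coefficients, and objective are assumptions and not the tuned settings used in the paper's benchmarks:

```python
import numpy as np

def sphere(x):
    # Simple convex test objective; the paper also uses multiminima functions.
    return np.sum(x ** 2, axis=-1)

rng = np.random.default_rng(1)
n_particles, dim, iters = 30, 10, 200
w, c1, c2 = 0.7, 1.5, 1.5                # inertia and acceleration coefficients

pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), sphere(pos)
gbest = pbest[np.argmin(pbest_val)]

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = sphere(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("best objective found:", sphere(gbest))
```

The competing algorithms differ mainly in how they generate new candidate positions, which is what drives the exploration/exploitation trade-off the study measures.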


2021 ◽  
Vol 11 (5) ◽  
pp. 2042
Author(s):  
Hadi Givi ◽  
Mohammad Dehghani ◽  
Zeinab Montazeri ◽  
Ruben Morales-Menendez ◽  
Ricardo A. Ramirez-Mendoza ◽  
...  

Optimization problems in various fields of science and engineering should be solved using appropriate methods. Stochastic search-based optimization algorithms are a widely used approach for solving such problems. In this paper, a new optimization algorithm called "the good, the bad, and the ugly" optimizer (GBUO) is introduced, based on the effect of three members of the population on the population updates. In the proposed GBUO, the population moves towards the good member and avoids the bad member. A new member, called the ugly member, is also introduced; it plays an essential role in updating the population by leading it, in a challenging move, in a direction contrary to the population's overall movement. GBUO is mathematically modeled, and its equations are presented. GBUO is implemented on a set of twenty-three standard objective functions to evaluate the proposed optimizer's performance in solving optimization problems. These standard objective functions can be classified into three groups: unimodal, high-dimensional multimodal, and fixed-dimension multimodal functions. A further comparison was carried out against eight well-known optimization algorithms. The simulation results show that the proposed algorithm performs well in solving different optimization problems and is superior to the mentioned optimization algorithms.
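The GBUO update equations are given in the paper and are not reproduced here. Purely as a schematic of the attract/repel behavior described above, the sketch below moves a population toward its current best ("good") member and away from its worst ("bad") member on a standard unimodal test function; the coefficients are assumptions, and the ugly member's contrary move is omitted:

```python
import numpy as np

def sphere(x):
    # Unimodal test function, one of the standard benchmark families mentioned above.
    return np.sum(x ** 2, axis=-1)

rng = np.random.default_rng(2)
pop = rng.uniform(-10, 10, (20, 5))       # population of candidate solutions

for _ in range(200):
    fitness = sphere(pop)
    good = pop[np.argmin(fitness)]        # best member: attracts the population
    bad = pop[np.argmax(fitness)]         # worst member: repels the population
    r1, r2 = rng.random(pop.shape), rng.random(pop.shape)
    pop = pop + r1 * (good - pop) - r2 * (bad - pop)

print("best objective found:", sphere(pop).min())
```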


2014 ◽  
Vol 984-985 ◽  
pp. 419-424
Author(s):  
P. Sabarinath ◽  
M.R. Thansekhar ◽  
R. Saravanan

Arriving at optimal solutions is one of the important tasks in engineering design. Many real-world design optimization problems involve multiple conflicting objectives, and the design variables are continuous or discrete in nature. Traditionally, the weighted-sum method is preferred for solving multi-objective optimization problems: all the objective functions are converted into a single objective function by assigning a suitable weight to each objective. The main drawback lies in the selection of proper weights. More recently, evolutionary algorithms have been used to find the non-dominated optimal solutions, known as the Pareto-optimal front, in a single run. In recent years, the Non-dominated Sorting Genetic Algorithm II (NSGA-II) has found increasing application in solving multi-objective problems with conflicting objectives because of its low computational requirements, elitism, and parameter-less sharing approach. In this work, we propose a methodology that integrates NSGA-II and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) for solving a two-bar truss problem. NSGA-II searches for the Pareto set, in which the two-bar truss is evaluated in terms of minimizing the weight of the truss and minimizing the total displacement of the joint under the given load. Subsequently, TOPSIS selects the best compromise solution.
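Once NSGA-II has produced the Pareto set, the TOPSIS step is a standard ranking computation. The sketch below applies it to a small, hypothetical front of (truss weight, joint displacement) pairs with assumed equal criterion weights; the numbers are illustrative, not results from the paper:

```python
import numpy as np

# Hypothetical Pareto front for the two-bar truss: columns are
# (truss weight, joint displacement); both objectives are minimized.
pareto = np.array([
    [12.0, 0.9],
    [15.0, 0.6],
    [20.0, 0.4],
    [28.0, 0.3],
])
weights = np.array([0.5, 0.5])            # assumed equal importance

# TOPSIS: normalize, weight, then rank by relative closeness to the ideal point.
norm = pareto / np.linalg.norm(pareto, axis=0)
v = norm * weights
ideal, anti_ideal = v.min(axis=0), v.max(axis=0)   # min, since both are costs
d_ideal = np.linalg.norm(v - ideal, axis=1)
d_anti = np.linalg.norm(v - anti_ideal, axis=1)
closeness = d_anti / (d_ideal + d_anti)

best = np.argmax(closeness)
print("best compromise design:", pareto[best])
```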


2018 ◽  
Vol 30 (7) ◽  
pp. 2005-2023 ◽  
Author(s):  
Tomoumi Takase ◽  
Satoshi Oyama ◽  
Masahito Kurihara

We present a comprehensive framework of search methods, such as simulated annealing and batch training, for solving nonconvex optimization problems. These methods search a wider range by gradually decreasing the randomness added to the standard gradient descent method. The formulation that we define on the basis of this framework can be directly applied to neural network training. This produces an effective approach that gradually increases batch size during training. We also explain why large batch training degrades generalization performance, which previous studies have not clarified.
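The gradual batch-size increase is straightforward to realize in any training loop. The sketch below shows it in PyTorch on toy regression data; the schedule, model, and learning rate are assumptions rather than the paper's experimental settings:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical regression data and a tiny model.
X, y = torch.randn(2048, 10), torch.randn(2048, 1)
model = torch.nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = torch.nn.MSELoss()

# Gradually increase the batch size: larger batches mean less gradient noise,
# playing the role of the decreasing randomness in annealing-like search.
for epoch, batch_size in enumerate([32, 64, 128, 256, 512]):
    loader = DataLoader(TensorDataset(X, y), batch_size=batch_size, shuffle=True)
    for xb, yb in loader:
        opt.zero_grad()
        loss_fn(model(xb), yb).backward()
        opt.step()
    print(f"epoch {epoch}: batch_size={batch_size}")
```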


2021 ◽  
Author(s):  
Tianyi Liu ◽  
Zhehui Chen ◽  
Enlu Zhou ◽  
Tuo Zhao

The momentum stochastic gradient descent (MSGD) algorithm has been widely applied to many nonconvex optimization problems in machine learning (e.g., training deep neural networks, variational Bayesian inference, etc.). Despite its empirical success, there is still a lack of theoretical understanding of the convergence properties of MSGD. To fill this gap, we propose to analyze the algorithmic behavior of MSGD by diffusion approximations for nonconvex optimization problems with strict saddle points and isolated local optima. Our study shows that momentum helps escape from saddle points but hurts the convergence within the neighborhood of optima (in the absence of step-size annealing or momentum annealing). Our theoretical discovery partially corroborates the empirical success of MSGD in training deep neural networks.
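The qualitative behavior analyzed in the paper can be observed with a minimal MSGD loop on a toy strict-saddle function; the step size, momentum coefficient, and noise level below are assumptions chosen only for illustration:

```python
import numpy as np

def grad(x):
    # Gradient of f(x1, x2) = x1**2 - x2**2, which has a strict saddle at the origin.
    return np.array([2.0 * x[0], -2.0 * x[1]])

rng = np.random.default_rng(3)
x = np.array([1e-3, 1e-3])        # start near the saddle point
v = np.zeros(2)
lr, momentum, noise = 0.01, 0.9, 0.01

for _ in range(200):
    g = grad(x) + noise * rng.standard_normal(2)   # stochastic gradient
    v = momentum * v + g                           # momentum accumulation
    x = x - lr * v                                 # MSGD update

print("iterate after 200 steps:", x)  # x[1] has grown: momentum helps leave the saddle
```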

