COMPARISON OF GLOBAL SEARCH OPTIMIZATION ALGORITHMS TO PERFORM THE VELOCITY ANALYSIS WITH A CONVERTED WAVE MOVEOUT EQUATION

Author(s):  
Nelson Ricardo Coelho Flores Zuniga

Although previous works have studied the accuracy and the objective functions of several nonhyperbolic multiparametric travel-time approximations for velocity analysis, they lack tests of different optimization algorithms and of how these algorithms influence accuracy and processing time. Once many approximations had been tested and the multimodal one with the best accuracy had been identified, it became possible to perform velocity analysis with different global search optimization algorithms. The minimization of the misfit between the curve calculated with the converted wave moveout equation and the observed curve can be carried out with each optimization algorithm selected in this work. The travel-time curves tested here are the PP and PS reflection events from the interface at the top of an offshore ultra-deep reservoir. After the inversion routine has been performed, it is possible to determine the processing time and the accuracy of each optimization algorithm for this kind of problem.
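
The sketch below illustrates this kind of velocity analysis with a global search optimizer: a travel-time misfit is minimized over broad parameter bounds. The moveout function is a plain hyperbolic placeholder, not the paper's converted-wave equation, and all model values and bounds are illustrative assumptions.

```python
# Minimal sketch of travel-time velocity analysis with a global search
# optimizer (SciPy's differential evolution). The moveout function is a
# placeholder hyperbola, NOT the converted-wave moveout equation of the paper.
import numpy as np
from scipy.optimize import differential_evolution

def moveout(offsets, t0, v):
    """Placeholder two-parameter moveout: t(x) = sqrt(t0^2 + (x/v)^2)."""
    return np.sqrt(t0**2 + (offsets / v) ** 2)

# Synthetic "observed" travel-time curve (true t0 = 2.0 s, v = 1800 m/s).
offsets = np.linspace(0.0, 6000.0, 121)                  # offsets in metres
t_obs = moveout(offsets, 2.0, 1800.0)
t_obs += np.random.default_rng(0).normal(0.0, 0.002, t_obs.size)  # picking noise

def objective(params):
    """L2 misfit between the calculated and the observed curves."""
    t0, v = params
    return np.sum((moveout(offsets, t0, v) - t_obs) ** 2)

# Global search over broad, assumed bounds for t0 (s) and velocity (m/s).
result = differential_evolution(objective,
                                bounds=[(0.5, 5.0), (1000.0, 4000.0)], seed=1)
print("estimated t0, v:", result.x, "misfit:", result.fun)
```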

2019, Vol 7 (4), pp. 321-328
Author(s):  
Nelson Ricardo Flores Zuniga

In the last decade, many works have compared nonhyperbolic multiparametric travel-time approximations for velocity analysis, examining, among other aspects, their accuracy and their objective functions. However, no previous work has compared optimization algorithms for the inversion procedure in terms of processing time and accuracy. As the shifted hyperbola showed the best results among the unimodal approximations in previous works, it was selected for a comparison of five local search optimization algorithms. The accuracy of each algorithm was assessed by minimizing the misfit between the calculated curve and the observed curve. The travel-time curves tested here are conventional (PP) and converted wave (PS) reflection events from an offshore model. With this set of tests, it is possible to define which optimization algorithm provides the most reliable result, in terms of processing time and accuracy, when used with the shifted hyperbola equation.
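
A minimal sketch of this local-search fit is given below. It assumes one common published form of the shifted hyperbola, written in terms of a zero-offset time, an NMO velocity and a shift parameter S; the synthetic model values and the Nelder-Mead starting point are illustrative choices, not the paper's offshore model.

```python
# Hedged sketch: fitting a shifted-hyperbola moveout with a local search
# (Nelder-Mead). The equation is one common published form of the shifted
# hyperbola; parameters are illustrative only.
import numpy as np
from scipy.optimize import minimize

def shifted_hyperbola(x, t0, v, s):
    return t0 * (1.0 - 1.0 / s) + np.sqrt((t0 / s) ** 2 + x**2 / (s * v**2))

offsets = np.linspace(0.0, 4000.0, 81)
t_obs = shifted_hyperbola(offsets, 1.5, 2000.0, 1.2)     # synthetic PP curve

def misfit(params):
    t0, v, s = params
    if t0 <= 0 or v <= 0 or s <= 0:
        return 1e12                                      # penalize non-physical values
    return np.sum((shifted_hyperbola(offsets, t0, v, s) - t_obs) ** 2)

res = minimize(misfit, x0=[1.0, 1500.0, 1.0], method="Nelder-Mead")
print("t0, v, S:", res.x, "misfit:", res.fun)
```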


2018, Vol 36 (4), pp. 1
Author(s):  
Nelson Ricardo Coelho Flores Zuniga
Fernando Brenha Ribeiro
Viatcheslav Ivanovich Priimenko

Velocity analysis is an important step in seismic processing. As working conditions and geological structures become more complex, more elaborate travel-time approximations are developed to provide a better structural characterization. With this growing complexity, approximations have been developed to characterize different factors responsible for the nonhyperbolicity, sometimes using an additional parameter. Many nonhyperbolic multiparametric travel-time approximations have been developed, and their complexities vary strongly from one to another. In previous works, it was observed that some approximations present a unimodal behavior while others show a multimodal behavior. However, one specific kind of approximation showed both statistical distributions, unimodal and multimodal, in distinct situations. To find the factor responsible for this variation in the probabilistic distribution behavior, it was necessary to test different theoretical models and understand which structural variation interferes with the topography of the objective function.
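
One simple way to inspect this topography is to evaluate the misfit over a coarse parameter grid and look for one or several separated minima, as sketched below. The hyperbolic moveout and the grid ranges are assumptions for illustration only, not the approximations or models studied in the paper.

```python
# Illustrative sketch: mapping the objective-function topography over a
# parameter grid to inspect unimodal versus multimodal behaviour.
import numpy as np

def moveout(x, t0, v):
    return np.sqrt(t0**2 + (x / v) ** 2)

offsets = np.linspace(0.0, 5000.0, 101)
t_obs = moveout(offsets, 2.0, 2200.0)                    # synthetic observed curve

t0_grid = np.linspace(1.0, 3.0, 200)
v_grid = np.linspace(1200.0, 3500.0, 200)
misfit = np.array([[np.sum((moveout(offsets, t0, v) - t_obs) ** 2)
                    for v in v_grid] for t0 in t0_grid])

# A single interior minimum suggests unimodal behaviour for this model;
# several separated minima would indicate multimodality.
i, j = np.unravel_index(np.argmin(misfit), misfit.shape)
print("grid minimum at t0 =", t0_grid[i], "and v =", v_grid[j])
```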


Author(s):  
Arnulf Jentzen
Benno Kuckuck
Ariel Neufeld
Philippe von Wurstemberger

Abstract Stochastic gradient descent (SGD) optimization algorithms are key ingredients in a series of machine learning applications. In this article we perform a rigorous strong error analysis for SGD optimization algorithms. In particular, we prove for every arbitrarily small $\varepsilon \in (0,\infty )$ and every arbitrarily large $p \in (0,\infty )$ that the considered SGD optimization algorithm converges in the strong $L^p$-sense with order $1/2-\varepsilon $ to the global minimum of the objective function of the considered stochastic optimization problem under standard convexity-type assumptions on the objective function and relaxed assumptions on the moments of the stochastic errors appearing in the employed SGD optimization algorithm. The key ideas in our convergence proof are, first, to employ techniques from the theory of Lyapunov-type functions for dynamical systems to develop a general convergence machinery for SGD optimization algorithms based on such functions, then, to apply this general machinery to concrete Lyapunov-type functions with polynomial structures and, thereafter, to perform an induction argument along the powers appearing in the Lyapunov-type functions in order to achieve for every arbitrarily large $ p \in (0,\infty ) $ strong $ L^p $-convergence rates.
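
In schematic form (with notation chosen here for illustration rather than taken from the article), such a strong $L^p$-estimate with order $1/2-\varepsilon $ reads $\big(\mathbb{E}\big[\lVert \Theta_n-\vartheta \rVert^p\big]\big)^{1/p}\leq C\, n^{-(1/2-\varepsilon )}$ for all $n\in \mathbb{N}$, where $\Theta_n$ denotes the $n$-th SGD iterate, $\vartheta$ the global minimum of the objective function, and $C\in (0,\infty )$ a constant depending on $p$ and $\varepsilon$.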


1993, Vol 115 (4), pp. 978-987
Author(s):  
K. Kurien Issac

This paper describes a nondifferentiable optimization (NDO) algorithm for solving constrained minimax linkage synthesis. The use of a proper characterization of minima makes the algorithm superior to smooth optimization algorithms for minimax linkage synthesis, and the concept of following the curved ravines of the objective function makes it very effective. The results obtained are superior to some of the reported solutions and demonstrate the algorithm’s ability to consistently arrive at actual minima from widely separated starting points. The results indicate that Chebyshev’s characterization is not a necessary condition for minimax linkages, whereas the characterization used in the algorithm is a proper necessary condition.
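
As a minimal illustration of why such objectives defeat smooth methods (this is not the paper's NDO algorithm): a minimax objective of the form F(p) = max_i |e_i(p)| is nondifferentiable wherever the index attaining the maximum changes. The sketch below fits a straight line to noisy data in the minimax (Chebyshev) sense, with a generic derivative-free routine standing in for a dedicated NDO method.

```python
# Hedged illustration of a minimax (Chebyshev) fit with a nonsmooth objective.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
xs = np.linspace(0.0, 1.0, 50)
ys = 2.0 * xs + 0.5 + rng.normal(0.0, 0.05, xs.size)

def minimax_objective(p):
    a, b = p
    return np.max(np.abs(a * xs + b - ys))      # maximum absolute error

res = minimize(minimax_objective, x0=[0.0, 0.0], method="Nelder-Mead")
print("minimax fit a, b:", res.x, "max error:", res.fun)
```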


Author(s):  
Łukasz Knypiński

Purpose The purpose of this paper is to carry out an efficiency analysis of selected metaheuristic algorithms (MAs), based on the investigation of analytical test functions and of optimization processes for a permanent magnet motor.
Design/methodology/approach A comparative performance analysis was conducted for the selected MAs. Optimization calculations were performed for the genetic algorithm (GA), particle swarm optimization (PSO), the bat algorithm, the cuckoo search algorithm (CS) and the only-best-individual algorithm (OBI). All of the optimization algorithms were implemented as computer scripts. Next, all optimization procedures were applied to search for the optimal design of a line-start permanent magnet synchronous motor by means of a multi-objective function.
Findings The research results show that the best statistical efficiency (mean objective function and standard deviation [SD]) is obtained for the PSO and CS algorithms, while the best results over several runs are obtained for PSO and GA. The type of optimization algorithm should be selected taking into account the duration of a single optimization process; in the case of time-consuming processes, algorithms with a low SD should be used.
Originality/value The newly proposed simple nondeterministic algorithm can also be applied to simple optimization calculations. On the basis of the presented simulation results, it is possible to assess the quality of the compared MAs.
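
The statistical comparison protocol (mean and standard deviation of the final objective over repeated runs on analytical test functions) can be sketched as below. The two SciPy optimizers are stand-ins chosen for the sake of a runnable example; the paper itself compares GA, PSO, the bat algorithm, cuckoo search and OBI on the motor design problem.

```python
# Hedged sketch of the run-statistics protocol with stand-in optimizers.
import numpy as np
from scipy.optimize import differential_evolution, dual_annealing

def rastrigin(x):
    x = np.asarray(x)
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

bounds = [(-5.12, 5.12)] * 5
runs = 10
for name, solver in [("differential evolution", differential_evolution),
                     ("dual annealing", dual_annealing)]:
    finals = [solver(rastrigin, bounds, seed=k).fun for k in range(runs)]
    print(f"{name}: mean = {np.mean(finals):.4f}, SD = {np.std(finals):.4f}")
```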


2019, Vol 37 (2)
Author(s):  
Nelson Ricardo Flores Zuniga
Fernando Brenha Ribeiro
Viacheslav Ivanovich Priimenko

ABSTRACT. Several nonhyperbolic multiparametric travel-time approximations have been tested for velocity analysis in the last decade. Previous works studied not only their accuracy but also their complexity with respect to the topology of the objective function and the statistical distribution. However, the variation of the norm has been poorly studied. As some approximations presented very good results, it is important to understand the behavior of the L1-norm compared with the L2-norm. Therefore, the approximation that has shown the best set of results so far was selected and tested with both the L2- and L1-norms, in order to observe its behavior for a PP and a PS reflection event. With this set of information, it is possible to evaluate what kind of improvement the L1-norm can bring to this kind of analysis.
Keywords: objective function, nonhyperbolic, probability distribution.
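
A minimal sketch of the L2- versus L1-norm objective functions for travel-time residuals is given below; the hyperbolic moveout and the single outlier added to the observed curve are illustrative assumptions. In such a setting the L1 misfit is typically less sensitive to the outlying pick than the L2 misfit.

```python
# Hedged sketch: L2 versus L1 misfit for fitting a placeholder moveout curve.
import numpy as np
from scipy.optimize import minimize

def moveout(x, t0, v):
    return np.sqrt(t0**2 + (x / v) ** 2)

offsets = np.linspace(0.0, 4000.0, 81)
t_obs = moveout(offsets, 1.8, 2100.0)
t_obs[40] += 0.3                                  # one strong outlier in the picks

def l2_misfit(p):
    return np.sum((moveout(offsets, *p) - t_obs) ** 2)

def l1_misfit(p):
    return np.sum(np.abs(moveout(offsets, *p) - t_obs))

for name, fun in [("L2", l2_misfit), ("L1", l1_misfit)]:
    res = minimize(fun, x0=[1.0, 1500.0], method="Nelder-Mead")
    print(name, "estimate (t0, v):", res.x)
```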


Author(s):  
YUPING WANG

In this paper, we propose a uniform enhancement approach called the smoothing function method, which can cooperate with any optimization algorithm and improve its performance. The method has two phases. In the first phase, a smoothing function is constructed by using a properly truncated Fourier series. It preserves the overall shape of the original objective function but eliminates many of its local optima, and therefore approximates the objective function well. The optimal solution of the smoothing function is then searched for by an optimization algorithm (e.g. a traditional algorithm or an evolutionary algorithm), so that the search becomes much easier. In the second phase, we switch to optimizing the original function for some iterations, using the best solution(s) obtained in phase 1 as the initial point (population). Thereafter, the smoothing function is updated in order to approximate the original function more accurately. These two phases are repeated until the best solutions obtained in several successive second phases show no obvious improvement. In this manner, the search for the optimal solution becomes much easier for any optimization algorithm. Finally, we use the proposed approach to enhance two typical optimization algorithms: the Powell direct-search algorithm and a simple genetic algorithm. Simulation results on ten challenging benchmarks indicate that the proposed approach can effectively improve the performance of these two algorithms.
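
A one-dimensional sketch of the two-phase idea is shown below: a smoothing function is built from a truncated Fourier series of the sampled objective, its minimum is located (here simply on a dense grid), and the original function is then refined from that point. The test function and the truncation order are illustrative choices, not those of the paper.

```python
# Hedged one-dimensional sketch of the smoothing-function idea.
import numpy as np
from scipy.optimize import minimize

def f(x):
    """Multimodal test objective on [0, 2*pi]."""
    return np.sin(3.0 * x) + 0.1 * x**2 + 0.3 * np.cos(11.0 * x)

# Phase 1: sample f, keep only the lowest Fourier modes, and take the best
# point of the resulting smooth approximation on a dense grid.
n = 256
xs = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
coeffs = np.fft.rfft(f(xs))
coeffs[6:] = 0.0                       # truncate: keep modes 0..5 only
smooth = np.fft.irfft(coeffs, n)
x_start = xs[np.argmin(smooth)]        # best point of the smoothed landscape

# Phase 2: switch to the original objective, starting from that point.
res = minimize(lambda x: f(x[0]), x0=[x_start], method="Nelder-Mead")
print("start from smoothing phase:", x_start, "-> refined minimum:", res.x[0])
```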


2021, Vol 11 (10), pp. 4382
Author(s):  
Ali Sadeghi
Sajjad Amiri Doumari
Mohammad Dehghani
Zeinab Montazeri
Pavel Trojovský
...  

Optimization is the science of finding the best solution among the available solutions, subject to the limitations of an optimization problem. Optimization algorithms have been introduced as efficient tools for solving such problems and are designed based on various natural phenomena, the behavior and lifestyle of living beings, physical laws, rules of games, and so on. In this paper, a new optimization algorithm called the good and bad groups-based optimizer (GBGBO) is introduced to solve various optimization problems. In GBGBO, population members are updated under the influence of two groups, named the good group and the bad group. The good group consists of a certain number of population members with better fitness than the other members, and the bad group consists of a number of population members with worse fitness than the other members. GBGBO is mathematically modeled, and its performance in solving optimization problems was tested on a set of twenty-three different objective functions. In addition, for further analysis, the results obtained with the proposed algorithm were compared with eight optimization algorithms: the genetic algorithm (GA), particle swarm optimization (PSO), the gravitational search algorithm (GSA), teaching–learning-based optimization (TLBO), the grey wolf optimizer (GWO), the whale optimization algorithm (WOA), the tunicate swarm algorithm (TSA), and the marine predators algorithm (MPA). The results show that the proposed GBGBO algorithm has a good ability to solve various optimization problems and is more competitive than similar algorithms.
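
The abstract does not give GBGBO's update equations, so the sketch below uses an assumed stand-in rule for how such a two-group influence could be modelled: each member moves toward the mean of the good group and away from the mean of the bad group, with greedy selection.

```python
# Illustrative stand-in for a good-group / bad-group population update
# (not the published GBGBO equations).
import numpy as np

def sphere(x):
    return np.sum(x**2, axis=-1)

rng = np.random.default_rng(0)
pop = rng.uniform(-10.0, 10.0, size=(30, 5))        # 30 members, 5 variables

for _ in range(200):
    fitness = sphere(pop)
    order = np.argsort(fitness)
    good_mean = pop[order[:10]].mean(axis=0)         # best third of the population
    bad_mean = pop[order[-10:]].mean(axis=0)         # worst third of the population
    r1, r2 = rng.random(pop.shape), rng.random(pop.shape)
    candidate = pop + r1 * (good_mean - pop) - r2 * (bad_mean - pop)
    improved = sphere(candidate) < fitness           # greedy selection
    pop[improved] = candidate[improved]

print("best objective value found:", sphere(pop).min())
```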


2021, Vol 104 (2), pp. 003685042110254
Author(s):  
Armaghan Mohsin
Yazan Alsmadi
Ali Arshad Uppal
Sardar Muhammad Gulfam

In this paper, a novel modified optimization algorithm is presented, which combines the Nelder-Mead (NM) method with a gradient-based approach. The well-known Nelder-Mead optimization technique is widely used, but it suffers from convergence issues in higher-dimensional complex problems. The proposed technique addresses two issues of the NM approach. The first is the shape of the simplex, which in NM is reshaped at each iteration according to the objective function; here a fixed simplex shape is used and the simplex is regenerated at each iteration. The second concerns the reflection and expansion steps: in each iteration, NM uses fixed values of the scaling parameter (1 for reflection and 2 for expansion) and replaces the worst point of the simplex with the new point, and in this way searches for the optimum. In the proposed algorithm, the optimal value of this scaling parameter is computed; the centroid of the new simplex is then placed at this optimum point and the simplex is regenerated around this centroid in each iteration, which ensures fast convergence of the proposed technique. The proposed algorithm has been applied to the real-time implementation of a transversal adaptive filter. The application used to demonstrate the performance of the proposed technique is a well-known convex optimization problem with a quadratic cost function. The results show that the proposed technique converges faster than the Nelder-Mead method for lower-dimensional problems and also converges well in higher dimensions, that is, for problems with more filter taps. The proposed technique has also been compared with stochastic techniques such as the LMS and the NLMS (benchmark) techniques and shows good results against LMS. The comparison shows that the modified algorithm guarantees quite acceptable convergence with improved accuracy for higher-dimensional identification problems.
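
One ingredient of this setting can be sketched directly: for the quadratic cost of a transversal (FIR) adaptive filter, the optimal step length along any search direction has a closed form, which is one way a reflection/expansion-type parameter could be optimized instead of fixed. The sketch below illustrates that closed-form step on a filter identification problem; it is not the authors' full modified simplex algorithm, and the system and noise levels are assumed.

```python
# Hedged sketch: exact optimal step length along a descent direction for the
# quadratic cost J(w) = ||d - X w||^2 of a transversal adaptive filter.
import numpy as np

rng = np.random.default_rng(0)
n_taps, n_samples = 8, 500
w_true = rng.normal(size=n_taps)                           # unknown filter taps
x = rng.normal(size=n_samples + n_taps - 1)
X = np.array([x[i:i + n_taps] for i in range(n_samples)])  # regressor matrix
d = X @ w_true + 0.01 * rng.normal(size=n_samples)         # desired signal

def cost(w):
    return np.sum((d - X @ w) ** 2)                        # quadratic identification cost

w = np.zeros(n_taps)
for _ in range(50):
    direction = X.T @ (d - X @ w)                          # descent (negative gradient) direction
    # Closed-form optimal step along `direction` for the quadratic cost:
    alpha = (direction @ direction) / (direction @ (X.T @ (X @ direction)) + 1e-12)
    w = w + alpha * direction

print("final cost:", cost(w), "tap error:", np.linalg.norm(w - w_true))
```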

