Error Analysis of Optimization Algorithms in Ultrasonic Parameter Estimation

Author(s):  
N. Ram Aditya ◽  
K. Sri Abhijeeth ◽  
Anuraj K. ◽  
Poorna S.S.
Author(s):  
Abdelhady Ramadan ◽  
Salah Kamel ◽  
Nabil Neggaz ◽  
Ali S. Alghamdi ◽  
...  

Nowadays, the world is working to develop power generation systems that rely on natural resources in order to reduce the dependence on fuel. Photovoltaic (PV) systems are considered one of the most important renewable energy resources, and scientific research has shown high interest in PV cell modeling and parameter estimation. The estimation of the optimum parameters of the PV model is the main target of the optimization problem considered in this paper. The equilibrium optimizer (EO) is an optimization algorithm inspired by a natural physical phenomenon: the process of controlling mass balance in a control volume until an equilibrium state is reached. In this paper, the EO algorithm is applied to build a mathematical model of the photovoltaic solar cell. The challenge in this optimization problem is the non-linearity of the PV solar cell characteristic. The EO algorithm is evaluated as follows: it is applied to estimate the parameters of PV models of different complexity, namely the single-, double-, and triple-diode models, and the same procedure is then applied to a real PV application. The obtained results are compared using different metrics such as the root mean square error and the mean absolute error. In all cases, the EO results are compared with those of recent optimization algorithms such as particle swarm optimization (PSO), teaching-learning-based optimization (TLBO), and Harris hawks optimization (HHO). From all the obtained results, the EO algorithm gives more accurate PV models than the other optimization algorithms.
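
As a point of reference for how such a fitness function is typically set up, the sketch below shows a root-mean-square-error objective for a single-diode PV model evaluated at measured current-voltage points. The parameter names, the thermal-voltage constant, and the implicit-residual formulation are illustrative assumptions, not details taken from the paper; the EO search itself is not reproduced here.

```python
import numpy as np

# Hypothetical single-diode PV model residual and RMSE objective.
# Parameter vector theta = (I_ph, I_sd, R_s, R_sh, n); V_t is the thermal voltage.
# All constants and any measured (V, I) data are placeholders.

def single_diode_residual(theta, V, I, V_t=0.0258):
    I_ph, I_sd, R_s, R_sh, n = theta
    # Implicit single-diode equation evaluated at the measured operating points:
    # f = I_ph - I_sd*(exp((V + I*R_s)/(n*V_t)) - 1) - (V + I*R_s)/R_sh - I
    return (I_ph
            - I_sd * (np.exp((V + I * R_s) / (n * V_t)) - 1.0)
            - (V + I * R_s) / R_sh
            - I)

def rmse_objective(theta, V_meas, I_meas):
    # Root-mean-square error used as the fitness value handed to the optimizer.
    r = single_diode_residual(theta, np.asarray(V_meas), np.asarray(I_meas))
    return np.sqrt(np.mean(r ** 2))
```

An optimizer such as EO would minimize `rmse_objective` over bounded ranges of the five parameters; the double- and triple-diode models extend the residual with additional diode terms in the same way.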


Energies ◽  
2021 ◽  
Vol 14 (21) ◽  
pp. 7115
Author(s):  
Mohamed Abdel-Basset ◽  
Reda Mohamed ◽  
Victor Chang

The proton exchange membrane fuel cell (PEMFC) is a favorable renewable energy source for reducing environmental pollution and saving electricity. However, the mathematical model of the PEMFC contains some unknown parameters that have to be accurately estimated in order to build an accurate PEMFC model; this task is known as the parameter estimation of the PEMFC and is an optimization problem. Because the problem is nonlinear and complex, not all optimization algorithms are suitable for solving it. Therefore, in this paper, a recent optimization algorithm known as the artificial gorilla troops optimizer (GTO), which simulates the collective intelligence of gorilla troops in nature, is adapted to solve this problem. However, the GTO suffers from local optima and low convergence speed, so a modification has been performed that replaces its exploitation operator with a new one that balances exploration and exploitation according to the population diversity in the current iteration, improving the exploitation operator in addition to the exploration one. This modified variant, named the modified GTO (MGTO), has been applied to estimate the unknown parameters of three PEMFC stacks widely used in the literature, the 250 W stack, the BCS-500W stack, and the SR-12 stack, by minimizing the error between the measured and estimated data points as the objective function. The outcomes obtained by applying the GTO and MGTO to those PEMFC stacks have been extensively compared with those of eight well-known optimization algorithms using various performance analyses, best, average, worst, standard deviation (SD), CPU time, mean absolute percentage error (MAPE), and mean absolute error (MAE), in addition to the Wilcoxon rank-sum test, to show which one is the best for solving this problem. The experimental findings show that MGTO is the best on all performance metrics, while CPU time is competitive among all algorithms.
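
To make the setup concrete, the minimal sketch below pairs a sum-of-squared-error objective between measured and model-predicted stack voltages with a generic bound-constrained population search. The PEMFC voltage model, parameter bounds, and search operators are stand-in assumptions; in particular, the loop is a crude placeholder for the GTO/MGTO exploration and exploitation operators, not a reproduction of them.

```python
import numpy as np

def sse_objective(theta, I_meas, V_meas, model):
    # model(theta, I) returns the estimated stack voltage at load current I.
    # The objective is the sum of squared errors against the measured curve.
    V_est = model(theta, np.asarray(I_meas))
    return np.sum((np.asarray(V_meas) - V_est) ** 2)

def population_search(objective, lower, upper, pop_size=30, iters=500, seed=0):
    # objective: callable mapping a parameter vector to a scalar error.
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    dim = lower.size
    pop = rng.uniform(lower, upper, size=(pop_size, dim))
    fit = np.array([objective(x) for x in pop])
    best, best_fit = pop[fit.argmin()].copy(), fit.min()
    for _ in range(iters):
        # Perturb candidates around the best-so-far and clip to the bounds
        # (a stand-in for a metaheuristic's exploration/exploitation moves).
        step = rng.normal(scale=0.1, size=(pop_size, dim)) * (upper - lower)
        pop = np.clip(best + step, lower, upper)
        fit = np.array([objective(x) for x in pop])
        if fit.min() < best_fit:
            best, best_fit = pop[fit.argmin()].copy(), fit.min()
    return best, best_fit
```

In practice the objective would be wrapped, e.g. `lambda theta: sse_objective(theta, I_meas, V_meas, pemfc_model)`, with `pemfc_model` encoding the stack's electrochemical voltage equations and `theta` holding the unknown membrane and activation-loss parameters.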


Author(s):  
Arnulf Jentzen ◽  
Benno Kuckuck ◽  
Ariel Neufeld ◽  
Philippe von Wurstemberger

Abstract: Stochastic gradient descent (SGD) optimization algorithms are key ingredients in a series of machine learning applications. In this article we perform a rigorous strong error analysis for SGD optimization algorithms. In particular, we prove for every arbitrarily small $\varepsilon \in (0,\infty)$ and every arbitrarily large $p \in (0,\infty)$ that the considered SGD optimization algorithm converges in the strong $L^p$-sense with order $1/2-\varepsilon$ to the global minimum of the objective function of the considered stochastic optimization problem under standard convexity-type assumptions on the objective function and relaxed assumptions on the moments of the stochastic errors appearing in the employed SGD optimization algorithm. The key ideas in our convergence proof are, first, to employ techniques from the theory of Lyapunov-type functions for dynamical systems to develop a general convergence machinery for SGD optimization algorithms based on such functions, then, to apply this general machinery to concrete Lyapunov-type functions with polynomial structures and, thereafter, to perform an induction argument along the powers appearing in the Lyapunov-type functions in order to achieve for every arbitrarily large $p \in (0,\infty)$ strong $L^p$-convergence rates.
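
A minimal numerical illustration of the setting, not of the proof machinery, is sketched below: plain SGD with decaying step sizes $\gamma_n = c/n$ applied to the strongly convex stochastic objective $f(\theta) = \tfrac{1}{2}\,\mathbb{E}[(\theta - X)^2]$ with $X \sim \mathcal{N}(\mu, 1)$, whose global minimum is $\mu$. The step-size rule, constants, and target distribution are illustrative assumptions.

```python
import numpy as np

def sgd(mu=3.0, theta0=0.0, n_steps=10_000, c=1.0, seed=0):
    # Plain SGD on f(theta) = E[(theta - X)^2] / 2 with X ~ N(mu, 1).
    rng = np.random.default_rng(seed)
    theta = theta0
    for n in range(1, n_steps + 1):
        x = rng.normal(mu, 1.0)      # one stochastic sample per step
        grad = theta - x             # unbiased estimate of f'(theta) = theta - mu
        theta -= (c / n) * grad      # decaying step size gamma_n = c / n
    return theta

# The error |theta_N - mu| decays roughly like N^(-1/2) in the L^2 sense,
# consistent with the strong convergence order 1/2 - epsilon discussed above.
print(abs(sgd() - 3.0))
```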


2013 ◽  
Vol 13 (5) ◽  
pp. 2205-2214 ◽  
Author(s):  
Simoní Da Ros ◽  
Gabriel Colusso ◽  
Thiago A. Weschenfelder ◽  
Lisiane de Marsillac Terra ◽  
Fernanda de Castilhos ◽  
...  
