test functions
Recently Published Documents

TOTAL DOCUMENTS: 576 (last five years: 161)
H-INDEX: 26 (last five years: 4)

Symmetry, 2022, Vol 14 (1), pp. 131
Author(s): Fei Li, Wentai Guo, Xiaotong Deng, Jiamei Wang, Liangquan Ge, et al.

Ensemble learning of swarm intelligence evolutionary algorithms with artificial neural networks (ANNs) is one of the core research directions in the field of artificial intelligence (AI). As a representative member of the swarm intelligence evolutionary algorithms, the shuffled frog leaping algorithm (SFLA) has the advantages of a simple structure, easy implementation, short running time, and strong global optimization ability. However, SFLA is prone to falling into local optima on complex, multi-dimensional symmetric function optimization problems, which degrades its convergence accuracy. This paper proposes an improved shuffled frog leaping algorithm with threshold oscillation based on simulated annealing (SA-TO-SFLA). In this algorithm, a threshold oscillation strategy and a simulated annealing strategy are introduced into SFLA, which diversifies the local search behavior and strengthens the ability to escape from local optima. Using multi-dimensional symmetric functions such as the drop-wave function, Schaffer function N.2, Rastrigin function, and Griewank function, two groups of comparative experiments (i: SFLA, SA-SFLA, TO-SFLA, and SA-TO-SFLA; ii: SFLA, ISFLA, MSFLA, DSFLA, and SA-TO-SFLA) are designed to analyze convergence accuracy and convergence time. The results show that the threshold oscillation strategy is highly robust. Moreover, compared with SFLA, the convergence accuracy of the SA-TO-SFLA algorithm is significantly improved, and the median convergence time is greatly reduced overall. The convergence accuracy of SFLA on these four test functions is 90%, 100%, 78%, and 92.5%, respectively, with median convergence times of 63.67 s, 59.71 s, 12.93 s, and 8.74 s; the convergence accuracy of SA-TO-SFLA on these four test functions is 99%, 100%, 100%, and 97.5%, respectively, with median convergence times of 48.64 s, 32.07 s, 24.06 s, and 3.04 s.
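For reference, the four benchmark functions named above have standard closed forms; a minimal sketch in Python (textbook definitions, not the paper's specific parameter settings or search domains):

```python
import numpy as np

def drop_wave(x, y):
    # Drop-wave function: global minimum -1 at (0, 0).
    r2 = x**2 + y**2
    return -(1 + np.cos(12 * np.sqrt(r2))) / (0.5 * r2 + 2)

def schaffer_n2(x, y):
    # Schaffer function N.2: global minimum 0 at (0, 0).
    return 0.5 + (np.sin(x**2 - y**2)**2 - 0.5) / (1 + 0.001 * (x**2 + y**2))**2

def rastrigin(x):
    # Rastrigin function (n-dimensional): global minimum 0 at the origin.
    x = np.asarray(x)
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def griewank(x):
    # Griewank function (n-dimensional): global minimum 0 at the origin.
    x = np.asarray(x)
    i = np.arange(1, x.size + 1)
    return 1 + np.sum(x**2) / 4000 - np.prod(np.cos(x / np.sqrt(i)))
```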


2022, Vol 19 (1), pp. 473-512
Author(s): Rong Zheng, Heming Jia, Laith Abualigah, Qingxin Liu, et al.

Arithmetic optimization algorithm (AOA) is a newly proposed meta-heuristic method inspired by the arithmetic operators in mathematics. However, AOA suffers from insufficient exploration capability and is likely to fall into local optima. To improve the search quality of the original AOA, this paper presents an improved AOA (IAOA) integrated with a proposed forced switching mechanism (FSM). The enhanced algorithm uses the random math optimizer probability (RMOP) to increase the population diversity for better global search. The forced switching mechanism is then introduced into the AOA to help the search agents jump out of local optima: when the search agents cannot find better positions within a certain number of iterations, the proposed FSM makes them conduct exploratory behavior, so that entrapment in local optima is effectively avoided. The proposed IAOA is extensively tested on twenty-three classical benchmark functions and ten CEC2020 test functions and compared with the AOA and other well-known optimization algorithms. The experimental results show that the proposed algorithm is superior to the comparative algorithms on most of the test functions. Furthermore, the results on two multi-layer perceptron (MLP) training problems and three classical engineering design problems also indicate that the proposed IAOA is highly effective when dealing with real-world problems.
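To illustrate the general idea of such a mechanism (a hedged sketch, not the authors' exact FSM), each search agent can carry a stagnation counter and be re-randomized within the bounds once it fails to improve for a fixed number of iterations; the threshold `stall_limit` and the uniform re-initialization below are assumptions made only for this sketch.

```python
import numpy as np

def forced_switching(positions, fitness, best_fitness, stall_counts,
                     lower, upper, stall_limit=20, rng=None):
    """Generic FSM-style step: re-randomize agents that have stagnated too long."""
    rng = np.random.default_rng() if rng is None else rng
    new_positions = positions.copy()
    for i in range(positions.shape[0]):
        if fitness[i] < best_fitness[i]:      # improvement found (minimization)
            best_fitness[i] = fitness[i]
            stall_counts[i] = 0
        else:
            stall_counts[i] += 1
        if stall_counts[i] >= stall_limit:    # stagnation: force exploratory behavior
            new_positions[i] = rng.uniform(lower, upper, size=positions.shape[1])
            stall_counts[i] = 0
    return new_positions
```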


2022
Author(s): Shogo Hayashi, Junya Honda, Hisashi Kashima

Abstract: Bayesian optimization (BO) is an approach to optimizing an expensive-to-evaluate black-box function that sequentially determines the values of the input variables at which to evaluate the function. However, it is expensive, and in some cases difficult, to specify values for all input variables, for example, in outsourcing scenarios where producing input queries with many input variables involves significant cost. In this paper, we propose a novel Gaussian process bandit problem, BO with partially specified queries (BOPSQ). In BOPSQ, unlike the standard BO setting, a learner specifies only the values of some input variables, and the values of the unspecified input variables are randomly determined according to a known or unknown distribution. We propose two algorithms based on posterior sampling for the cases of known and unknown input distributions. We further derive their regret bounds, which are sublinear for popular kernels. We demonstrate the effectiveness of the proposed algorithms using test functions and real-world datasets.
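A minimal sketch of the known-distribution case under several simplifying assumptions (a scikit-learn GP surrogate, one specified variable x1 on a grid, one unspecified variable x2 ~ Uniform(0, 1), Monte Carlo averaging of a single joint posterior draw); this illustrates posterior sampling with partially specified queries, not the paper's exact algorithm:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
f = lambda x1, x2: -(x1 - 0.3) ** 2 - 0.5 * (x2 - 0.7) ** 2   # toy black-box objective

# Observed data: the learner chose x1, while x2 was drawn by the environment.
X_obs = rng.uniform(0, 1, size=(5, 2))
y_obs = f(X_obs[:, 0], X_obs[:, 1])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-6).fit(X_obs, y_obs)

# Posterior-sampling step: candidate values for the specified variable x1,
# Monte Carlo draws for the unspecified variable x2 ~ Uniform(0, 1).
x1_cand = np.linspace(0, 1, 50)
x2_mc = rng.uniform(0, 1, size=20)
grid = np.array([(a, b) for a in x1_cand for b in x2_mc])        # joint grid, shape (50*20, 2)

sample = gp.sample_y(grid, n_samples=1, random_state=1).ravel()  # one joint posterior draw
avg_over_x2 = sample.reshape(len(x1_cand), len(x2_mc)).mean(axis=1)
x1_next = x1_cand[np.argmax(avg_over_x2)]                        # best expected sampled value
print("next specified query x1 =", x1_next)
```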


2022
Author(s): Purshottam Narain Agrawal, Jitendra Kumar Singh

The aim of this paper is to study some approximation properties of the Durrmeyer variant of $\alpha$-Baskakov operators $M_{n,\alpha}$ proposed by Aral and Erbay [3]. We study the error in the approximation by these operators in terms of the Lipschitz-type maximal function, and the order of approximation for these operators by means of the Ditzian-Totik modulus of smoothness. The quantitative Voronovskaja and Grüss-Voronovskaja type theorems are also established. Next, we modify these operators in order to preserve the test functions $e_0$ and $e_2$ and show that the modified operators give a better rate of convergence. Finally, we present some graphs to illustrate the convergence behaviour of the operators $M_{n,\alpha}$ and to compare their rate of approximation with that of the modified operators.
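For context, the test functions mentioned here are the Korovkin monomials $e_i(t) = t^i$, $i = 0, 1, 2$. Preserving $e_0$ and $e_2$ means that the modified operators, written here as $\widetilde{M}_{n,\alpha}$ (a placeholder symbol, not necessarily the paper's notation), satisfy
$$\widetilde{M}_{n,\alpha}(e_0; x) = 1 \quad \text{and} \quad \widetilde{M}_{n,\alpha}(e_2; x) = x^2$$
for all admissible $x$, so that constants and the square function are reproduced exactly.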


Author(s): Владислав Иванович Заботин, Павел Андреевич Чернышевский

R.J. Vanderbei proved that any function continuous on a convex compact set has the $\varepsilon $-Lipschitz property, which generalizes the classical notion of Lipschitz continuity. Based on this property, Vanderbei proposed an extension of Piyavskii's algorithm for finding the global minimum of a function continuous on an interval. In this paper we propose one modification of Vanderbei's algorithm for a positive $\varepsilon $-constant and another modification for a positive $\varepsilon $-constant with a termination condition that does not depend on the choice of $\varepsilon $. We prove the convergence of the proposed methods and report the results of numerical experiments on known test functions carried out with the software we developed. These methods can be applied to the optimization of any function continuous on an interval, for example, in some inverse problems of ballistics and, in economics, in direct Marshallian-type consumer choice problems with variable prices of goods and a continuous utility function.
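For orientation, a minimal sketch of the classical Piyavskii step that these modifications generalize, assuming an ordinary Lipschitz constant L on [a, b] (the $\varepsilon $-Lipschitz extension and the proposed termination conditions are not reproduced here):

```python
import numpy as np

def piyavskii(f, a, b, L, n_iter=50):
    """Classical Piyavskii global minimization of an L-Lipschitz f on [a, b]."""
    xs = [a, b]
    ys = [f(a), f(b)]
    for _ in range(n_iter):
        # Lower envelope l(x) = max_i ( y_i - L * |x - x_i| ).  Over each sorted
        # interval [x_i, x_{i+1}] its minimum is attained at
        #   x* = (x_i + x_{i+1})/2 + (y_i - y_{i+1}) / (2 L)
        # with value (y_i + y_{i+1})/2 - L * (x_{i+1} - x_i)/2.
        order = np.argsort(xs)
        x_s = np.array(xs)[order]
        y_s = np.array(ys)[order]
        bounds = (y_s[:-1] + y_s[1:]) / 2 - L * (x_s[1:] - x_s[:-1]) / 2
        k = int(np.argmin(bounds))                  # interval with the lowest bound
        x_new = (x_s[k] + x_s[k + 1]) / 2 + (y_s[k] - y_s[k + 1]) / (2 * L)
        xs.append(x_new)
        ys.append(f(x_new))
    i = int(np.argmin(ys))
    return xs[i], ys[i]

# Example: minimize a continuous test function on [0, 10] with a valid Lipschitz bound.
x_best, f_best = piyavskii(lambda x: np.sin(x) + np.sin(10 * x / 3), 0, 10, L=5)
```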


2021, pp. 1-14
Author(s): Feng Xue, Yongbo Liu, Xiaochen Ma, Bharat Pathak, Peng Liang

To solve the problem that the K-means algorithm is sensitive to the initial clustering centers and easily falls into local optima, we propose a new hybrid clustering algorithm, IGWOKHM. In this paper, we first propose an improved strategy based on a nonlinear convergence factor, an inertial step size, and a dynamic weight to improve the search ability of the traditional grey wolf optimization (GWO) algorithm. Then, the improved GWO (IGWO) algorithm and the K-harmonic means (KHM) algorithm are fused to solve the clustering problem. This fusion clustering algorithm, IGWOKHM, combines the global search ability of IGWO with the fast local optimization ability of KHM, both solving the K-means algorithm's sensitivity to the initial clustering centers and addressing the shortcomings of KHM. The experimental results on 8 test functions and 4 University of California Irvine (UCI) datasets show that the IGWO algorithm greatly improves the efficiency of the model while ensuring the stability of the algorithm. The fusion clustering algorithm can effectively overcome the inadequacies of the K-means algorithm and has a good global optimization ability.
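For reference, a minimal sketch of the K-harmonic means performance function that such a hybrid minimizes (the standard KHM objective with exponent p; the exact variant and value of p used in the paper are assumptions here):

```python
import numpy as np

def khm_objective(X, centers, p=3.5, eps=1e-12):
    """K-harmonic means performance function:
    sum over points of k / sum_j 1 / ||x_i - c_j||^p  (lower is better)."""
    k = centers.shape[0]
    # Pairwise distances between the n points and the k centers, shape (n, k).
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    d = np.maximum(d, eps)                 # avoid division by zero
    return np.sum(k / np.sum(1.0 / d**p, axis=1))
```

In a hybrid of this kind, each IGWO agent would encode a candidate set of cluster centers, and a value like this would serve as (part of) its fitness.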


Mathematics, 2021, Vol 9 (24), pp. 3275
Author(s): Qing-Bo Cai, Khursheed J. Ansari, Fuat Usta

Approximation by positive linear operators is a topic in contemporary functional analysis and theory of functions that emerged in the last century. One class of such operators is the Meyer–König and Zeller operators, and in this study a generalization of Meyer–König and Zeller type operators based on a function τ, constructed by using two sequences of functions, is presented. The most significant point is that the newly introduced operators preserve {1, τ, τ²} instead of the classical Korovkin test functions. Then an asymptotic-type formula, quantitative results, and local approximation properties of the introduced operators are given. Finally, a numerical example computed in MATLAB is given to visualize the theoretical results.
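As background, one commonly studied form of the classical Meyer–König and Zeller operators (recalled only for orientation; the paper's τ-based generalization modifies this construction) is
$$M_n(f; x) = (1 - x)^{n+1} \sum_{k=0}^{\infty} \binom{n+k}{k}\, f\!\left(\frac{k}{n+k}\right) x^{k}, \qquad x \in [0, 1), \quad M_n(f; 1) = f(1),$$
which preserves the Korovkin test functions $e_0(x) = 1$ and $e_1(x) = x$.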


2021
Author(s): Liyancang Li, Wuyue Yue Wu

Abstract: The antlion optimization algorithm has good exploration and exploitation capabilities, but the influence weight of the elite antlion decreases in the later stages of optimization, which slows convergence and makes the algorithm prone to falling into local optima. To address this, an antlion optimization algorithm based on immune cloning (ICALO) is proposed. In the early stage, a reverse (opposition-based) learning strategy is used to initialize the ant population. A Cauchy mutation operator is added to the elite antlion update to improve the algorithm's later exploitation ability. Finally, the antlions are cloned and mutated with an immune clonal selection algorithm to change their positions and fitness values, further improving the algorithm's global optimization ability and convergence accuracy. Ten test functions and a 0-1 knapsack problem are used to evaluate the optimization ability of the algorithm, which is then applied to size and layout optimization problems for truss structures. The force diagrams indicate that the optimization effect is good. These results verify that ICALO can be applied to combinatorial optimization problems with faster convergence and higher accuracy, providing a new method for structural optimization.
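A minimal sketch of a Cauchy mutation step of the kind described, in generic form (the scaling by the search range and the greedy acceptance are assumptions made for illustration):

```python
import numpy as np

def cauchy_mutate(elite, lower, upper, scale=0.1, rng=None):
    """Perturb the elite antlion with heavy-tailed Cauchy noise to help escape local optima."""
    rng = np.random.default_rng() if rng is None else rng
    step = rng.standard_cauchy(size=elite.shape)      # heavy tails allow occasional long jumps
    candidate = elite + scale * (upper - lower) * step
    return np.clip(candidate, lower, upper)

# Greedy acceptance (assumed here): keep the mutated elite only if it improves the fitness.
# trial = cauchy_mutate(elite, lb, ub)
# if objective(trial) < objective(elite):
#     elite = trial
```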


2021, Vol 5 (4), pp. 258
Author(s): Areej Bin Sultan, Mohamed Jleli, Bessem Samet

We first consider the damped wave inequality
$$\frac{\partial^2 u}{\partial t^2} - \frac{\partial^2 u}{\partial x^2} + \frac{\partial u}{\partial t} \geq x^{\sigma} |u|^p, \qquad t > 0,\; x \in (0, L),$$
where $L > 0$, $\sigma \in \mathbb{R}$, and $p > 1$, under the Dirichlet boundary conditions
$$(u(t,0), u(t,L)) = (f(t), g(t)), \qquad t > 0.$$
We establish sufficient conditions depending on $\sigma$, $p$, the initial conditions, and the boundary conditions under which the considered problem admits no global solution. Two cases of boundary conditions are investigated: $g \equiv 0$ and $g(t) = t^{\gamma}$, $\gamma > -1$. Next, we extend our study to the time-fractional analogue of the above problem, namely, the time-fractional damped wave inequality
$$\frac{\partial^{\alpha} u}{\partial t^{\alpha}} - \frac{\partial^2 u}{\partial x^2} + \frac{\partial^{\beta} u}{\partial t^{\beta}} \geq x^{\sigma} |u|^p, \qquad t > 0,\; x \in (0, L),$$
where $\alpha \in (1,2)$, $\beta \in (0,1)$, and $\frac{\partial^{\tau}}{\partial t^{\tau}}$ is the time-Caputo fractional derivative of order $\tau$, $\tau \in \{\alpha, \beta\}$. Our approach is based on the test function method: a judicious choice of test functions is made, taking into consideration the boundedness of the domain and the boundary conditions. Compared with previous results in the literature, our results hold without assuming that the initial values are large with respect to a certain norm.
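As general background on the test function method (a generic illustration in the Mitidieri–Pohozaev spirit, not the specific test functions constructed in the paper): one multiplies the inequality by a smooth nonnegative cut-off $\varphi$, integrates over the space-time domain, and shifts the derivatives onto $\varphi$ by integration by parts. A typical separated choice is
$$\varphi(t, x) = \psi(t)\, \Phi(x), \qquad \psi(t) = \Big(1 - \frac{t}{T}\Big)_{+}^{\kappa}, \quad \kappa \gg 1,$$
with $\Phi \geq 0$ adapted to the bounded interval $(0, L)$ and to the boundary data; the nonexistence conclusion then follows by estimating the resulting integrals and letting $T \to \infty$.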

