Convergence Factor
Recently Published Documents


TOTAL DOCUMENTS: 73 (five years: 16)

H-INDEX: 9 (five years: 1)

2021, pp. 1-14
Author(s): Feng Xue, Yongbo Liu, Xiaochen Ma, Bharat Pathak, Peng Liang

To solve the problem that the K-means algorithm is sensitive to the initial clustering centers and easily falls into local optima, we propose a new hybrid clustering algorithm called IGWOKHM. In this paper, we first propose an improved strategy based on a nonlinear convergence factor, an inertial step size, and a dynamic weight to improve the search ability of the traditional grey wolf optimization (GWO) algorithm. Then, the improved GWO (IGWO) algorithm and the K-harmonic means (KHM) algorithm are fused to solve the clustering problem. The fusion clustering algorithm, IGWOKHM, combines the global search ability of IGWO with the fast local optimization ability of KHM, both resolving the K-means algorithm's sensitivity to the initial clustering centers and addressing the shortcomings of KHM. The experimental results on 8 test functions and 4 University of California Irvine (UCI) datasets show that the IGWO algorithm greatly improves the efficiency of the model while ensuring the stability of the algorithm. The fusion clustering algorithm effectively overcomes the inadequacies of the K-means algorithm and has good global optimization ability.
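The abstract does not give the exact form of the improved strategy, but the role of the convergence factor in GWO can be sketched. In standard GWO the factor decays linearly from 2 to 0 and scales the coefficient vector A that switches wolves between exploration and exploitation. A minimal Python sketch, assuming a simple power-law nonlinearity for illustration (the actual IGWO formula is not stated in the abstract):

```python
import numpy as np

def linear_factor(t, T):
    # Standard GWO: convergence factor decays linearly from 2 to 0.
    return 2.0 * (1.0 - t / T)

def nonlinear_factor(t, T, k=2.0):
    # Hypothetical nonlinear variant: decays slowly early on (favoring
    # exploration) and faster later (favoring exploitation). The exact
    # nonlinearity used by IGWO may differ.
    return 2.0 * (1.0 - (t / T) ** k)

def gwo_coefficients(a, dim, rng):
    # GWO coefficient vectors: A drives exploration when |A| > 1 and
    # exploitation when |A| < 1; C randomly perturbs the prey position.
    A = 2.0 * a * rng.random(dim) - a
    C = 2.0 * rng.random(dim)
    return A, C
```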


2021, Vol 2021, pp. 1-10
Author(s): Jutao Zhao, Pengfei Guo

The Jacobi–Davidson iteration method is very efficient for solving Hermitian eigenvalue problems. If the correction equation involved in the Jacobi–Davidson iteration is solved accurately, the simplified Jacobi–Davidson iteration is equivalent to the Rayleigh quotient iteration, which achieves a locally cubic convergence rate. When the linear system involved is solved by an iterative method, the two methods remain equivalent. In this paper, we present a convergence analysis of the simplified Jacobi–Davidson method together with an estimate of the number of iterations required for the inner correction equation. Furthermore, the convergence factor shows how the accuracy of the inner iteration controls the outer iteration.
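Because the simplified Jacobi–Davidson iteration with an exactly solved correction equation is equivalent to Rayleigh quotient iteration, the latter gives a concrete picture of the locally cubic convergence. A minimal Python sketch of textbook Rayleigh quotient iteration (not the paper's implementation):

```python
import numpy as np

def rayleigh_quotient_iteration(A, x0, tol=1e-12, max_iter=50):
    # A is Hermitian; x0 is an initial guess for an eigenvector.
    x = x0 / np.linalg.norm(x0)
    rho = (x.conj() @ A @ x).real        # Rayleigh quotient (real for Hermitian A)
    for _ in range(max_iter):
        try:
            # Shifted solve; this plays the role of the exactly
            # solved Jacobi-Davidson correction equation.
            y = np.linalg.solve(A - rho * np.eye(A.shape[0]), x)
        except np.linalg.LinAlgError:
            break                        # shift hit an eigenvalue exactly
        x = y / np.linalg.norm(y)
        rho = (x.conj() @ A @ x).real
        if np.linalg.norm(A @ x - rho * x) < tol:
            break                        # converged eigenpair (rho, x)
    return rho, x
```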


Journal of Computers (電腦學刊), 2021, Vol 32 (5), pp. 148-160
Author(s): Cheng Zhu, Xu-Hua Pan, Qi Chen, Yong Zhang, Xin-Yi Gao


2021, Vol 2021, pp. 1-18
Author(s): Rao M. C. Karthik, Rashmi L. Malghan, Fuat Kara, Arunkumar Shettigar, Shrikantha S. Rao, et al.

The paper aims to investigate the machining performance of SS316 under sustainable cooling strategies, namely dry, wet, and cryogenic (LN2, liquid nitrogen) machining. Furthermore, a "one parametric approach" was utilized to study the influence of, and carry out a comparative analysis between, LN2 over dry and LN2 over wet machining conditions. Response surface methodology (RSM) is incorporated to build a relationship model among the considered independent variables (spindle speed (S, rpm), feed rate (F, mm/min), and depth of cut (D, mm)) and the dependent variable, surface roughness (Ra). Since more than one independent variable is involved, the resulting regression equation is a multiple linear regression, and the impact of each independent variable on surface roughness is identified from its fitted coefficient. The comparative analysis revealed that LN2 machining yielded a surface finish up to 64.9% and 54.9% better than dry and wet machining, respectively, indicating the benefits of LN2 for achieving a better Ra. The benchmark evaluation of the proposed hybrid-bias (BNN-SVR) algorithm showcases its propensity to escape local minima and converge to the optimal target value. The key new ability of the BNN-SVR is to transfer the partially trained weights of the BNN model into the SVR model, thus converting a static learning capability into a dynamic one. The performances of the adopted prediction approaches are compared through their attained error ranges, i.e., RSM: 3.95%-8.43%, BNN: 2.36%-5.88%, and SVR: 1.04%-3.61%. The hybrid-bias (BNN-SVR) is the most suitable prediction model, as it attains the smallest error in predicting Ra; among the individual models, SVR surpasses the BNN and RSM approaches because of its convergence factor and narrower margin of error.
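The multiple linear regression step can be illustrated directly: fit Ra = b0 + b1*S + b2*F + b3*D by ordinary least squares and read off each coefficient's sign and magnitude. The Python sketch below uses placeholder data, not the paper's measurements:

```python
import numpy as np

# Placeholder machining data (illustrative only, not from the paper).
S = np.array([1000, 1500, 2000, 2500, 1000, 2000], dtype=float)  # spindle speed (rpm)
F = np.array([100, 200, 150, 250, 250, 100], dtype=float)        # feed rate (mm/min)
D = np.array([0.2, 0.6, 0.4, 0.8, 0.4, 0.8])                     # depth of cut (mm)
Ra = np.array([0.9, 1.4, 1.1, 1.9, 1.6, 1.2])                    # surface roughness (um)

# Design matrix with an intercept column; solve the least-squares problem.
X = np.column_stack([np.ones_like(S), S, F, D])
coef, *_ = np.linalg.lstsq(X, Ra, rcond=None)

# Each coefficient's sign and magnitude indicate how strongly the
# corresponding machining parameter drives surface roughness.
print(dict(zip(["b0", "b1_S", "b2_F", "b3_D"], coef)))
```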


2021
Author(s): Ji Zhang, Kai Yang, Jiesheng Wang

The Whale Optimization Algorithm (WOA) is a swarm intelligence algorithm inspired by whale hunting behavior. To address the defect that the spiral update mechanism in WOA may exceed the search range, three different spiral search strategies are first proposed; the agents then search along a more reasonable and broader distribution of routes, improving population diversity and traversal of the search space. Secondly, an improved sine cosine operator based on the convergence factor is proposed to improve the search efficiency of WOA, where the sine search is used for global exploration and the cosine search for local exploitation. The proposed convergence factor enables search agents to adaptively balance the exploration and exploitation phases over the iterations. In the simulation experiments, the effectiveness of the three spiral search strategies and the sine cosine operator is verified. Then, the whale optimization algorithm (WOA), salp swarm algorithm (SSA), firefly algorithm (FA), moth-flame optimization (MFO) algorithm, fireworks algorithm (FWA), sine cosine algorithm (SCA), and the improved WOA are compared experimentally. Finally, the improved WOA is applied to two engineering problems (the three-bar truss design problem and the welded beam optimization problem). The experimental results show that, compared with the other optimization algorithms, the improved WOA offers higher search accuracy, faster convergence, and better avoidance of local optima.
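The abstract does not give the operator's exact formulas. The sketch below assumes the standard sine cosine update with a linearly decaying convergence factor r1 and a random choice between the sine (exploration) and cosine (exploitation) branches, as in the plain SCA; the paper's improved operator may schedule these differently:

```python
import numpy as np

def sine_cosine_step(x, best, t, T, a=2.0, rng=None):
    # One candidate update toward the best solution found so far.
    rng = np.random.default_rng() if rng is None else rng
    r1 = a * (1.0 - t / T)   # convergence factor: decays over iterations,
                             # shifting from exploration to exploitation
    r2 = rng.uniform(0.0, 2.0 * np.pi, x.shape)
    r3 = rng.uniform(0.0, 2.0, x.shape)
    if rng.random() < 0.5:
        return x + r1 * np.sin(r2) * np.abs(r3 * best - x)  # sine: global exploration
    return x + r1 * np.cos(r2) * np.abs(r3 * best - x)      # cosine: local exploitation
```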


2021, Vol 2021 (1)
Author(s): Adisorn Kittisopaporn, Pattrawut Chansangiam, Wicharn Lewkeeratiyutkul

We derive an iterative procedure for solving a generalized Sylvester matrix equation $AXB + CXD = E$, where $A, B, C, D, E$ are conforming rectangular matrices. Our algorithm is based on gradients and the hierarchical identification principle. We convert the matrix iteration process to a first-order linear difference vector equation with a matrix coefficient. The Banach contraction principle reveals that the sequence of approximated solutions converges to the exact solution for any initial matrix if and only if the convergence factor belongs to an open interval. The contraction principle also gives the convergence rate and the error analysis, governed by the spectral radius of the associated iteration matrix. We obtain the fastest convergence factor, i.e., the one that minimizes the spectral radius of the iteration matrix. In particular, we obtain iterative algorithms for the matrix equation $AXB = C$, the Sylvester equation, and the Kalman–Yakubovich equation. Numerical experiments illustrate the applicability, effectiveness, and efficiency of the proposed algorithm.
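The described method matches the general shape of a gradient-based (hierarchical identification) update, in which the convergence factor scales a gradient step on the residual. A minimal Python sketch under that assumption (the paper's exact update and the admissible interval for mu may differ):

```python
import numpy as np

def gradient_iteration(A, B, C, D, E, mu, X0=None, tol=1e-10, max_iter=10000):
    # Iterate X <- X + mu * (A^T R B^T + C^T R D^T) with residual
    # R = E - A X B - C X D. Convergence for any X0 holds iff the
    # convergence factor mu lies in a suitable open interval (0, mu_max).
    X = np.zeros((A.shape[1], B.shape[0])) if X0 is None else X0.copy()
    for _ in range(max_iter):
        R = E - A @ X @ B - C @ X @ D
        if np.linalg.norm(R) < tol:
            break
        X = X + mu * (A.T @ R @ B.T + C.T @ R @ D.T)
    return X
```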

