Global convergence of model function based Bregman proximal minimization algorithms

Author(s):  
Mahesh Chandra Mukkamala ◽  
Jalal Fadili ◽  
Peter Ochs

Abstract
Lipschitz continuity of the gradient mapping of a continuously differentiable function plays a crucial role in designing various optimization algorithms. However, many functions arising in practical applications, such as low-rank matrix factorization or deep neural network problems, do not have a Lipschitz continuous gradient. This led to the development of a generalized notion known as the L-smad property, which is based on generalized proximity measures called Bregman distances. However, the L-smad property cannot handle nonsmooth functions; even simple nonsmooth functions like $$\vert x^4-1 \vert$$, and also many practical composite problems, are out of its scope. We fix this issue by proposing the MAP property, which generalizes the L-smad property and is also valid for a large class of structured nonconvex nonsmooth composite problems. Based on the proposed MAP property, we develop a globally convergent algorithm called Model BPG, which unifies several existing algorithms. The convergence analysis is based on a new Lyapunov function. We also numerically illustrate the superior performance of Model BPG on standard phase retrieval problems and Poisson linear inverse problems, compared to a state-of-the-art optimization method valid for generic nonconvex nonsmooth optimization problems.
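The Bregman distances underlying the L-smad and MAP properties can be sketched in a few lines. The kernel `h` below is just an illustrative choice; with the Euclidean kernel the Bregman distance reduces to the classical squared distance used when the gradient is Lipschitz:

```python
import numpy as np

def bregman_distance(h, grad_h, x, y):
    """D_h(x, y) = h(x) - h(y) - <grad h(y), x - y>."""
    return h(x) - h(y) - np.dot(grad_h(y), x - y)

# With the Euclidean kernel h(x) = 0.5 * ||x||^2, the Bregman distance
# reduces to 0.5 * ||x - y||^2, recovering the classical proximity
# measure; non-Euclidean kernels yield the generalized measures above.
h_euc = lambda x: 0.5 * np.dot(x, x)
grad_h_euc = lambda x: x

x = np.array([1.0, 2.0])
y = np.array([0.0, 0.0])
d = bregman_distance(h_euc, grad_h_euc, x, y)  # 0.5 * ||x - y||^2 = 2.5
```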

2018 ◽  
Vol 2018 ◽  
pp. 1-15 ◽  
Author(s):  
Octavio Camarena ◽  
Erik Cuevas ◽  
Marco Pérez-Cisneros ◽  
Fernando Fausto ◽  
Adrián González ◽  
...  

The Locust Search (LS) algorithm is a swarm-based optimization method inspired by the natural behavior of the desert locust. LS includes two distinctive nature-inspired search mechanisms, namely the solitary-phase and social-phase operators. These search schemes allow LS to overcome some of the difficulties that commonly affect other similar methods, such as premature convergence and a lack of diversity in solutions. Recently, computer-vision experiments in insect tracking have led to the development of more accurate locust motion models than those produced by simple behavioral observation. The most distinctive characteristic of these new models is the use of probabilities to emulate the locust decision process. In this paper, a modification of the original LS algorithm, referred to as LS-II, is proposed to better handle global optimization problems. In LS-II, the locust motion model of the original algorithm is modified by incorporating the main characteristics of the new biological formulations. As a result, LS-II improves on the original algorithm's capacities for exploration and exploitation of the search space. To test its performance, the proposed LS-II method is compared against several state-of-the-art evolutionary methods on a set of benchmark functions and engineering problems. Experimental results demonstrate the superior performance of the proposed approach in terms of solution quality and robustness.
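The probabilistic solitary/social decision described above might be sketched as follows; the operator details and parameter names (`p_social`, `step`) are illustrative assumptions, not the paper's exact LS-II motion model:

```python
import random

def ls2_step(position, best, p_social=0.6, step=0.1, rng=random.Random(0)):
    """One hypothetical LS-II-style move: with probability p_social the
    locust is attracted toward the best-known solution (social phase);
    otherwise it takes an independent random walk (solitary phase)."""
    if rng.random() < p_social:
        # social phase: biased move toward the swarm's best position
        return [xi + rng.uniform(0.0, 1.0) * (bi - xi)
                for xi, bi in zip(position, best)]
    # solitary phase: undirected local exploration
    return [xi + rng.uniform(-step, step) for xi in position]
```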


Author(s):  
J. Gu ◽  
G. Y. Li ◽  
Z. Dong

Metamodeling techniques are increasingly used to solve computation-intensive design optimization problems. In this work, the issue of automatically identifying appropriate metamodeling techniques in global optimization is addressed. A new, generic hybrid metamodel-based global optimization method, particularly suitable for design problems involving computation-intensive, black-box analyses and simulations, is introduced. The method employs three representative metamodels concurrently in the search process and selects sample data points adaptively, according to the values calculated by the three metamodels, to improve modeling accuracy. The global optimum is identified once the metamodels become reasonably accurate. The new method is tested on various benchmark global optimization problems and applied to a real industrial design optimization problem involving vehicle crash simulation, demonstrating the superior performance of the new algorithm over existing search methods. Present limitations of the proposed method are also discussed.
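As a rough sketch of the idea of querying several metamodels concurrently and sampling where they disagree, three polynomial surrogates stand in below for the paper's metamodels; the actual method and its adaptive sampling criterion are more elaborate:

```python
import numpy as np

def next_sample(xs, ys, candidates):
    """Pick the candidate point where three simple surrogates disagree
    most -- a stand-in for adaptive, multi-metamodel sampling: regions
    of high disagreement are where more data improves model accuracy."""
    fits = [np.polyfit(xs, ys, d) for d in (1, 2, 3)]  # three surrogates
    preds = np.array([np.polyval(f, candidates) for f in fits])
    disagreement = preds.max(axis=0) - preds.min(axis=0)
    return candidates[int(np.argmax(disagreement))]
```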


2020 ◽  
Vol 12 (16) ◽  
pp. 2535
Author(s):  
Xiaoxu Ren ◽  
Liangfu Lu ◽  
Jocelyn Chanussot

In recent years, fusing hyperspectral images (HSIs) and multispectral images (MSIs) to acquire super-resolution images (SRIs) has been in the spotlight and gained tremendous attention. However, some current methods, such as those based on low-rank matrix decomposition, face their own challenges. These algorithms matricize the original image tensor, which loses the structural information of the original image. In addition, owing to the non-uniqueness of matrix decomposition, there is no corresponding theory proving that such an algorithm can guarantee accurate restoration of the fused image. Moreover, the degradation operators are usually unknown or difficult to estimate in practical applications. In this paper, an image fusion method based on joint tensor decomposition (JTF) is proposed, which is more effective and more applicable when the degradation operators are unknown or hard to estimate. Specifically, in the proposed JTF method, we treat the SRI as a three-dimensional tensor and recast the fusion problem as a joint tensor decomposition problem. We then formulate the JTF algorithm, and the experimental results certify the superior performance of the proposed method in comparison to current popular schemes.
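The matricization step the authors criticize can be illustrated directly; a mode-n unfolding flattens a tensor's multi-way structure into a matrix, which is exactly where the spatial-spectral arrangement of an image cube gets discarded:

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n matricization: arrange the mode-`mode` fibers of a tensor
    as rows of a matrix -- the flattening step that discards the
    multi-way structure tensor-based methods aim to preserve."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

T = np.arange(24).reshape(2, 3, 4)  # a tiny 2x3x4 stand-in "image cube"
M0 = unfold(T, 0)                   # shape (2, 12)
M1 = unfold(T, 1)                   # shape (3, 8)
```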


2015 ◽  
Vol 2015 ◽  
pp. 1-17 ◽  
Author(s):  
Sen Zhang ◽  
Yongquan Zhou

The grey wolf optimizer (GWO) is a recently proposed heuristic evolutionary algorithm inspired by the leadership hierarchy and hunting mechanism of grey wolves in nature. This paper presents an extended GWO algorithm based on Powell's local optimization method, which we call PGWO. The PGWO algorithm significantly improves on the original GWO in solving complex optimization problems. Clustering is a popular data analysis and data mining technique, so PGWO can also be applied to clustering problems. In this study, the PGWO algorithm is first tested on seven benchmark functions and then used for data clustering on nine data sets. Compared to other state-of-the-art evolutionary algorithms, the benchmark and data clustering results demonstrate the superior performance of the PGWO algorithm.
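The standard GWO position update that PGWO builds on can be sketched as follows; PGWO would additionally refine promising wolves with Powell's derivative-free local search (e.g. SciPy's `method='Powell'`). Variable names here are illustrative:

```python
import numpy as np

def gwo_update(x, alpha, beta, delta, a, rng):
    """One grey-wolf position update: the wolf x moves toward a point
    averaged over the three leaders (alpha, beta, delta). The parameter
    `a` decays from 2 to 0 over iterations, shifting the swarm from
    exploration to exploitation."""
    def guided(leader):
        A = 2.0 * a * rng.random(x.shape) - a  # step-scaling coefficient
        C = 2.0 * rng.random(x.shape)          # leader-weighting coefficient
        return leader - A * np.abs(C * leader - x)
    return (guided(alpha) + guided(beta) + guided(delta)) / 3.0
```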


Author(s):  
Tai D. Nguyen ◽  
Ronald Gronsky ◽  
Jeffrey B. Kortright

Nanometer-period Ru/C multilayers are among the prime candidates for normal-incidence reflecting mirrors at wavelengths < 10 nm. Superior performance, which requires uniform layers and smooth interfaces, and high stability of the layered structure under thermal loading are among the demands of practical applications. Previous studies, however, show that the Ru layers in a 2 nm period Ru/C multilayer agglomerate upon moderate annealing, and the layered structure is no longer retained. This agglomeration and crystallization of the Ru layers upon annealing to form almost spherical crystallites results from the reduction of surface or interfacial energy from the amorphous, high-energy, non-equilibrium state of the as-prepared sample through diffusive rearrangement of the atoms. Proposed models for the mechanism of thin-film agglomeration include one analogous to the Rayleigh instability and another based on grain-boundary grooving in polycrystalline films. These models, however, are not necessarily appropriate to explain the agglomeration in the sub-nanometer amorphous Ru layers of Ru/C multilayers. The Ru-C phase diagram shows a wide miscibility gap, which indicates a preference for phase separation between these two materials and provides an additional driving force for agglomeration. In this paper, we study the evolution of the microstructures and layered structure via in-situ transmission electron microscopy (TEM) and attempt to determine the order of occurrence of agglomeration and crystallization in the Ru layers by observing the diffraction patterns.


Electronics ◽  
2021 ◽  
Vol 10 (4) ◽  
pp. 494
Author(s):  
Ekaterina Andriushchenko ◽  
Ants Kallaste ◽  
Anouar Belahcen ◽  
Toomas Vaimann ◽  
Anton Rassõlkin ◽  
...  

In recent decades, the genetic algorithm (GA) has been extensively used in the design optimization of electromagnetic devices. Despite its great merits, the GA's processing procedure is highly time-consuming. By contrast, the widely applied Taguchi optimization method is faster, with comparable effectiveness on certain optimization problems. This study explores the abilities of both methods in the optimization of a permanent magnet coupling, where the optimization objectives are minimizing the coupling volume and maximizing the transmitted torque. The optimal geometry of the coupling and the characteristics achieved by both methods are nearly identical: the magnetic torque density is enhanced by more than 20%, while the volume is reduced by 17%. Yet the Taguchi method is found to be more time-efficient and effective for the considered optimization problem. Thanks to additive manufacturing techniques, the initial design and the sophisticated geometry of the Taguchi-optimal designs are precisely fabricated. The performance of the coupling designs is validated using an experimental setup.
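The Taguchi method's time-efficiency comes from orthogonal arrays. The sketch below uses a standard L9 array to estimate the best level of each of three 3-level factors from only 9 runs instead of the 27 of a full factorial; the objective here is a toy smaller-is-better function, not the coupling model from the study:

```python
import numpy as np

# L9 orthogonal array: 9 runs covering 3 factors at 3 levels (0, 1, 2);
# every pair of columns contains each level combination exactly once.
L9 = np.array([
    [0, 0, 0], [0, 1, 1], [0, 2, 2],
    [1, 0, 1], [1, 1, 2], [1, 2, 0],
    [2, 0, 2], [2, 1, 0], [2, 2, 1],
])

def taguchi_best_levels(objective, levels):
    """Evaluate the 9 runs and, per factor, keep the level with the
    lowest mean response (smaller-is-better main-effects analysis)."""
    y = np.array([objective([levels[f][l] for f, l in enumerate(run)])
                  for run in L9])
    best = []
    for f in range(L9.shape[1]):
        means = [y[L9[:, f] == l].mean() for l in range(3)]
        best.append(int(np.argmin(means)))
    return best
```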


2021 ◽  
Vol 14 (1) ◽  
Author(s):  
Chaofeng Li ◽  
Xiaofeng Lin ◽  
Xing Ling ◽  
Shuo Li ◽  
Hao Fang

Abstract Background The biomanufacturing of d-glucaric acid has attracted increasing interest because it is one of the top value-added chemicals produced from biomass. Saccharomyces cerevisiae is regarded as an excellent host for d-glucaric acid production. Results The opi1 gene was knocked out because of its negative regulation on myo-inositol synthesis, which is the limiting step of d-glucaric acid production by S. cerevisiae. We then constructed the biosynthesis pathway of d-glucaric acid in S. cerevisiae INVSc1 opi1Δ and obtained two engineered strains, LGA-1 and LGA-C, producing record-breaking titers of d-glucaric acid: 9.53 ± 0.46 g/L and 11.21 ± 0.63 g/L d-glucaric acid from 30 g/L glucose and 10.8 g/L myo-inositol in fed-batch fermentation mode, respectively. However, LGA-1 was preferable because of its genetic stability and its superior performance in practical applications. There have been no reports on d-glucaric acid production from lignocellulose. Therefore, the biorefinery processes, including separated hydrolysis and fermentation (SHF), simultaneous saccharification and fermentation (SSF) and consolidated bioprocessing (CBP) were investigated and compared. CBP using an artificial microbial consortium composed of Trichoderma reesei (T. reesei) Rut-C30 and S. cerevisiae LGA-1 was found to have relatively high d-glucaric acid titers and yields after 7 d of fermentation, 0.54 ± 0.12 g/L d-glucaric acid from 15 g/L Avicel and 0.45 ± 0.06 g/L d-glucaric acid from 15 g/L steam-exploded corn stover (SECS), respectively. In an attempt to design the microbial consortium for more efficient CBP, the team consisting of T. reesei Rut-C30 and S. cerevisiae LGA-1 was found to be the best, with excellent work distribution and collaboration. Conclusions Two engineered S. cerevisiae strains, LGA-1 and LGA-C, with high titers of d-glucaric acid were obtained. This indicated that S. cerevisiae INVSc1 is an excellent host for d-glucaric acid production. 
Lignocellulose is a preferable substrate over myo-inositol. SHF, SSF, and CBP were studied, and CBP using an artificial microbial consortium of T. reesei Rut-C30 and S. cerevisiae LGA-1 was found to be promising because of its relatively high titer and yield. T. reesei Rut-C30 and S. cerevisiae LGA-1 were proven to be the best teammates for CBP. Further work should be done to improve the efficiency of this microbial consortium for d-glucaric acid production from lignocellulose.


Symmetry ◽  
2021 ◽  
Vol 13 (1) ◽  
pp. 136
Author(s):  
Wenxiao Li ◽  
Yushui Geng ◽  
Jing Zhao ◽  
Kang Zhang ◽  
Jianxin Liu

This paper explores the combination of a classic mathematical function, the hyperbolic tangent, with a metaheuristic algorithm, and proposes a novel hybrid genetic algorithm called NSGA-II-BnF for multi-objective decision making. Recently, many metaheuristic evolutionary algorithms have been proposed for tackling multi-objective optimization problems (MOPs). These algorithms demonstrate excellent capabilities and offer usable solutions to decision makers. However, their convergence performance may be challenged by MOPs with elaborate Pareto fronts, such as the CF, WFG, and UF problems, primarily due to the neglect of diversity. We solve this problem by proposing an algorithm with an elite exploitation strategy, which has two parts: first, we design a biased elite allocation strategy, which allocates computational resources appropriately to the elites of the population by crowding-distance-based roulette. Second, we propose a self-guided fast individual exploitation approach, which guides elites to generate neighbors through a symmetric exploitation operator based on the hyperbolic tangent function. Furthermore, we design a mechanism to broaden the algorithm's applicability, allowing decision makers to adjust the exploitation intensity according to their preferences. We compare the proposed NSGA-II-BnF with four other improved versions of NSGA-II (NSGA-IIconflict, rNSGA-II, RPDNSGA-II, and NSGA-II-SDR) and four competitive, widely used algorithms (MOEA/D-DE, dMOPSO, SPEA-II, and SMPSO) on 36 test problems (DTLZ1–DTLZ7, WFG1–WFG9, UF1–UF10, and CF1–CF10), measured using two widely used indicators: inverted generational distance (IGD) and hypervolume (HV). Experimental results demonstrate that NSGA-II-BnF exhibits superior performance to most of the algorithms on all test problems.
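A tanh-shaped neighbor-generation operator of the kind described might look like the following; the parameterization (`intensity` as the decision-maker's exploitation knob) is an illustrative assumption, not the paper's exact operator:

```python
import math
import random

def tanh_neighbor(elite, intensity=1.0, rng=random.Random(1)):
    """Generate a neighbor of an elite solution via a symmetric,
    tanh-shaped perturbation: tanh squashes a Gaussian step into
    (-1, 1), concentrating moves near the elite while still allowing
    larger jumps. `intensity` bounds the step size per coordinate."""
    return [x + intensity * math.tanh(rng.gauss(0.0, 1.0)) for x in elite]
```

Because |tanh| < 1, every coordinate of the neighbor stays strictly within `intensity` of the elite, which is what makes the knob a direct control on exploitation radius.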

