Dynamic particle swarm optimization of biomolecular simulation parameters with flexible objective functions

Author(s):  
Marie Weiel ◽  
Markus Götz ◽  
André Klein ◽  
Daniel Coquelin ◽  
Ralf Floca ◽  
...  

Molecular simulations are a powerful tool to complement and interpret ambiguous experimental data on biomolecules to obtain structural models. Such data-assisted simulations often rely on parameters, the choice of which is highly non-trivial and crucial to performance. The key challenge is weighting experimental information with respect to the underlying physical model. We introduce FLAPS, a self-adapting variant of dynamic particle swarm optimization, to overcome this parameter selection problem. FLAPS is suited for the optimization of composite objective functions that depend on both the optimization parameters and additional, a priori unknown weighting parameters, which substantially influence the search-space topology. These weighting parameters are learned at runtime, yielding a dynamically evolving and iteratively refined search-space topology. As a practical example, we show how FLAPS can be used to find functional parameters for small-angle X-ray scattering-guided protein simulations.
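To make the idea of a runtime-learned weighting parameter concrete, here is a minimal sketch of a PSO loop whose composite objective re-balances a physics term and a data term at every iteration. It is not the authors' FLAPS implementation; the term definitions, the weighting heuristic, and all coefficients are assumptions chosen purely for illustration.

```python
# Hypothetical sketch of a dynamic PSO whose composite objective re-weights its
# terms at runtime; NOT the authors' FLAPS code, only an illustration of an
# iteratively refined search-space topology.
import numpy as np

rng = np.random.default_rng(0)

def physics_term(x):              # stand-in for the force-field energy
    return np.sum(x**2, axis=-1)

def data_term(x):                 # stand-in for the SAXS data-fit score
    return np.sum(np.abs(x - 1.5), axis=-1)

def objective(x, w):              # composite objective with runtime-learned weight w
    return physics_term(x) + w * data_term(x)

n_particles, dim, iters = 20, 4, 100
pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
weight = 1.0                      # a priori unknown weighting parameter

for _ in range(iters):
    # re-estimate the weight from the current swarm so both terms stay comparable,
    # which reshapes the search-space topology at every iteration
    weight = np.mean(physics_term(pos)) / (np.mean(data_term(pos)) + 1e-12)
    improved = objective(pos, weight) < objective(pbest, weight)
    pbest[improved] = pos[improved]
    gbest = pbest[np.argmin(objective(pbest, weight))]
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel

print("best simulation parameters found:", gbest)
```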

Author(s):  
Mehmet Sinan Hasanoglu ◽  
Melik Dolen

Constrained optimization problems constitute an important fraction of optimization problems in the mechanical engineering domain. Many of these problems are highly constrained, and when the objective functions are very costly, a specialized approach that improves the constraint satisfaction level of the whole population while also searching for the optimum is useful. A new algorithm called Feasibility Enhanced Particle Swarm Optimization (FEPSO), which treats feasible and infeasible particles differently, is introduced. Infeasible particles in FEPSO do not evaluate the objective function; they fly based only on social attraction driven by a single violated constraint, called the activated constraint, which is selected at each iteration according to constraint priorities. Their flight occurs only along the dimensions of the search space to which the activated constraint is sensitive. To ensure progressive improvement of constraint satisfaction, FEPSO does not allow particles to violate a constraint they already satisfy. The highly constrained four-stage gear train problem and the two variants introduced in this paper are used to assess the effectiveness of FEPSO. The results suggest that FEPSO is effective and consistent in obtaining feasible points, finding good solutions, and improving the constraint satisfaction level of the swarm as a whole.
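The following sketch illustrates, under simplifying assumptions, how feasible and infeasible particles can be treated differently: infeasible particles skip the objective evaluation and move only along the dimensions to which a single activated constraint is sensitive. It is not the FEPSO code; the constraints, the priority rule, and the coefficients are hypothetical, and the check that forbids re-violating a satisfied constraint is only indicated in a comment.

```python
# Illustrative sketch (not the authors' FEPSO) of treating feasible and
# infeasible particles differently.
import numpy as np

rng = np.random.default_rng(1)

def objective(x):                 # costly objective, evaluated for feasible particles only
    return np.sum((x - 0.5) ** 2)

# g(x) <= 0 means the constraint is satisfied; "mask" marks its sensitive dimensions
constraints = [
    {"g": lambda x: x[0] + x[1] - 1.0, "mask": np.array([True, True, False])},
    {"g": lambda x: -x[2],             "mask": np.array([False, False, True])},
]

n, dim = 15, 3
pos = rng.uniform(-2.0, 2.0, (n, dim))
vel = np.zeros_like(pos)

for it in range(60):
    viol = np.array([[max(c["g"](p), 0.0) for c in constraints] for p in pos])
    feasible = viol.sum(axis=1) == 0
    activated = int(np.argmax(viol.sum(axis=0)))   # crude stand-in for priority-based selection

    # leader for the social term: best feasible particle if any, else the least-violating one
    if feasible.any():
        leader = pos[feasible][int(np.argmin([objective(p) for p in pos[feasible]]))]
    else:
        leader = pos[int(np.argmin(viol[:, activated]))]

    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    full_step = 0.7 * vel + 1.5 * r1 * (leader - pos)   # feasible particles: usual flight
    social = 1.5 * r2 * (leader - pos)                  # infeasible particles: social term only...
    social[:, ~constraints[activated]["mask"]] = 0.0    # ...and only along sensitive dimensions
    vel = np.where(feasible[:, None], full_step, social)
    pos = pos + vel
    # (the full method also rejects moves that break an already satisfied constraint)

feas = [p for p in pos if all(c["g"](p) <= 0 for c in constraints)]
if feas:
    print("best feasible value found:", min(objective(p) for p in feas))
```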


Author(s):  
Ravichander Janapati ◽  
Ch. Balaswamy ◽  
K. Soundararajan

Localization is a key research area in wireless sensor networks. Finding the exact position of a node is known as localization. Different algorithms have been proposed. Here we consider a cooperative localization algorithm with censoring schemes using the Cramér-Rao bound (CRB). This censoring scheme can improve positioning accuracy and reduce computational complexity, traffic, and latency. Particle swarm optimization (PSO) is a population-based search algorithm inspired by swarm intelligence such as the social behavior of birds, bees, or schools of fish. To improve the algorithm's efficiency and localization precision, this paper presents an objective function based on the normal distribution of the ranging error and a method for obtaining the search space of the particles. Distributed localization of wireless sensor networks is proposed using PSO with the best censoring technique selected via the CRB. The proposed method shows better results in terms of position accuracy, latency, and complexity.
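As an illustration of an objective built from normally distributed ranging errors, the sketch below fits a node position by minimizing the Gaussian negative log-likelihood of noisy anchor ranges with a plain PSO. It is not the paper's algorithm (the censoring step and CRB-based selection are omitted), and the anchor layout, noise level, and PSO coefficients are assumptions.

```python
# Sketch of a PSO objective derived from a normal ranging-error model.
import numpy as np

rng = np.random.default_rng(2)

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])  # known positions
true_pos = np.array([3.0, 7.0])
sigma = 0.3                                            # ranging-error standard deviation
ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, sigma, len(anchors))

def objective(p):
    # negative log-likelihood under the normal ranging-error model (constants dropped)
    residuals = ranges - np.linalg.norm(anchors - p, axis=1)
    return np.sum(residuals**2) / (2 * sigma**2)

# search space bounded by the anchors' bounding box, a simple stand-in for the
# paper's method of deriving the particles' search region
lo, hi = anchors.min(axis=0), anchors.max(axis=0)
pos = rng.uniform(lo, hi, (30, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([objective(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)]

for _ in range(100):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    val = np.array([objective(p) for p in pos])
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[np.argmin(pbest_val)]

print("estimated position:", gbest, "true position:", true_pos)
```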


Geophysics ◽  
2019 ◽  
Vol 84 (5) ◽  
pp. R767-R781 ◽  
Author(s):  
Mattia Aleardi ◽  
Silvio Pierini ◽  
Angelo Sajeva

We have compared the performances of six recently developed global optimization algorithms: imperialist competitive algorithm, firefly algorithm (FA), water cycle algorithm (WCA), whale optimization algorithm (WOA), fireworks algorithm (FWA), and quantum particle swarm optimization (QPSO). These methods have been introduced in the past few years and have found very limited or no application to geophysical exploration problems thus far. We benchmark the algorithms’ results against particle swarm optimization (PSO), which is a popular and well-established global search method. In particular, we are interested in assessing the exploration and exploitation capabilities of each method as the dimension of the model space increases. First, we test the different algorithms on two multiminima and two convex analytic objective functions. Then, we compare them on residual statics corrections and 1D elastic full-waveform inversion, which are highly nonlinear geophysical optimization problems. Our results demonstrate that FA, FWA, and WOA are characterized by optimal exploration capabilities because they outperform the other approaches on optimization problems with multiminima objective functions. In contrast, QPSO and PSO have good exploitation capabilities because they easily solve ill-conditioned optimizations characterized by a nearly flat valley in the objective function. QPSO, PSO, and WCA offer a good compromise between exploitation and exploration.
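For orientation, the sketch below shows the kind of analytic test functions typically used to separate exploration from exploitation: a multiminima function (Rastrigin) and an ill-conditioned one with a nearly flat valley (Rosenbrock), together with a tiny benchmarking harness. The specific functions, dimensions, and the random-search placeholder are assumptions, not the paper's setup.

```python
# Minimal benchmarking sketch; any of the compared optimizers would plug in
# where random_search is used as a placeholder.
import numpy as np

def rastrigin(x):                  # multiminima: probes exploration
    x = np.asarray(x)
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def rosenbrock(x):                 # nearly flat valley: probes exploitation
    x = np.asarray(x)
    return np.sum(100 * (x[1:] - x[:-1]**2) ** 2 + (1 - x[:-1]) ** 2)

def benchmark(optimizer, objective, dim, runs=10):
    """Score an optimizer by the mean best value over independent runs, so its
    behaviour can be compared as the dimension of the model space grows."""
    results = [optimizer(objective, dim, seed=s) for s in range(runs)]
    return float(np.mean(results)), float(np.std(results))

def random_search(objective, dim, seed, evals=2000):
    # placeholder optimizer; PSO, QPSO, FA, WOA, etc. would replace this
    rng = np.random.default_rng(seed)
    samples = rng.uniform(-5, 5, (evals, dim))
    return min(objective(s) for s in samples)

for name, f in [("rastrigin", rastrigin), ("rosenbrock", rosenbrock)]:
    for dim in (2, 10, 30):
        mean, std = benchmark(random_search, f, dim)
        print(f"{name:10s} dim={dim:2d}  mean best = {mean:10.3f} +/- {std:.3f}")
```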


2013 ◽  
Vol 2013 ◽  
pp. 1-7 ◽  
Author(s):  
Hongtao Ye ◽  
Wenguang Luo ◽  
Zhenqiang Li

This paper presents an analysis of the relationship between particle velocity and convergence in particle swarm optimization. Premature convergence is caused by the decay of particle velocities in the search space, which leads to a total implosion and, ultimately, fitness stagnation of the swarm. An improved algorithm that introduces a velocity differential evolution (DE) strategy into hierarchical particle swarm optimization (H-PSO) is proposed to improve its performance. DE is employed to regulate the particle velocity, rather than the traditional particle position, whenever the best result has not improved after several iterations. Experiments on benchmark functions demonstrate the effectiveness of the proposed method.
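A hedged sketch of the core idea follows: when the best value stagnates for several iterations, a DE/rand/1 mutation and crossover step is applied to the particle velocities instead of the positions. This is a flat-PSO simplification, not the paper's H-PSO code, and the stall threshold, DE parameters, and test function are assumptions.

```python
# Velocity-space DE step triggered by stagnation of the swarm's best value.
import numpy as np

rng = np.random.default_rng(3)

def rastrigin(x):
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

n, dim, F, CR, stall_limit = 20, 5, 0.5, 0.9, 5
pos = rng.uniform(-5.12, 5.12, (n, dim))
vel = rng.uniform(-1, 1, (n, dim))
pbest = pos.copy()
pbest_val = np.array([rastrigin(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)]
stall, last_best = 0, pbest_val.min()

for _ in range(200):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)

    if stall >= stall_limit:
        # DE mutation/crossover on velocities: v_i <- v_a + F*(v_b - v_c), per dimension
        for i in range(n):
            a, b, c = rng.choice([j for j in range(n) if j != i], 3, replace=False)
            mutant = vel[a] + F * (vel[b] - vel[c])
            cross = rng.random(dim) < CR
            vel[i] = np.where(cross, mutant, vel[i])
        stall = 0

    pos = pos + vel
    val = np.array([rastrigin(p) for p in pos])
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[np.argmin(pbest_val)]
    best = pbest_val.min()
    stall = 0 if best < last_best - 1e-12 else stall + 1
    last_best = min(last_best, best)

print("best value:", pbest_val.min())
```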


2012 ◽  
Vol 2012 ◽  
pp. 1-21 ◽  
Author(s):  
S. Sakinah S. Ahmad ◽  
Witold Pedrycz

The study is concerned with data and feature reduction in fuzzy modeling. As these reduction activities are advantageous to fuzzy models in terms of both the effectiveness of their construction and the interpretation of the resulting models, their realization deserves particular attention. The formation of a subset of meaningful features and a subset of essential instances is discussed in the context of fuzzy rule-based models. In contrast to existing studies, which focus predominantly on feature selection (namely, a reduction of the input space), the position advocated here is that, to be effective for the design of the fuzzy model, the reduction has to involve both data and features. The reduction problem is combinatorial in nature and, as such, calls for the use of advanced optimization techniques. In this study, we use particle swarm optimization (PSO) as the optimization vehicle for forming a subset of features and data (instances) with which to design a fuzzy model. Given the dimensionality of the problem (the search space involves both features and instances), we discuss a cooperative version of PSO along with a clustering mechanism that partitions the overall search space. Finally, a series of numeric experiments using several machine learning data sets is presented.
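The sketch below illustrates only the cooperative encoding under stated assumptions: one sub-swarm selects features, the other selects instances, and each candidate is evaluated together with the best part held by the other sub-swarm (the usual context-vector idea in cooperative PSO). A 1-nearest-neighbour classifier stands in for the fuzzy rule-based model, and the data, update rule, and coefficients are hypothetical.

```python
# Cooperative selection of features and instances with two coupled sub-swarms.
import numpy as np

rng = np.random.default_rng(4)

X = rng.normal(size=(100, 8))                       # toy data: 100 instances, 8 features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)       # labels depend on features 0 and 3

def fitness(feat_mask, inst_mask):
    """Error of a 1-NN classifier built only from the selected features and
    instances (a stand-in for the fuzzy model); smaller is better."""
    if feat_mask.sum() == 0 or inst_mask.sum() < 2:
        return 1.0
    Xs, ys = X[inst_mask][:, feat_mask], y[inst_mask]
    d = np.linalg.norm(X[:, feat_mask][:, None, :] - Xs[None, :, :], axis=2)
    pred = ys[np.argmin(d, axis=1)]
    return float(np.mean(pred != y))

def to_mask(v):                                     # continuous particle -> binary selection
    return v > 0.5

n = 15
feat_swarm = rng.random((n, X.shape[1]))
inst_swarm = rng.random((n, X.shape[0]))
best_feat, best_inst = to_mask(feat_swarm[0]), to_mask(inst_swarm[0])

for _ in range(30):
    # evaluate each sub-swarm against the other sub-swarm's current best part
    f_scores = [fitness(to_mask(p), best_inst) for p in feat_swarm]
    best_feat = to_mask(feat_swarm[int(np.argmin(f_scores))])
    i_scores = [fitness(best_feat, to_mask(p)) for p in inst_swarm]
    best_inst = to_mask(inst_swarm[int(np.argmin(i_scores))])
    # crude position update pulling towards each sub-swarm's best (social term only)
    feat_swarm += 0.3 * rng.random(feat_swarm.shape) * (feat_swarm[int(np.argmin(f_scores))] - feat_swarm)
    inst_swarm += 0.3 * rng.random(inst_swarm.shape) * (inst_swarm[int(np.argmin(i_scores))] - inst_swarm)

print("selected features:", np.where(best_feat)[0], "kept instances:", int(best_inst.sum()))
```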


2021 ◽  
Author(s):  
Ahlem Aboud ◽  
Nizar Rokbani ◽  
Seyedali Mirjalili ◽  
Abdulrahman M. Qahtani ◽  
Omar Almutiry ◽  
...  

Multifactorial Optimization (MFO) and Evolutionary Transfer Optimization (ETO) are new and challenging optimization paradigms for which the multi-objective particle swarm optimization system (MOPSO) may be interesting despite its limitations. MOPSO has been widely used in static and dynamic multi-objective optimization problems, while its potential for multi-task optimization has not been completely unveiled. This paper proposes a new Distributed Multifactorial Particle Swarm Optimization algorithm (DMFPSO) for multi-task optimization. The new system has a distributed architecture built on a set of sub-swarms that are dynamically constructed based on the number of optimization tasks and each particle's skill factor. DMFPSO is designed to handle convergence and diversity separately. It uses the Beta function to provide two optimized profiles with dynamic switching behaviour. The first profile, Beta-1, is used for exploration, which aims to steer the search toward potential solutions, while the second, Beta-2, is used to enhance convergence. The system is tested on 36 benchmarks provided by the CEC’2021 Evolutionary Transfer Multi-Objective Optimization Competition. Comparisons with state-of-the-art methods are performed using the Inverted Generational Distance (IGD) and Mean Inverted Generational Distance (MIGD) metrics. Based on the MSS metric, this proposal obtains the best results on most of the tested problems.
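The fragment below illustrates only the idea of Beta-shaped coefficient profiles with a switch between an exploration phase and a convergence phase; it is not DMFPSO (the multi-task decomposition, skill factors, and sub-swarms are not reproduced), and the Beta parameters and switching rule are assumptions.

```python
# Two Beta-distributed coefficient profiles: a spread-out one for exploration
# ("Beta-1") and a concentrated one for convergence ("Beta-2").
import numpy as np

rng = np.random.default_rng(5)

def sphere(x):
    return np.sum(x**2)

n, dim, iters = 20, 6, 150
pos = rng.uniform(-10, 10, (n, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([sphere(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)]

for t in range(iters):
    explore = t < iters // 2                     # crude stand-in for a dynamic switch
    if explore:
        c = 2.5 * rng.beta(0.5, 0.5, pos.shape)  # "Beta-1": wide spread, favours exploration
    else:
        c = 2.5 * rng.beta(5.0, 2.0, pos.shape)  # "Beta-2": concentrated, favours convergence
    r = rng.random(pos.shape)
    vel = 0.6 * vel + c * (pbest - pos) + c * r * (gbest - pos)
    pos = pos + vel
    val = np.array([sphere(p) for p in pos])
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[np.argmin(pbest_val)]

print("best value found:", pbest_val.min())
```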


2020 ◽  
Vol 21 (1) ◽  
Author(s):  
Zhaojuan Zhang ◽  
Wanliang Wang ◽  
Ruofan Xia ◽  
Gaofeng Pan ◽  
Jiandong Wang ◽  
...  

Background: Reconstructing ancestral genomes is one of the central problems in genome rearrangement analysis, since finding the most likely true ancestor is of significant importance in phylogenetic reconstruction. Large-scale genome rearrangements can provide essential insights into evolutionary processes. However, when the genomes are large and distant, classical median solvers have failed to adequately address these challenges due to the exponential growth of the search space. Consequently, ancestral genome inference remains a task of paramount importance that continues to challenge current methods, and its difficulty is further increased by the ongoing rapid accumulation of whole-genome data.

Results: In response to these challenges, we provide two contributions for ancestral genome inference. First, an improved discrete quantum-behaved particle swarm optimization algorithm (IDQPSO), which averages two of the fitness values, is proposed to address the discrete search space. Second, we incorporate DCJ sorting into IDQPSO (IDQPSO-Median). In comparison with the other methods, when the genomes are large and distant, IDQPSO-Median has the lowest median score, the highest adjacency accuracy, and the closest distance to the true ancestor. In addition, we have integrated our IDQPSO-Median approach with the GRAPPA framework. Our experiments show that the resulting phylogenetic method, built on IDQPSO-Median, is accurate and effective.

Conclusions: Our experimental results demonstrate the advantages of the IDQPSO-Median approach over the other methods when the genomes are large and distant. Evaluated comprehensively, IDQPSO-Median achieves better scalability than existing algorithms. Moreover, our results on simulated and real datasets confirm that IDQPSO-Median, when integrated with the GRAPPA framework, outperforms other heuristics in terms of accuracy, while still inferring phylogenies that were equivalent or close to the true trees within 5 days of computation, which is far beyond the difficulty level that GRAPPA alone can handle.
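For orientation, here is a sketch of the standard continuous quantum-behaved PSO update (positions sampled around a per-particle attractor using the mean of the personal bests). The paper's IDQPSO operates on a discrete permutation space with DCJ-based fitness and averages two fitness values, none of which is reproduced here; the test function and coefficients are assumptions.

```python
# Continuous QPSO sketch: positions are drawn around a local attractor rather
# than updated through velocities.
import numpy as np

rng = np.random.default_rng(6)

def sphere(x):
    return np.sum(x**2)

n, dim, iters = 25, 5, 200
pos = rng.uniform(-10, 10, (n, dim))
pbest, pbest_val = pos.copy(), np.array([sphere(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)]

for t in range(iters):
    beta = 1.0 - 0.5 * t / iters                  # contraction-expansion coefficient
    mbest = pbest.mean(axis=0)                    # mean of all personal bests
    phi = rng.random(pos.shape)
    attractor = phi * pbest + (1 - phi) * gbest   # local attractor per particle/dimension
    u = rng.random(pos.shape)
    sign = np.where(rng.random(pos.shape) < 0.5, 1.0, -1.0)
    pos = attractor + sign * beta * np.abs(mbest - pos) * np.log(1.0 / u)
    val = np.array([sphere(p) for p in pos])
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[np.argmin(pbest_val)]

print("best value found:", pbest_val.min())
```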


2016 ◽  
Vol 138 (8) ◽  
Author(s):  
Forrest W. Flocker ◽  
Ramiro H. Bravo

The particle swarm optimization (PSO) method is becoming a popular optimizer within the mechanical design community because of its simplicity and its ability to handle a wide variety of objective functions that characterize a proposed design. Typical examples in mechanical design are nonlinear objective functions with many constraints, which usually arise from the various design specifications. The method is particularly attractive for mechanical design because it can handle the discontinuous functions that occur when the designer must choose from a discrete set of standard sizes. However, as with other optimizers, the method is susceptible to converging to a local rather than the global minimum. In this paper, convergence criteria for the PSO method are investigated and an algorithm is proposed that gives the user a high degree of confidence in finding the global minimum. The proposed algorithm is tested against five benchmark optimization problems, and the results are used to develop specific guidelines for implementation.
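A sketch of the general approach, under assumed parameter values rather than the paper's specific criteria: a basic PSO run is stopped by a stall-based convergence test, and agreement of the best value across independent restarts is used as a measure of confidence that the reported minimum is global rather than local.

```python
# PSO wrapped in a convergence test plus independent restarts.
import numpy as np

def rastrigin(x):
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def pso_run(objective, dim, seed, swarm=30, max_iter=300, tol=1e-8, stall_limit=30):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5.12, 5.12, (swarm, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)]
    stall, last = 0, pbest_val.min()
    for _ in range(max_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        val = np.array([objective(p) for p in pos])
        better = val < pbest_val
        pbest[better], pbest_val[better] = pos[better], val[better]
        gbest = pbest[np.argmin(pbest_val)]
        best = pbest_val.min()
        stall = 0 if last - best > tol else stall + 1   # convergence criterion: stalled best value
        last = best
        if stall >= stall_limit:
            break
    return last

# repeated independent runs: if most restarts agree on the best value found,
# confidence that it is the global minimum increases
results = [pso_run(rastrigin, dim=3, seed=s) for s in range(10)]
best = min(results)
agree = sum(r - best < 1e-3 for r in results)
print(f"best value {best:.6f}, reproduced in {agree}/10 restarts")
```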

