Unravelling the Sensitivity of Two Motif Structures Under Random Perturbation

Author(s): Suvankar Halder, Samrat Chatterjee, Nandadulal Bairagi

2021, Vol 15, pp. 174830262110084
Author(s): Jingsen Liu, Hongyuan Ji, Qingqing Liu, Yu Li

To improve the convergence speed and optimization accuracy of the bat algorithm, a bat optimization algorithm with moderate orientation toward the optimum and adaptive random perturbation is proposed. The algorithm introduces a nonlinear variation factor into the velocity update formula of the global search stage to maintain high diversity in the bat population, thereby enhancing its global exploration ability. In the local search stage, the position update equation is modified and a strategy of moving moderately toward the current optimum is adopted, which strengthens the algorithm's capacity for deep local exploitation. Finally, an adaptively decreasing random perturbation is applied to every bat whose position has been updated in each generation; this improves the algorithm's ability to escape local extrema and balances the broad global search of the early stage against the local search accuracy of the later stage. Simulation results show that the improved algorithm converges faster and attains higher optimization accuracy.
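The adaptively decreasing perturbation step can be sketched in a few lines of Python. This is an illustrative reading of the description above, not the authors' implementation: the linear decay schedule, the greedy acceptance rule, and the names (adaptive_perturbation, sphere, scale0) are all assumptions.

```python
import numpy as np

def sphere(x):
    """Example objective for minimization (an assumed test function)."""
    return np.sum(x ** 2)

def adaptive_perturbation(positions, fitness, t, t_max, f=sphere, scale0=0.5):
    """Adaptively decreasing random perturbation applied after the position
    update of generation t. The amplitude shrinks linearly toward zero, so
    early generations can jump out of local extrema while later generations
    only fine-tune. A perturbed bat is kept only if its fitness improves."""
    amp = scale0 * (1.0 - t / t_max)  # decreases with the generation index
    for i in range(len(positions)):
        trial = positions[i] + amp * np.random.uniform(-1, 1, positions[i].shape)
        f_trial = f(trial)
        if f_trial < fitness[i]:  # greedy acceptance of the perturbed position
            positions[i], fitness[i] = trial, f_trial
    return positions, fitness
```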


2018, Vol 540, pp. 26-59
Author(s): Sean O'Rourke, Van Vu, Ke Wang

Geophysics, 2016, Vol 81 (5), pp. R293-R305
Author(s): Sireesh Dadi, Richard Gibson, Kainan Wang

Upscaling log measurements acquired at high frequencies and correlating them with corresponding low-frequency values from surface seismic and vertical seismic profile data is a challenging task. We have applied a sampling technique called the reversible jump Markov chain Monte Carlo (RJMCMC) method to this problem. A key property of our approach is that it treats the number of unknowns itself as a parameter to be determined. Specifically, we considered upscaling as an inverse problem in which the number of coarse layers, the layer boundary depths, and the material properties are the unknowns. The method applies Bayesian inversion with RJMCMC sampling and uses simulated annealing to guide the optimization. At each iteration, the algorithm randomly moves a boundary in the current model, adds a new boundary, or deletes an existing boundary. In each case, a random perturbation is applied to the Backus-average values. Our examples show that the mismatch between seismograms computed from the upscaled model and the log velocities improves by 89% compared to the case in which the algorithm is allowed only to move boundaries. The layer boundary distributions produced by the RJMCMC algorithm can represent both sharp and gradual changes in lithology. The maximum deviation of the upscaled velocities from the Backus-average values is less than 10%, with most values close to zero.
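The trans-dimensional move step can be illustrated with a short Python sketch. This is an assumed, simplified reading of the iteration described above: the model is reduced to a sorted array of boundary depths, the proposal ratios and Jacobians of the birth/death moves are folded into the user-supplied log_posterior for brevity, and none of the names come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def rjmcmc_step(depths, log_posterior, temperature, z_min, z_max, sigma=5.0):
    """One reversible-jump MCMC iteration on a set of layer-boundary depths:
    perturb one boundary ('move'), add a boundary ('birth'), or remove one
    ('death'), then accept or reject with a simulated-annealing-tempered
    Metropolis rule."""
    move = rng.choice(["move", "birth", "death"])
    proposal = np.sort(depths.copy())
    if move == "move" and len(proposal) > 0:
        i = rng.integers(len(proposal))
        proposal[i] += sigma * rng.standard_normal()  # random perturbation
    elif move == "birth":
        proposal = np.sort(np.append(proposal, rng.uniform(z_min, z_max)))
    elif move == "death" and len(proposal) > 1:
        proposal = np.delete(proposal, rng.integers(len(proposal)))
    # Tempered acceptance: high temperature early (exploration), low late.
    log_alpha = (log_posterior(proposal) - log_posterior(depths)) / temperature
    if np.log(rng.uniform()) < log_alpha:
        return proposal
    return depths
```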


Author(s): M. Kamenskii, S. Pergamenchtchikov, M. Quincampoix

We consider boundary-value problems for second-order differential equations containing a Brownian motion (a random perturbation) and a small parameter, and we prove a special existence and uniqueness theorem for random solutions. We study the asymptotic behaviour of these solutions as the small parameter goes to zero and establish a stochastic averaging theorem for such equations. We also find explicit limits for the solutions as the small parameter goes to zero.
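One standard instance of such an averaging setup, stated here as an illustrative assumption rather than the paper's exact equations, is a second-order equation with a rapidly varying coefficient driven by Brownian motion:

```latex
% Illustrative averaging setup (assumed generic form, not the paper's model):
% a second-order equation with fast time t/\varepsilon and Brownian motion W_t,
% subject to two-point boundary conditions.
\[
  \ddot{x}_\varepsilon(t)
    = f\!\left(\tfrac{t}{\varepsilon},\, x_\varepsilon(t)\right)
      + \sigma\, \dot{W}_t ,
  \qquad x_\varepsilon(0) = a, \quad x_\varepsilon(1) = b .
\]
% Stochastic averaging replaces f by its time average
\[
  \bar{f}(x) = \lim_{T \to \infty} \frac{1}{T} \int_0^T f(s, x)\, ds ,
\]
% so that, as \varepsilon \to 0, x_\varepsilon converges to the solution of
% the averaged boundary-value problem
% \ddot{\bar{x}} = \bar{f}(\bar{x}) + \sigma \dot{W}_t.
```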


Author(s): Xiaotong Lu, Han Huang, Weisheng Dong, Xin Li, Guangming Shi

Network pruning has been proposed as a remedy for the over-parameterization problem of deep neural networks. However, its value has recently been challenged, especially from the perspective of neural architecture search (NAS). We challenge the conventional wisdom of pruning-after-training by proposing a joint search-and-training approach that directly learns a compact network from scratch. By treating pruning as a search strategy, we present two new insights in this paper: 1) the search space of network pruning can be expanded by associating each filter with a learnable weight; 2) joint search-and-training can be conducted iteratively to maximize learning efficiency. More specifically, we propose a coarse-to-fine tuning strategy that iteratively samples and updates compact sub-networks to approximate the target network. The weights associated with the network filters are updated accordingly by joint search-and-training to reflect the knowledge learned in the NAS space. Moreover, we introduce strategies of random perturbation (inspired by Monte Carlo methods) and flexible thresholding (inspired by reinforcement learning) to adjust the weight and size of each layer. Extensive experiments on ResNet and VGGNet demonstrate the superior performance of the proposed method on popular datasets including CIFAR10, CIFAR100, and ImageNet.
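The idea of associating each filter with a learnable weight, randomly perturbing it, and thresholding it to sample a sub-network can be sketched in PyTorch. This is a hypothetical illustration, not the authors' code: GatedConv, the straight-through mask, and the noise_std and threshold parameters are all assumptions.

```python
import torch
import torch.nn as nn

class GatedConv(nn.Module):
    """A convolution whose output filters are scaled by learnable gates.

    During the search phase the gates are randomly perturbed (Monte-Carlo
    style) and binarized against a flexible threshold, so each forward pass
    evaluates a sampled compact sub-network while the gates stay trainable."""

    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
        self.gate = nn.Parameter(torch.ones(out_ch))  # one weight per filter

    def forward(self, x, threshold=0.5, noise_std=0.0):
        g = self.gate
        if self.training and noise_std > 0:
            g = g + noise_std * torch.randn_like(g)  # random perturbation
        mask = (g > threshold).float()               # flexible thresholding
        # Straight-through trick: binary mask on the forward pass,
        # gradients flow to the soft gate on the backward pass.
        mask = mask + g - g.detach()
        return self.conv(x) * mask.view(1, -1, 1, 1)
```

Filters whose gates stay below the threshold contribute nothing to the output and can be pruned away after the search converges.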


2000
Author(s): Lalit Vedula, N. Sri Namachchivaya

The dynamics of a shallow arch subjected to small random external and parametric excitation is investigated in this work. We develop rigorous methods to replace, in a limiting regime, the original higher-dimensional system of equations by a simpler, constructive and rational approximation: a low-dimensional model of the dynamical system. To this end, we study the equations as a random perturbation of a two-dimensional Hamiltonian system. We achieve the model reduction through stochastic averaging, and the reduced Markov process takes its values on a graph with certain gluing conditions at the vertices of the graph. Examination of the reduced Markov process on the graph yields important quantities such as the mean exit time and the stationary probability density function.
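The description matches the standard Freidlin–Wentzell picture of randomly perturbed Hamiltonian systems; the following sketch states that generic setup as an assumption, not the paper's exact model.

```latex
% Illustrative random perturbation of a two-dimensional Hamiltonian system
% (assumed generic Freidlin--Wentzell form, not the paper's exact equations):
\[
  \dot{q}_\varepsilon = \frac{\partial H}{\partial p}(q_\varepsilon, p_\varepsilon),
  \qquad
  \dot{p}_\varepsilon = -\frac{\partial H}{\partial q}(q_\varepsilon, p_\varepsilon)
    + \varepsilon\, b(q_\varepsilon, p_\varepsilon)
    + \sqrt{\varepsilon}\, \sigma\, \dot{W}_t .
\]
% On time scales of order 1/\varepsilon, the slow variable H(q,p) converges
% to a diffusion on the graph obtained by collapsing each connected component
% of a level set \{H = h\} to a point, with gluing conditions at the interior
% vertices where level sets merge.
```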

