A Study on Many-Objective Optimization Using the Kriging-Surrogate-Based Evolutionary Algorithm Maximizing Expected Hypervolume Improvement

2015 ◽  
Vol 2015 ◽  
pp. 1-15 ◽  
Author(s):  
Chang Luo ◽  
Koji Shimoyama ◽  
Shigeru Obayashi

The many-objective optimization performance of the Kriging-surrogate-based evolutionary algorithm (EA), which maximizes expected hypervolume improvement (EHVI) when updating the Kriging model, is investigated and compared with that of the expected improvement (EI) and estimation (EST) updating criteria in this paper. Numerical experiments are conducted on the 3- to 15-objective DTLZ1-7 problems. In the experiments, an exact hypervolume-calculation algorithm is used for problems with fewer than six objectives, while an approximate hypervolume-calculation algorithm based on Monte Carlo sampling is adopted for problems with more objectives. The results indicate that, in the unconstrained case, EHVI is a highly competitive updating criterion for Kriging-model- and EA-based many-objective optimization, especially when the test problem is complex and the number of objectives or design variables is large.
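The Monte Carlo hypervolume approximation mentioned above admits a compact implementation. Below is a minimal sketch assuming minimization and a user-supplied reference point; the function and variable names are illustrative, not the authors' code:

```python
import numpy as np

def mc_hypervolume(front, ref_point, n_samples=100_000, seed=0):
    """Estimate the hypervolume dominated by `front` (minimization)
    relative to `ref_point` via Monte Carlo sampling."""
    front = np.asarray(front)        # shape (n_points, n_objectives)
    ref = np.asarray(ref_point)      # worst acceptable value per objective
    lo = front.min(axis=0)           # lower corner of the bounding box
    rng = np.random.default_rng(seed)
    # Sample uniformly in the box [lo, ref] that contains the dominated region.
    samples = rng.uniform(lo, ref, size=(n_samples, ref.size))
    # A sample is dominated if some front point is <= it in every objective.
    dominated = (front[None, :, :] <= samples[:, None, :]).all(axis=2).any(axis=1)
    box_volume = np.prod(ref - lo)
    return dominated.mean() * box_volume

# Example: two points of a 3-objective front, reference point at (1, 1, 1).
front = [[0.2, 0.5, 0.4], [0.6, 0.1, 0.3]]
print(mc_hypervolume(front, ref_point=[1.0, 1.0, 1.0]))
```

Exact hypervolume algorithms avoid the sampling error of this estimator but their cost grows steeply with the number of objectives, which is why the approximation takes over for the higher-dimensional problems.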

2021 ◽  
Author(s):  
Mashall Aryan

The solution to many science and engineering problems involves identifying the minimum or maximum of an unknown continuous function whose evaluation inflicts non-negligible costs in terms of resources such as money, time, human attention or computational processing. In such a case, the choice of new points to evaluate is critical. A successful approach has been to choose these points by considering a distribution over plausible surfaces, conditioned on all previous points and their evaluations. In this sequential two-step strategy, known as Bayesian Optimization, a prior is first defined over possible functions and updated to a posterior in light of the available observations. Then, using this posterior, namely the surrogate model, an infill criterion is formed and used to find the next location to sample. By far the most common prior distribution and infill criterion are the Gaussian Process and Expected Improvement, respectively.

The popularity of Gaussian Processes in Bayesian Optimization is partially due to their ability to represent the posterior in closed form. Nevertheless, the Gaussian Process is afflicted with several shortcomings that directly affect its performance: inference scales poorly with the amount of data, numerical stability degrades with the number of data points, and strong assumptions about the observation model are required, which might not be consistent with reality. These drawbacks motivate the search for better alternatives. This thesis studies the application of neural networks to enhance Bayesian Optimization, proposing several Bayesian Optimization methods that use neural networks either as surrogates or in the infill criterion.

This thesis introduces a novel Bayesian Optimization method in which Bayesian Neural Networks (BNNs) are used as the surrogate. This reduces the computational complexity of inference in the surrogate from cubic in the number of observations, as in the GP, to linear. Different variants of BNNs are put into practice and inferred using Monte Carlo sampling. The results show that the Monte Carlo BNN surrogate performs better than, or at least comparably to, Gaussian-Process-based Bayesian Optimization methods on a set of benchmark problems.

This work also develops a fast Bayesian Optimization method with an efficient surrogate-building process. This algorithm uses Bayesian Random-Vector Functional Link Networks as surrogates. In this family of models, inference is performed on only a small subset of the model parameters, while the rest are randomly drawn from a prior. The proposed methods are tested on a set of benchmark continuous functions and hyperparameter optimization problems, and the results show they are competitive with state-of-the-art Bayesian Optimization methods.

This study further proposes a novel neural-network-based infill criterion, in which locations to sample are found by minimizing the joint conditional likelihood of the new point and the parameters of a neural network. The results show that in Bayesian Optimization methods with BNN surrogates, this new infill criterion outperforms Expected Improvement.

Finally, this thesis presents order-preserving generative models and uses them in a variational Bayesian context to infer Implicit Variational Bayesian Neural Network (IVBNN) surrogates for a new Bayesian Optimization method. This inference mechanism is more efficient and scalable than Monte Carlo sampling. The results show that IVBNN can outperform the Monte Carlo BNN in Bayesian optimization of the hyperparameters of machine learning models.
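The two-step strategy described above can be made concrete in a few lines. Below is a minimal sketch of a Bayesian Optimization loop using a Gaussian Process surrogate and Expected Improvement; scikit-learn's GP and the toy objective are illustrative stand-ins, not the thesis's neural-network surrogates:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def expected_improvement(mu, sigma, best):
    """EI for minimization: E[max(best - f(x), 0)] under a Gaussian posterior."""
    sigma = np.maximum(sigma, 1e-12)
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def objective(x):                      # expensive black box (toy stand-in)
    return np.sin(3 * x) + 0.5 * x**2

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(4, 1))    # initial design
y = objective(X).ravel()
grid = np.linspace(-2, 2, 400).reshape(-1, 1)

for _ in range(10):                    # sequential infill
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)  # prior -> posterior
    mu, sigma = gp.predict(grid, return_std=True)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.vstack([X, [x_next]])       # evaluate and augment the data
    y = np.append(y, objective(x_next))

print("best found:", X[np.argmin(y)].item(), y.min())
```

Replacing `GaussianProcessRegressor` with a BNN whose posterior is approximated by Monte Carlo sampling yields the family of surrogates the thesis studies.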


2005 ◽  
Vol 04 (02) ◽  
pp. 397-409 ◽  
Author(s):  
ANTHONY SCEMAMA

An algorithm is introduced for the search of a volume, in three-dimensional space, which maximizes the probability of finding ν_α up electrons and ν_β down electrons inside the volume, with all other electrons outside it. This search is performed after Variational Monte Carlo sampling of the N-particle density generated by the wave function.
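Given configurations sampled from the N-particle density, the probability for a candidate volume reduces to counting. A minimal sketch of that counting step, assuming a box-shaped trial volume and illustrative names (the paper's actual search over volumes is not shown):

```python
import numpy as np

def volume_probability(configs, spins, inside, n_alpha, n_beta):
    """Fraction of sampled N-electron configurations in which exactly
    `n_alpha` up and `n_beta` down electrons lie inside the volume.

    configs: (n_samples, N, 3) electron positions drawn from |Psi|^2
    spins:   (N,) array of +1 (up) / -1 (down)
    inside:  function mapping (..., 3) positions -> boolean mask
    """
    mask = inside(configs)                       # (n_samples, N)
    n_up = (mask & (spins == 1)).sum(axis=1)
    n_dn = (mask & (spins == -1)).sum(axis=1)
    return np.mean((n_up == n_alpha) & (n_dn == n_beta))

# Toy usage: 4 electrons (2 up, 2 down), Gaussian samples standing in for VMC output.
rng = np.random.default_rng(1)
configs = rng.normal(size=(10_000, 4, 3))
spins = np.array([1, 1, -1, -1])
in_box = lambda r: (np.abs(r) < 1.0).all(axis=-1)  # unit cube as trial volume
print(volume_probability(configs, spins, in_box, n_alpha=1, n_beta=1))
```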


Author(s):  
Markus Kiderlen ◽  
Florian Pausinger

We extend the notion of jittered sampling to arbitrary partitions and study the discrepancy of the related point sets. Let $\boldsymbol{\Omega}=(\Omega_1,\ldots,\Omega_N)$ be a partition of $[0,1]^d$ and let the $i$th point in $\mathcal{P}$ be chosen uniformly in the $i$th set of the partition (and stochastically independently of the other points), $i=1,\ldots,N$. For the study of such sets we introduce the concept of a uniformly distributed triangular array and compare this notion to related notions in the literature. We prove that the expected $\mathcal{L}_p$-discrepancy, $\mathbb{E}\,\mathcal{L}_p(\mathcal{P}_{\boldsymbol{\Omega}})^p$, of a point set $\mathcal{P}_{\boldsymbol{\Omega}}$ generated from any equivolume partition $\boldsymbol{\Omega}$ is always strictly smaller than the expected $\mathcal{L}_p$-discrepancy of a set of $N$ uniform random samples for $p>1$. For fixed $N$ we consider classes of stratified samples based on equivolume partitions of the unit cube into convex sets or into sets with a uniform positive lower bound on their reach. It is shown that these classes contain at least one minimizer of the expected $\mathcal{L}_p$-discrepancy. We illustrate our results with explicit constructions for small $N$. In addition, we present a family of partitions that seems to improve the expected discrepancy of Monte Carlo sampling by a factor of 2 for every $N$.
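For the special case of an equivolume grid partition, jittered sampling is straightforward to implement. A minimal sketch for $N=m^d$ points (an illustrative instance, not the general partitions studied in the paper):

```python
import numpy as np

def jittered_sample(m, d, rng=None):
    """One point drawn uniformly in each cell of the m^d grid partition
    of [0,1]^d -- a stratified alternative to plain Monte Carlo."""
    rng = rng or np.random.default_rng()
    # Lower corners of all m^d axis-aligned cells of side length 1/m.
    corners = np.stack(np.meshgrid(*[np.arange(m)] * d, indexing="ij"),
                       axis=-1).reshape(-1, d) / m
    # Jitter each point uniformly within its own cell.
    return corners + rng.uniform(0, 1 / m, size=corners.shape)

points = jittered_sample(m=4, d=2)     # N = 16 stratified points in [0,1]^2
uniform = np.random.default_rng().uniform(size=points.shape)  # plain MC baseline
```

The grid is the simplest equivolume partition; the paper's results cover far more general convex and positive-reach partitions.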


2012 ◽  
Vol 630 ◽  
pp. 383-388
Author(s):  
Zheng Li ◽  
Xi Cheng Wang

Balancing global exploration against local exploitation has received particular attention in global optimization algorithms. In this paper, an infill sampling criterion named weighting-integral expected improvement is proposed for Kriging-based optimization; it provides high flexibility in balancing the scope of the search. Coupled with this criterion, a strategy is proposed in which, at each iteration, the infill sample point is selected according to the urgency of each search scope. Two mathematical functions and one engineering problem are used to test the method. The numerical experiments show that it is highly efficient at finding global optimum solutions.
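The abstract does not give the exact form of the weighting-integral expected improvement, but the classic weighted EI conveys the same idea of a tunable exploration-exploitation balance. A hedged sketch (the weight `w` and the formula below are the standard weighted EI, not necessarily the authors' criterion):

```python
import numpy as np
from scipy.stats import norm

def weighted_ei(mu, sigma, best, w=0.5):
    """Weighted expected improvement for minimization on a Kriging posterior:
    w near 1 emphasizes local exploitation of the predicted-improvement term,
    w near 0 emphasizes global exploration of the uncertainty term."""
    sigma = np.maximum(sigma, 1e-12)
    z = (best - mu) / sigma
    return w * (best - mu) * norm.cdf(z) + (1 - w) * sigma * norm.pdf(z)
```

Varying `w` across iterations, for instance according to the urgency of each search scope, recovers the flavor of the strategy described above.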


2020 ◽  
Vol 498 (1) ◽  
pp. 181-193
Author(s):  
Luca Amendola ◽  
Adrià Gómez-Valent

We propose a new method, called Monte Carlo Posterior Fit, to boost the Monte Carlo sampling of likelihood (posterior) functions. The idea is to approximate the posterior function by an analytical multidimensional non-Gaussian fit. The many free parameters of this fit can be obtained by a smaller sampling than is needed to derive the full numerical posterior. In the examples that we consider, based on supernovae and cosmic microwave background data, we find that one needs an order of magnitude smaller sampling than in the standard algorithms to achieve comparable precision. This method can be applied to a variety of situations and is expected to significantly improve the performance of the Monte Carlo routines in all the cases in which sampling is very time consuming. Finally, it can also be applied to Fisher matrix forecasts and can help solve various limitations of the standard approach.
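The core idea, replacing a long chain with an analytical fit to a short one, can be sketched in a few lines. The quartic expansion of the log-posterior below is an illustrative assumption, not the authors' parameterization:

```python
import numpy as np

def fit_log_posterior(samples, log_post):
    """Fit log P(theta) ~ c0 + c.theta + theta.C.theta + sum_i c4_i theta_i^4
    to evaluated chain points by linear least squares."""
    n, d = samples.shape
    quad = np.einsum("ni,nj->nij", samples, samples).reshape(n, -1)
    design = np.hstack([np.ones((n, 1)), samples, quad, samples**4])
    coeffs, *_ = np.linalg.lstsq(design, log_post, rcond=None)
    # Return a cheap analytical surrogate for the log-posterior.
    return lambda theta: np.hstack([1.0, theta,
                                    np.outer(theta, theta).ravel(),
                                    theta**4]) @ coeffs

# Toy usage: a short "chain" from a 2D non-Gaussian posterior.
rng = np.random.default_rng(2)
samples = rng.normal(size=(200, 2))
log_post = -0.5 * (samples**2).sum(axis=1) - 0.1 * (samples**4).sum(axis=1)
approx = fit_log_posterior(samples, log_post)
print(approx(np.array([0.3, -0.5])))
```

Once fitted, the analytical surrogate can be evaluated anywhere at negligible cost, which is what allows the sampling budget to shrink by an order of magnitude.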


2019 ◽  
Vol 62 (3) ◽  
pp. 577-586 ◽  
Author(s):  
Garnett P. McMillan ◽  
John B. Cannon

Purpose: This article presents a basic exploration of Bayesian inference to inform researchers unfamiliar with this type of analysis of the many advantages this readily available approach provides. Method: First, we demonstrate the development of Bayes' theorem, the cornerstone of Bayesian statistics, into an iterative process of updating priors. Working with a few assumptions, including normality and conjugacy of the prior distribution, we express how one would calculate the posterior distribution from the prior distribution and the likelihood of the parameter. Next, we move to an example in auditory research by considering the effect of sound therapy for reducing the perceived loudness of tinnitus. In this case, as in most real-world settings, we turn to Markov chain simulations because the assumptions allowing for easy calculations no longer hold. Using Markov chain Monte Carlo methods, we illustrate several analysis solutions given by a straightforward Bayesian approach. Conclusion: Bayesian methods are widely applicable and can help scientists overcome analysis problems, including how to include existing information, run interim analyses, achieve consensus through measurement, and, most importantly, interpret results correctly. Supplemental Material: https://doi.org/10.23641/asha.7822592
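The conjugate normal update described under Method has a closed form. A minimal sketch, assuming a known observation variance and illustrative values for the tinnitus example:

```python
import numpy as np

def normal_posterior(prior_mean, prior_var, data, obs_var):
    """Posterior of a normal mean under a normal prior with known
    observation variance -- the conjugate update Bayes' theorem gives."""
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / obs_var)
    post_mean = post_var * (prior_mean / prior_var + np.sum(data) / obs_var)
    return post_mean, post_var

# Example: vague N(0, 10) prior on a loudness-change effect, 5 observations.
data = np.array([-1.2, -0.8, -1.5, -0.3, -1.0])
print(normal_posterior(prior_mean=0.0, prior_var=10.0, data=data, obs_var=1.0))
```

Because the posterior is again normal, it can serve as the prior for the next batch of data, which is the iterative updating the article walks through before moving to MCMC for the non-conjugate case.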


Imbizo ◽  
2017 ◽  
Vol 7 (1) ◽  
pp. 40-54
Author(s):  
Oyeh O. Otu

This article examines how female conditioning and sexual repression affect the woman's sense of self, womanhood, identity and her place in society. It argues that the woman's body is at the core of the many sites of gender struggles/politics. Accordingly, the woman's body must be decolonised for her to attain true emancipation. On the one hand, this study identifies the grave consequences of sexual repression: how it robs women of their freedom to choose whom to love or marry and the freedom to seek legal redress against sexual abuse and terror, and how it hinders their quest for self-determination. On the other hand, it underscores the need to give women sexual freedom that must be respected and enforced by law for the overall good of society.

