Introducing Elitist Black-Box Models: When Does Elitist Behavior Weaken the Performance of Evolutionary Algorithms?

2017 ◽  
Vol 25 (4) ◽  
pp. 587-606 ◽  
Author(s):  
Carola Doerr ◽  
Johannes Lengler

Black-box complexity theory provides lower bounds for the runtime of black-box optimizers like evolutionary algorithms and other search heuristics and serves as an inspiration for the design of new genetic algorithms. Several black-box models covering different classes of algorithms exist, each highlighting a different aspect of the algorithms under consideration. In this work we add to the existing black-box notions a new elitist black-box model, in which algorithms are required to base all decisions solely on (the relative performance of) a fixed number of the best search points sampled so far. Our elitist model thus combines features of the ranking-based and the memory-restricted black-box models with an enforced usage of truncation selection. We provide several examples for which the elitist black-box complexity is exponentially larger than the respective complexities in all previous black-box models, thus showing that the elitist black-box complexity can be much closer to the runtime of typical evolutionary algorithms. We also introduce the concept of p-Monte Carlo black-box complexity, which measures the time it takes to optimize a problem with failure probability at most p. Even for small p, the p-Monte Carlo black-box complexity of a function class can be smaller by an exponential factor than its typically regarded Las Vegas complexity (which measures the expected time it takes to optimize the class).
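As a non-authoritative illustration of the elitist black-box setting described above, the following Python skeleton keeps only the μ best search points found so far and bases every decision solely on their relative ranking (truncation selection). The mutation operator, the OneMax toy fitness, and the population size μ are placeholder assumptions for the sketch, not constructions from the paper.

```python
import random

def elitist_black_box_search(fitness, mutate, sample, mu=1, budget=10_000):
    """Sketch of a mu-elitist black-box heuristic: all decisions depend only on
    the mu best points sampled so far and on the ranking induced by fitness."""
    memory = sorted((sample() for _ in range(mu)), key=fitness)[-mu:]
    for _ in range(budget):
        parent = random.choice(memory)        # only the stored elites are visible
        child = mutate(parent)
        # Truncation selection: keep the mu best of the old elites plus the child.
        memory = sorted(memory + [child], key=fitness)[-mu:]
    return memory[-1]

# Toy usage on OneMax (number of ones in a bit string), n = 50.
n = 50
fitness = lambda x: sum(x)
sample = lambda: [random.randint(0, 1) for _ in range(n)]
mutate = lambda x: [b ^ (random.random() < 1 / n) for b in x]  # standard bit mutation
best = elitist_black_box_search(fitness, mutate, sample, mu=1, budget=5_000)
print(sum(best), "ones out of", n)
```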

Energies ◽  
2020 ◽  
Vol 13 (24) ◽  
pp. 6749 ◽
Author(s):  
Reda El Bechari ◽  
Stéphane Brisset ◽  
Stéphane Clénet ◽  
Frédéric Guyomarch ◽  
Jean Claude Mipo

Metamodels have proved to be a very efficient strategy for optimizing expensive black-box models, e.g., finite element simulations of electromagnetic devices, since they reduce the computational burden of optimization. However, the conventional approach of using metamodels has limitations, such as the cost of fitting the metamodel and of solving the infill-criterion subproblem. This paper proposes a new algorithm that combines metamodels with a branch and bound (B&B) strategy. Because the efficiency of the B&B algorithm relies on the quality of the bounds, we use the prediction error provided by the metamodels to estimate those bounds. This combination leads to high-fidelity global solutions. We propose a comparison protocol to assess the approach’s performance against that of other algorithms of different categories, and then treat two electromagnetic optimization benchmarks. The paper gives practical insights into algorithms that can be used when optimizing electromagnetic devices.
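A minimal, hedged sketch of the bounding idea described above: a Gaussian-process (kriging) metamodel is fitted to a few expensive samples, and its predictive standard deviation is used to form an optimistic lower bound for a candidate subregion before deciding whether to prune or branch. The objective, the confidence factor kappa, and the box-splitting rule are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expensive_objective(x):                     # stands in for a finite element simulation
    return np.sin(3 * x) + 0.5 * x**2

# Fit a kriging metamodel to a few expensive evaluations.
X = np.linspace(-2.0, 2.0, 8).reshape(-1, 1)
y = expensive_objective(X).ravel()
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)

def region_lower_bound(lo, hi, kappa=2.0, n_grid=50):
    """Optimistic lower bound for min f over [lo, hi]: metamodel prediction minus
    kappa times its standard error, minimized over a coarse grid."""
    grid = np.linspace(lo, hi, n_grid).reshape(-1, 1)
    mu, sigma = gp.predict(grid, return_std=True)
    return float(np.min(mu - kappa * sigma))

# Branch-and-bound style pruning: discard boxes whose bound exceeds the incumbent.
incumbent = float(np.min(y))
for lo, hi in [(-2.0, 0.0), (0.0, 2.0)]:
    bound = region_lower_bound(lo, hi)
    status = "prune" if bound > incumbent else "branch"
    print(f"box [{lo}, {hi}]: lower bound {bound:.3f} -> {status}")
```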


We provide a framework for investment managers to create dynamic pretrade models. The approach helps market participants shed light on vendor black-box models that often do not provide any transparency into the model’s functional form or working mechanics. In addition, it allows portfolio managers to create consensus estimates based on their own expectations, such as forecasted liquidity and volatility, and to incorporate firm proprietary alpha estimates into the solution. These techniques allow managers to reduce over-reliance on any one black-box model, incorporate costs into the stock selection and portfolio optimization phases of the investment cycle, and perform “what-if” and sensitivity analyses without the risk of information leakage to any outside party or vendor.
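As a hedged illustration of the kind of transparent pre-trade model the framework above aims at, the sketch below fits a simple parametric market-impact curve (a power-law in participation rate plus a spread term) to a vendor's cost estimates, then re-evaluates it under the manager's own liquidity and volatility forecasts for what-if analysis. The functional form, coefficients, and data are assumptions for illustration only, not the vendor's or the authors' model.

```python
import numpy as np
from scipy.optimize import curve_fit

def pretrade_cost(X, a1, a2, a3):
    """Parametric pre-trade cost model (basis points):
    cost = a1 * volatility * participation_rate**a2 + a3 * spread."""
    pov, vol, spread = X
    return a1 * vol * pov**a2 + a3 * spread

# Hypothetical vendor estimates: participation rate, annualized vol, spread (bps), cost (bps).
pov    = np.array([0.01, 0.05, 0.10, 0.20, 0.30])
vol    = np.array([0.20, 0.25, 0.30, 0.25, 0.35])
spread = np.array([3.0, 4.0, 5.0, 4.0, 6.0])
vendor_cost = np.array([4.1, 9.5, 16.2, 21.0, 34.8])

# Calibrate the transparent model to reproduce the vendor's black-box outputs.
params, _ = curve_fit(pretrade_cost, (pov, vol, spread), vendor_cost, p0=[100.0, 0.5, 0.5])
a1, a2, a3 = params
print(f"calibrated: a1={a1:.1f}, a2={a2:.2f}, a3={a3:.2f}")

# What-if analysis using the manager's own forecasts instead of vendor inputs.
my_cost = pretrade_cost((0.15, 0.28, 4.5), a1, a2, a3)
print(f"expected cost at 15% participation under own forecasts: {my_cost:.1f} bps")
```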


Author(s):  
Kacper Sokol ◽  
Peter Flach

Understanding data, models and predictions is important for machine learning applications. Due to the limitations of our spatial perception and intuition, analysing high-dimensional data is inherently difficult. Furthermore, black-box models achieving high predictive accuracy are widely used, yet the logic behind their predictions is often opaque. Textualisation, a natural language narrative of selected phenomena, can tackle these shortcomings. When extended with argumentation theory, such narratives could allow machine learning models and predictions to argue persuasively for their choices.
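A minimal sketch of the textualisation idea: turning a prediction's per-feature contributions into a short natural-language narrative. The linear model, the sentence templates, and the dataset are placeholder assumptions; this is not the system proposed by the authors.

```python
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge

data = load_diabetes()
model = Ridge(alpha=1.0).fit(data.data, data.target)

def textualise(x, top_k=3):
    """Produce a short narrative for one prediction from per-feature contributions."""
    contributions = model.coef_ * x                      # signed contribution of each feature
    ranked = sorted(zip(data.feature_names, contributions),
                    key=lambda fc: abs(fc[1]), reverse=True)[:top_k]
    prediction = model.predict(x.reshape(1, -1))[0]
    phrases = [f"{name} {'raises' if c > 0 else 'lowers'} the estimate by {abs(c):.1f}"
               for name, c in ranked]
    return (f"The model predicts {prediction:.1f}. "
            f"The strongest factors: " + "; ".join(phrases) + ".")

print(textualise(data.data[0]))
```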


Author(s):  
Marjan Popov ◽  
Bjørn Gustavsen ◽  
Juan A. Martinez-Velasco

Voltage surges arising from transient events, such as switching operations or lightning discharges, are one of the main causes of transformer winding failure. The voltage distribution along a transformer winding depends greatly on the waveshape of the voltage applied to the winding. This distribution is not uniform in the case of steep-fronted transients since a large portion of the applied voltage is usually concentrated on the first few turns of the winding. High frequency electromagnetic transients in transformers can be studied using internal models (i.e., models for analyzing the propagation and distribution of the incident impulse along the transformer windings), and black-box models (i.e., models for analyzing the response of the transformer from its terminals and for calculating voltage transfer). This chapter presents a summary of the most common models developed for analyzing the behaviour of transformers subjected to steep-fronted waves and a description of procedures for determining the parameters to be specified in those models. The main section details some test studies based on actual transformers in which models are validated by comparing simulation results to laboratory measurements.
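As a hedged illustration of why steep-fronted surges concentrate on the first turns (the non-uniform distribution noted above), the snippet below evaluates the classical initial (capacitive) voltage distribution of a uniform winding with grounded neutral, u(x) = U · sinh(α(1 − x/L)) / sinh(α) with α = sqrt(Cg/Cs), where Cg and Cs are the total ground and series capacitances. The numerical values of α are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

def initial_distribution(x_norm, alpha, U=1.0):
    """Classical initial voltage distribution of a uniform winding with grounded
    neutral: u(x) = U * sinh(alpha * (1 - x)) / sinh(alpha), where x is the
    normalized distance from the line terminal and alpha = sqrt(Cg / Cs)."""
    return U * np.sinh(alpha * (1.0 - x_norm)) / np.sinh(alpha)

x = np.linspace(0.0, 1.0, 11)
for alpha in (1.0, 5.0, 10.0):          # larger alpha -> steeper, more non-uniform
    u = initial_distribution(x, alpha)
    drop_first_10pct = 1.0 - u[1]       # voltage appearing across the first 10% of turns
    print(f"alpha={alpha:4.1f}: {100 * drop_first_10pct:.0f}% of the impulse "
          f"appears across the first 10% of the winding")
```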


Author(s):  
Evren Daglarli

Today, the effects of promising technologies such as explainable artificial intelligence (xAI) and meta-learning (ML) on the internet of things (IoT) and cyber-physical systems (CPS), which are important components of Industry 4.0, are becoming increasingly pronounced. However, current deep learning models still have important shortcomings. These artificial neural network based models are black-box models that generalize the data transmitted to them and learn from that data; as a result, the relational link between input and output is not observable. For these reasons, serious efforts are needed on the explainability and interpretability of black-box models. In the near future, the integration of explainable artificial intelligence and meta-learning approaches into cyber-physical systems will affect a high level of virtualization and simulation infrastructure, real-time supply chains, cyber factories with smart machines communicating over the internet, the maximization of production efficiency, and the analysis of service quality and competition levels.


Processes ◽  
2020 ◽  
Vol 8 (7) ◽  
pp. 749 ◽  
Author(s):  
Jorge E. Jiménez-Hornero ◽  
Inés María Santos-Dueñas ◽  
Isidoro García-García

Modelling techniques allow certain processes to be characterized and optimized without the need for experimentation. One of the crucial steps in vinegar production is the biotransformation of ethanol into acetic acid by acetic bacteria. This step has been extensively studied using two types of predictive model: first-principles models and black-box models. The fact that first-principles models are less accurate than black-box models under extreme bacterial growth conditions suggests that the kinetic equations used by the former, and hence their goodness of fit, can be further improved. By contrast, black-box models predict acetic acid production accurately enough under virtually any operating conditions. In this work, we trained black-box models based on Artificial Neural Networks (ANNs) of the multilayer perceptron (MLP) type, containing a single hidden layer, to model acetification. The small amount of data typically available for a bioprocess makes it rather difficult to identify the most suitable ANN architecture in terms of indices such as the mean square error (MSE). This places ANN methodology at a disadvantage against alternative techniques and, especially, polynomial modelling.
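A minimal, hedged sketch of the modelling setup described above: a single-hidden-layer MLP regressor evaluated by cross-validated mean square error on a small synthetic dataset, which also illustrates how noisy the MSE estimate becomes with few samples. The feature set, hidden-layer size, and data are illustrative assumptions, not the paper's acetification data or final architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for acetification data: inputs could be ethanol concentration,
# temperature, and loading rate; the output is an acetic acid production rate.
X = rng.uniform([2.0, 28.0, 0.1], [10.0, 34.0, 1.0], size=(40, 3))
y = 0.8 * X[:, 0] - 0.05 * (X[:, 1] - 31.0) ** 2 + 2.0 * X[:, 2] + rng.normal(0, 0.3, 40)

# Single hidden layer MLP, as in the abstract; the number of neurons is a guess.
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))

# Cross-validated mean square error: with only 40 samples the per-fold spread is large,
# which illustrates the architecture-selection difficulty mentioned above.
mse = -cross_val_score(model, X, y, cv=5, scoring="neg_mean_squared_error")
print(f"MSE per fold: {np.round(mse, 3)}  mean: {mse.mean():.3f}")
```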

