Self-Directed Online Machine Learning for Topology Optimization

Author(s):  
Changyu Deng ◽  
Yizhou Wang ◽  
Can Qin ◽  
Wei Lu

Abstract: Topology optimization, which optimally distributes material within a given domain, requires gradient-free optimizers to solve highly complicated problems. However, with hundreds of design variables or more involved, solving such problems would require millions of Finite Element Method (FEM) calculations, a computational cost that is huge and impractical. Here we report Self-directed Online Learning Optimization (SOLO), which integrates a Deep Neural Network (DNN) with FEM calculations. The DNN learns and substitutes for the objective as a function of the design variables. A small amount of training data is generated dynamically based on the DNN's prediction of the global optimum; the DNN adapts to the new training data and gives better predictions in the region of interest until convergence. The algorithm was tested on compliance-minimization and fluid-structure optimization problems. It reduced computational time by two to five orders of magnitude compared with directly using heuristic methods and outperformed all state-of-the-art algorithms tested in our experiments. This approach enables the solution of large, multi-dimensional optimization problems.
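The self-directed loop described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: a one-nearest-neighbour lookup stands in for the DNN surrogate, and a toy quadratic stands in for the expensive FEM objective, so all names, constants, and the objective itself are assumptions.

```python
import random

def expensive_objective(x):        # stands in for one FEM calculation
    return (x - 0.3) ** 2

def surrogate_predict(x, data):    # 1-nearest-neighbour "surrogate" (toy DNN)
    return min(data, key=lambda p: abs(p[0] - x))[1]

def solo(iterations=30, pool=200, seed=0):
    rng = random.Random(seed)
    # a few initial expensive evaluations to seed the surrogate
    data = [(x, expensive_objective(x)) for x in (0.0, 0.5, 1.0)]
    for _ in range(iterations):
        # optimize the cheap surrogate over a random candidate pool
        candidates = [rng.random() for _ in range(pool)]
        best = min(candidates, key=lambda x: surrogate_predict(x, data))
        # evaluate the true objective near the predicted optimum and add it
        # to the training set (the "self-directed" sampling step)
        x_new = min(max(best + rng.gauss(0, 0.05), 0.0), 1.0)
        data.append((x_new, expensive_objective(x_new)))
    return min(data, key=lambda p: p[1])[0]
```

The key point is that expensive evaluations are spent only where the surrogate predicts the optimum to be, rather than on a dense design-of-experiments sweep.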

2020 ◽  
Vol 10 (11) ◽  
pp. 3691
Author(s):  
Katarzyna Tajs-Zielińska ◽  
Bogdan Bochenek

This paper focuses on the development of a Cellular Automata algorithm with a refined mesh adaptation technique and its implementation in topology optimization problems. Traditionally, a Cellular Automaton is created by regular discretization of the design domain into a lattice of cells whose states are updated by applying simple local rules. During the topology optimization process, these local rules, which evaluate the cell states, are expected to drive the solution toward solid/void structures. In the proposed approach, the finite elements are equivalent to the cells of an automaton, and the states of the cells are represented by design variables. When optimizing engineering structural elements, an important issue is to obtain well-defined solutions, in particular topologies with smooth boundaries. The quality of the structural topology boundaries depends on the resolution of the mesh discretization: the greater the number of elements in the mesh, the better the representation of the optimized structure. However, the use of fine meshes implies a high computational cost. We therefore propose an adaptive way to refine the mesh. This allows the number of design variables to be reduced without losing accuracy and without the excessive increase in element count caused by using a fine mesh over the whole structure. In particular, it is not necessary to cover void regions with a very fine mesh; a fine grid is mainly needed in the so-called grey regions, where it has to be decided whether a cell becomes solid or void. Besides yielding high-resolution, sharply resolved optimal topologies at a relatively low computational cost, the proposed approach eliminates the checkerboard effect, mesh dependency, and the so-called grey areas without any additional filtering. Moreover, the presented algorithm is versatile, which allows its easy combination with any structural analysis solver built on the finite element method.
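A Cellular Automaton update of the kind described above can be illustrated with a toy rule: each cell (finite element) holds a density, and material is added where the cell's strain energy exceeds that of its von Neumann neighbourhood, removed where it falls below. The rule and constants are illustrative assumptions, not the paper's exact scheme.

```python
def ca_step(density, energy, move=0.1):
    """One Cellular Automaton sweep over a rectangular grid of cells."""
    n, m = len(density), len(density[0])
    new = [row[:] for row in density]
    for i in range(n):
        for j in range(m):
            # von Neumann neighbourhood, clipped at the domain boundary
            nbrs = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
            vals = [energy[a][b] for a, b in nbrs if 0 <= a < n and 0 <= b < m]
            avg = sum(vals) / len(vals)
            # local rule: add material where the cell works harder than
            # its neighbourhood, remove it where it works less
            delta = move if energy[i][j] > avg else -move
            new[i][j] = min(1.0, max(0.0, density[i][j] + delta))
    return new
```

Repeated sweeps of such a rule, interleaved with FEM re-analysis to refresh the energy field, drive densities toward 0 (void) or 1 (solid).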


Mathematics ◽  
2021 ◽  
Vol 10 (1) ◽  
pp. 19
Author(s):  
Saúl Zapotecas-Martínez ◽  
Abel García-Nájera ◽  
Adriana Menchaca-Méndez

One of the major limitations of evolutionary algorithms based on the Lebesgue measure for multi-objective optimization is the computational cost required to approximate the Pareto front of a problem. Nonetheless, the Pareto-compliance property of the Lebesgue measure makes it one of the most investigated indicators in the design of indicator-based evolutionary algorithms (IBEAs). The main deficiency of IBEAs that use the Lebesgue measure is their computational cost, which increases with the number of objectives. In this context, this paper introduces an evolutionary algorithm based on the Lebesgue measure for box-constrained continuous multi-objective optimization problems. The proposed algorithm implicitly exploits the regularity property of continuous multi-objective optimization problems, which has been shown to be effective when solving continuous problems with rough Pareto sets. In addition, the survival-selection mechanism considers the local property of the Lebesgue measure, reducing the computational time of our algorithmic approach. The resulting indicator-based evolutionary algorithm is compared against three state-of-the-art multi-objective evolutionary algorithms based on the Lebesgue measure. We validate its performance on a set of artificial test problems with various characteristics, including multimodality, separability, and Pareto fronts that are concave, convex, or discontinuous. For a more exhaustive study, the proposed algorithm is also evaluated on three real-world applications with four, five, and seven objective functions whose properties are unknown. We show the high competitiveness of our approach, which in many cases improved on the state-of-the-art indicator-based evolutionary algorithms for the multi-objective problems adopted in our investigation.
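For intuition on the indicator itself: in two objectives (minimization), the Lebesgue measure, better known as the hypervolume, is the area dominated by the front up to a reference point, and it can be computed by a simple sweep. This is a sketch of the two-objective case only; the cost the abstract refers to comes from the n-dimensional generalisation, which grows quickly with the number of objectives.

```python
def hypervolume_2d(front, ref):
    """Lebesgue measure (hypervolume) of a 2-objective front, minimization.

    Sort by the first objective and accumulate the rectangle each
    non-dominated point adds below the reference point `ref`.
    """
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(front):
        if f2 < prev_f2:                       # skip dominated points
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv
```

Because the measure is Pareto-compliant, any front that dominates another always scores a strictly larger hypervolume, which is why it is so widely used for survival selection in IBEAs.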


Author(s):  
Mitsuo Yoshimura ◽  
Koji Shimoyama ◽  
Takashi Misaka ◽  
Shigeru Obayashi

This paper proposes a novel approach to fluid topology optimization using a genetic algorithm. The study considers the enhancement of mixing in passive micromixers. Efficient mixing is achieved by grooves attached to the bottom of the microchannel, and the optimal configuration of these grooves is investigated. The grooves are represented using graph theory. The micromixers are analyzed by a CFD solver, and the exploration by the genetic algorithm is assisted by a Kriging model to reduce the computational cost. Three cases with different constraints and treatments of the design variables are considered. In each case, the GA found several local optima, since the objective function is multi-modal, and each local optimum revealed a specific characteristic of efficient mixing in micromixers. Moreover, we discuss the validity of the constraints in the optimization problems. The results provide novel insight into micromixer design and fluid topology optimization using genetic algorithms.
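The graph-based encoding can be sketched as follows. This is a hypothetical illustration only: GA individuals are bitstrings over a fixed list of candidate groove segments (edges of a grid graph), and decoding keeps the segments whose bit is set. The segment list and the toy fitness are assumptions standing in for the CFD-evaluated mixing objective in the paper.

```python
import random

# candidate groove segments as graph edges on the channel floor (assumed)
SEGMENTS = [((0, 0), (1, 0)), ((1, 0), (1, 1)),
            ((1, 1), (2, 1)), ((2, 1), (2, 2))]

def decode(genome):
    """Map a bitstring genome to the groove segments it switches on."""
    return [seg for bit, seg in zip(genome, SEGMENTS) if bit]

def fitness(genome):
    # toy stand-in for the CFD mixing score: longer patterns score higher
    return len(decode(genome))

def ga(pop_size=8, gens=20, seed=3):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in SEGMENTS] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                    # elitist selection
        children = [[1 - b if rng.random() < 0.2 else b for b in p]
                    for p in parents]                     # bit-flip mutation
        pop = parents + children
    return max(pop, key=fitness)
```

In the paper's setting, each fitness evaluation is a CFD run, which is exactly why the Kriging model is inserted between the GA and the solver.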


Symmetry ◽  
2021 ◽  
Vol 13 (3) ◽  
pp. 511
Author(s):  
Syed Mohammad Minhaz Hossain ◽  
Kaushik Deb ◽  
Pranab Kumar Dhar ◽  
Takeshi Koshiba

Proper plant leaf disease (PLD) detection is challenging in complex backgrounds and under different capture conditions. For this reason, modified adaptive centroid-based segmentation (ACS) is first used to trace the proper region of interest (ROI). Automatic initialization of the number of clusters (K) using modified ACS before recognition increases the scalability of ROI tracing, even for symmetrical features across various plants. Convolutional neural network (CNN)-based PLD recognition models achieve adequate accuracy to some extent; however, their memory requirements (large numbers of parameters) and high computational cost are pressing issues for memory-restricted mobile and IoT devices. Therefore, after tracing the ROIs, three proposed depth-wise separable convolutional PLD (DSCPLD) models, namely segmented modified DSCPLD (S-modified MobileNet), segmented reduced DSCPLD (S-reduced MobileNet), and segmented extended DSCPLD (S-extended MobileNet), are used to provide a constructive trade-off among accuracy, model size, and computational latency. We compare our proposed DSCPLD recognition models with state-of-the-art models such as MobileNet, VGG16, VGG19, and AlexNet. Among the segmentation-based DSCPLD models, S-modified MobileNet achieves the best accuracy of 99.55% and F1-score of 97.07%. We also evaluate our DSCPLD models on both full and segmented plant leaf images and conclude that, after applying modified ACS, all models increase their accuracy and F1-score. Furthermore, a new plant leaf dataset containing 6580 images of eight plants was used to experiment with several depth-wise separable convolution models.
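The model-size advantage of depth-wise separable convolutions, the reason these models suit memory-restricted devices, comes out of a back-of-the-envelope parameter count: a standard k x k convolution is replaced by a per-channel k x k depth-wise convolution plus a 1 x 1 point-wise convolution. The layer sizes below are illustrative, and bias terms are ignored.

```python
def conv_params(c_in, c_out, k):
    # standard convolution: every output channel mixes all input channels
    return k * k * c_in * c_out

def dsc_params(c_in, c_out, k):
    # depth-wise (k*k per input channel) + point-wise 1x1 (c_in*c_out)
    return k * k * c_in + c_in * c_out

standard = conv_params(128, 256, 3)    # 294912 weights
separable = dsc_params(128, 256, 3)    # 33920 weights, roughly 8.7x fewer
```

The same factor applies to multiply-accumulate operations per pixel, which is where the latency savings come from.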


2021 ◽  
Vol 12 (4) ◽  
pp. 98-116
Author(s):  
Noureddine Boukhari ◽  
Fatima Debbat ◽  
Nicolas Monmarché ◽  
Mohamed Slimane

Evolution strategies (ES) are a family of powerful stochastic methods for global optimization and have proved more capable of avoiding local optima than many other optimization methods. Many researchers have investigated different versions of the original evolution strategy, with good results on a variety of optimization problems. However, the algorithm's convergence to the global optimum remains only asymptotic. To accelerate convergence, a hybrid approach is proposed that combines the nonlinear simplex method (Nelder-Mead) with an adaptive scheme to control when the local search is applied, and the authors demonstrate that this combination yields significantly better convergence. The proposed method has been tested on 15 complex benchmark functions, applied to the bi-objective portfolio optimization problem, and compared with other state-of-the-art techniques. Experimental results show that this hybridization improves performance in terms of solution quality and convergence strength.
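The hybrid idea can be sketched on a toy problem: a (1+1)-evolution strategy performs the global search, and a local polish is triggered adaptively when progress stalls. A simple coordinate search stands in here for the Nelder-Mead simplex used in the paper; the stall trigger and all constants are illustrative assumptions.

```python
import random

def sphere(x):
    return sum(v * v for v in x)

def local_search(x, f, step=0.5, iters=100):
    """Coordinate search stand-in for Nelder-Mead: try +/-step per axis."""
    x = list(x)
    for _ in range(iters):
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = x[:]
                y[i] += d
                if f(y) < f(x):
                    x, improved = y, True
        if not improved:
            step /= 2                    # refine when no move helps
    return x

def hybrid_es(f, dim=3, gens=200, seed=1):
    rng = random.Random(seed)
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    sigma, stall = 1.0, 0
    for _ in range(gens):
        y = [v + rng.gauss(0, sigma) for v in x]   # (1+1)-ES mutation
        if f(y) < f(x):
            x, stall = y, 0
        else:
            stall += 1
            sigma *= 0.99                # shrink step on failure
        if stall >= 20:                  # adaptive trigger for local search
            x, stall = local_search(x, f), 0
    return local_search(x, f)            # final polish
```

The adaptive trigger is the point of the hybridization: the expensive local search runs only when the ES stops making progress, rather than at every generation.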


2021 ◽  
Author(s):  
Zuanjia Xie ◽  
Chunliang Zhang ◽  
Haibin Ouyang ◽  
Steven Li ◽  
Liqun Gao

Abstract: The Jaya algorithm is an advanced optimization algorithm that has been applied to many real-world optimization problems and performs well in several optimization fields. However, its exploration capability is limited. To enhance the exploration capability of the Jaya algorithm, this paper presents a self-adaptive commensal-learning-based Jaya algorithm with multiple populations (Jaya-SCLMP). In Jaya-SCLMP, a commensal learning strategy is used to increase the probability of finding the global optimum, in which each individual's historical best and worst information is used to explore new solution areas. Moreover, a multi-population strategy based on a Gaussian distribution scheme and a learning dictionary is used to enhance exploration: every sub-population employs three Gaussian distributions at each generation, and roulette-wheel selection chooses a scheme based on the learning dictionary. The performance of Jaya-SCLMP is evaluated on the 28 CEC 2013 unconstrained benchmark problems. In addition, three reliability problems are selected: a complex (bridge) system, a series system, and a series-parallel system. Compared with several Jaya variants and several other state-of-the-art algorithms, the experimental results reveal that Jaya-SCLMP is effective.
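For context, the base Jaya update that Jaya-SCLMP builds on can be sketched in a few lines (minimization form): each candidate moves toward the current best solution and away from the current worst, with no algorithm-specific tuning parameters, and a greedy selection keeps only improving moves. The quadratic test function in the usage below is an illustrative assumption.

```python
import random

def jaya_step(pop, f, rng):
    """One generation of the basic Jaya update with greedy selection."""
    best = min(pop, key=f)
    worst = max(pop, key=f)
    new_pop = []
    for x in pop:
        # x' = x + r1*(best - |x|) - r2*(worst - |x|), the Jaya rule
        cand = [xi + rng.random() * (bi - abs(xi))
                   - rng.random() * (wi - abs(xi))
                for xi, bi, wi in zip(x, best, worst)]
        new_pop.append(cand if f(cand) < f(x) else x)   # greedy selection
    return new_pop
```

The greedy selection makes each individual's objective value monotonically non-increasing, which is the behaviour the SCLMP extensions then balance against stronger exploration.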


2012 ◽  
Vol 20 (3) ◽  
pp. 453-472 ◽  
Author(s):  
Alexandre Devert ◽  
Thomas Weise ◽  
Ke Tang

This paper presents a comparative study of two indirect solution representations, a generative one and an ontogenic one, on a set of well-known 2D truss design problems. The generative representation encodes the parameters of a truss design as a mapping from a 2D space. The ontogenic representation encodes truss design parameters as a local truss transformation iterated several times, starting from a trivial initial truss. Both representations are tested with a naive evolution-strategy-based optimization scheme as well as the state-of-the-art HyperNEAT approach. We focus both on the best objective value obtained and on the computational cost of reaching a given level of optimality. The study shows that the two solution representations behave very differently. For experimental settings of equal complexity, with the same optimization scheme and settings, the generative representation produces results that are far from optimal, whereas the ontogenic representation delivers near-optimal solutions. The ontogenic representation is also much less computationally expensive than a direct representation until very close to the global optimum. The study questions the scalability of generative representations, while the results for the ontogenic representation display much better scalability.


Author(s):  
Ali Al-Alili ◽  
Yunho Hwang ◽  
Reinhard Radermacher

For solar air conditioners (A/Cs) to become a real alternative to conventional systems, their performance and total cost have to be optimized. In this study, an innovative hybrid solar A/C was simulated using the transient systems simulation (TRNSYS) program, coupled with MATLAB to carry out the optimization study. Two optimization problems were formulated with the following design variables: collector area, collector mass flow rate, storage tank volume, and number of batteries. A Genetic Algorithm (GA) was selected to find the globally optimal design for the lowest electrical consumption. To optimize the two objective functions simultaneously, a Multi-Objective Genetic Algorithm (MOGA) was used to find the Pareto front within the design variables' bounds while satisfying the constraints. The optimized design was also compared to a standard vapor-compression cycle. The results show that coupling TRNSYS with MATLAB expands TRNSYS's optimization capability for solving more complicated optimization problems.


2020 ◽  
Author(s):  
Hossein Foroozand ◽  
Steven V. Weijs

Machine learning is a fast-growing branch of data-driven modelling whose main objective is to use computational methods to predict outcomes more accurately without being explicitly programmed. In this field, one way to improve model predictions is to use a large collection of models (an ensemble) instead of a single one. Each model is trained on a slightly different sample of the original data, and their predictions are averaged. This is called bootstrap aggregating, or bagging, and is widely applied. A recurring question in previous work is how to choose the ensemble size of training data sets for tuning the weights in machine learning. The computational cost of ensemble-based methods scales with the size of the ensemble, but excessively reducing the ensemble size comes at the cost of reduced predictive performance. The choice of ensemble size has often been determined by the size of the input data and the available computational power, which can become a limiting factor for larger datasets and the training of complex models. Our hypothesis in this research is that if an ensemble of artificial neural network (ANN) models, or any other machine learning technique, uses only the most informative ensemble members for training rather than all bootstrapped ensemble members, it can reduce computational time substantially without negatively affecting simulation performance.
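The bagging baseline described above can be sketched minimally: draw bootstrap resamples of the training data, fit one weak model per resample, and average the predictions. The informative-member selection hypothesized in the abstract is not reproduced here; this is the baseline whose cost scales with the ensemble size, and the "model" is deliberately trivial (it predicts the sample mean).

```python
import random

def fit_mean(sample):
    # trivially weak "model": predicts the mean target of its sample
    return sum(y for _, y in sample) / len(sample)

def bagging_predict(data, n_models, rng):
    """Bootstrap-aggregated prediction from n_models weak learners."""
    preds = []
    for _ in range(n_models):
        boot = [rng.choice(data) for _ in data]   # bootstrap resample
        preds.append(fit_mean(boot))
    return sum(preds) / len(preds)                # aggregate by averaging
```

Every extra ensemble member costs one more fit, which is the trade-off the abstract targets: keep only the members that actually change the averaged prediction.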


2017 ◽  
Vol 89 (4) ◽  
pp. 609-619 ◽  
Author(s):  
Witold Artur Klimczyk ◽  
Zdobyslaw Jan Goraj

Purpose: This paper aims to address the issue of designing an aerodynamically robust empennage. Aircraft design optimization, often narrowed to the analysis of cruise conditions, does not take other flight phases (manoeuvres) into account. These phases, especially in the unmanned air vehicle sector, can form a significant part of the whole flight. The empennage is a part of the aircraft with a crucial function in manoeuvres, so it is important to consider robustness for the highest performance.

Design/methodology/approach: A methodology for robust wing design is presented. Surrogate modelling using kriging is used to reduce the optimization cost of high-fidelity aerodynamic calculations. An analysis of varying flight conditions (angle of attack) is made to assess the robustness of a design for a particular mission. Two cases are compared: global optimization of 11 parameters and optimization divided into two consecutive sub-optimizations.

Findings: Surrogate modelling proves its usefulness for cutting computational time. The optimum design found by splitting the problem into sub-optimizations is better and is obtained at a lower computational cost.

Practical implications: It is demonstrated how surrogate modelling can be used for the analysis of robustness and why robustness is important to consider. The intuitive split of wing design into airfoil and planform sub-optimizations brings promising savings in optimization cost.

Originality/value: The methodology presented in this paper can be used in various optimization problems, especially those involving expensive computations and requiring top-quality design.
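The robustness idea behind the paper can be sketched on a toy model: score a design over a range of angles of attack and optimize the worst case, instead of a single cruise point. The one-parameter drag model below is a made-up stand-in for the high-fidelity CFD that the kriging surrogate replaces, so the parameter, the formula, and the manoeuvre envelope are all illustrative assumptions.

```python
def drag(c, alpha):
    # c is a camber-like design parameter (illustrative assumption)
    return (alpha - c) ** 2 + 0.1 * c ** 2

def robust_score(c, alphas):
    return max(drag(c, a) for a in alphas)   # worst case over the mission

grid = [d / 10 for d in range(61)]           # coarse sweep of the design
alphas = [0.0, 2.0, 4.0, 6.0]                # manoeuvre envelope (assumed)

cruise_best = min(grid, key=lambda c: drag(c, 3.0))        # cruise-only
robust_best = min(grid, key=lambda c: robust_score(c, alphas))
```

Even on this toy model the two criteria pick different designs, which is the point of the robustness analysis: the cruise-only optimum can be poor at the edges of the envelope.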

