Bayesian Network Structure Optimization for Improved Design Space Mapping for Design Exploration With Materials Design Applications

Author(s):  
Conner Sharpe ◽  
Clinton Morris ◽  
Benjamin Goldsberry ◽  
Carolyn Conner Seepersad ◽  
Michael R. Haberman

Modern design problems present both opportunities and challenges, including multifunctionality, high dimensionality, highly nonlinear multimodal responses, and multiple levels or scales. These factors are particularly important in materials design problems and make it difficult for traditional optimization algorithms to search the space effectively, and designer intuition is often insufficient in problems of this complexity. Efficient machine learning algorithms can map complex design spaces to help designers quickly identify promising regions of the design space. In particular, Bayesian network classifiers (BNCs) have been demonstrated as effective tools for top-down design of complex multilevel problems. The most common instantiations of BNCs assume that all design variables are independent. This assumption reduces computational cost, but can limit accuracy especially in engineering problems with interacting factors. The ability to learn representative network structures from data could provide accurate maps of the design space with limited computational expense. Population-based stochastic optimization techniques such as genetic algorithms (GAs) are ideal for optimizing networks because they accommodate discrete, combinatorial, and multimodal problems. Our approach utilizes GAs to identify optimal networks based on limited training sets so that future test points can be classified as accurately and efficiently as possible. This method is first tested on a common machine learning data set, and then demonstrated on a sample design problem of a composite material subjected to a planar sound wave.
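The structure search described above can be sketched as a simple genetic algorithm over candidate edge sets. This is a minimal illustration, not the authors' implementation: the variable count, the bitstring encoding of edges, and especially the stand-in `fitness` function (which in practice would be the cross-validated accuracy of the Bayesian network classifier with the selected structure) are all assumptions:

```python
import random

random.seed(0)

N_VARS = 5                      # design variables (network nodes)
# candidate directed edges among variables (the class node is assumed to
# point to every variable, as in a naive Bayes skeleton)
EDGES = [(i, j) for i in range(N_VARS) for j in range(N_VARS) if i < j]

def fitness(bits):
    """Stand-in for the expensive step: in the paper's setting this would
    be the cross-validated classification accuracy of a Bayesian network
    classifier whose structure contains the selected edges.  Here a toy
    score rewards one hypothetical target edge set."""
    target = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
    return sum(b == t for b, t in zip(bits, target))

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.1):
    return [b ^ (random.random() < rate) for b in bits]

def ga(pop_size=30, gens=40):
    pop = [[random.randint(0, 1) for _ in EDGES] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]              # truncation selection
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)

best = ga()
print(best, fitness(best))
```

GAs fit this problem because an edge set is naturally a discrete, combinatorial object, and crossover recombines promising sub-structures without requiring gradients.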

Entropy ◽  
2020 ◽  
Vol 22 (10) ◽  
pp. 1075
Author(s):  
Nan Chen

Predicting complex nonlinear turbulent dynamical systems is an important and practical topic. However, due to the lack of a complete understanding of nature, the ubiquitous model error may greatly affect the prediction performance. Machine learning algorithms can overcome the model error, but they are often impeded by inadequate and partial observations in predicting nature. In this article, an efficient and dynamically consistent conditional sampling algorithm is developed, which incorporates the conditional path-wise temporal dependence into a two-step forward-backward data assimilation procedure to sample multiple distinct nonlinear time series conditioned on short and partial observations using an imperfect model. The resulting sampled trajectories succeed in reducing the model error and greatly enrich the training data set for machine learning forecasts. For a rich class of nonlinear and non-Gaussian systems, the conditional sampling is carried out by solving a simple stochastic differential equation, which is computationally efficient and accurate. The sampling algorithm is applied to create massive training data of multiscale compressible shallow water flows from highly nonlinear and indirect observations. The resulting machine learning prediction significantly outperforms the imperfect model forecast. The sampling algorithm also facilitates the machine learning forecast of a highly non-Gaussian climate phenomenon using extremely short observations.
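For the class of systems considered, the conditional sampling step reduces to solving a simple stochastic differential equation. Below is a minimal Euler-Maruyama sketch using a fixed Ornstein-Uhlenbeck process as a stand-in for the paper's conditionally parameterized SDE; all coefficients here are illustrative assumptions:

```python
import math, random

random.seed(1)

def sample_ou_paths(n_paths=50, T=2.0, dt=0.01, a=1.0, mu=0.5, sigma=0.4, x0=0.0):
    """Euler-Maruyama sampling of an Ornstein-Uhlenbeck SDE
        dx = -a (x - mu) dt + sigma dW,
    a minimal stand-in for the simple SDE the authors solve: in their
    setting the drift and noise coefficients are conditioned on the
    observed variables, whereas here they are fixed constants."""
    n_steps = int(T / dt)
    paths = []
    for _ in range(n_paths):
        x, traj = x0, [x0]
        for _ in range(n_steps):
            dw = random.gauss(0.0, math.sqrt(dt))   # Brownian increment
            x = x + (-a * (x - mu)) * dt + sigma * dw
            traj.append(x)
        paths.append(traj)
    return paths

paths = sample_ou_paths()
final = [p[-1] for p in paths]
mean_final = sum(final) / len(final)
print(round(mean_final, 2))   # drifts toward the long-run mean mu as T grows
```

Each call yields an ensemble of distinct trajectories consistent with the same dynamics, which is the sense in which sampling "greatly enriches" the training data.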


Author(s):  
Opeoluwa Owoyele ◽  
Pinaki Pal

Abstract In this work, a novel design optimization technique based on active learning, which involves dynamic exploration and exploitation of the design space of interest using an ensemble of machine learning algorithms, is presented. In this approach, a hybrid methodology is employed, incorporating an explorative weak learner (regularized basis function model) that fits high-level information about the response surface and an exploitative strong learner (based on a committee machine) that fits finer details around promising regions identified by the weak learner. For each design iteration, an aristocratic approach is used to select a set of nominees, where points that meet a threshold merit value as predicted by the weak learner are selected for expensive function evaluations. In addition to these points, the global optimum as predicted by the strong learner is also evaluated to enable rapid convergence to the actual global optimum once the most promising region has been identified by the optimizer. This methodology is first tested by applying it to the optimization of a two-dimensional multi-modal surface. The performance of the new active learning approach is compared with traditional global optimization methods, namely the micro-genetic algorithm (μGA) and particle swarm optimization (PSO). It is demonstrated that the new optimizer is able to reach the global optimum much faster, with significantly fewer function evaluations. Subsequently, the new optimizer is also applied to a complex internal combustion (IC) engine combustion optimization case with nine control parameters related to fuel injection, initial thermodynamic conditions, and in-cylinder flow. It is again found that the new approach significantly lowers the number of function evaluations needed to reach the optimum design configuration (by up to 80%) when compared to particle swarm and genetic algorithm-based optimization techniques.
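One active-learning iteration of the weak-learner nominee selection might be sketched as follows. The Gaussian RBF model, the toy objective, the candidate pool, and the 95th-percentile merit cutoff are all illustrative assumptions, not the authors' settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # toy two-dimensional multimodal objective (to be minimized); stands
    # in for the expensive function evaluation in the paper
    return np.sin(3 * x[..., 0]) * np.cos(3 * x[..., 1]) + 0.1 * (x ** 2).sum(-1)

def rbf_fit(X, y, gamma=2.0):
    # "weak learner": a Gaussian RBF model fitted by a regularized solve
    K = np.exp(-gamma * ((X[:, None] - X[None]) ** 2).sum(-1))
    return np.linalg.solve(K + 1e-6 * np.eye(len(X)), y)

def rbf_predict(Xq, X, w, gamma=2.0):
    K = np.exp(-gamma * ((Xq[:, None] - X[None]) ** 2).sum(-1))
    return K @ w

# initial design and one nominee-selection step
X = rng.uniform(-1, 1, size=(20, 2))
y = f(X)
w = rbf_fit(X, y)

candidates = rng.uniform(-1, 1, size=(500, 2))
merit = -rbf_predict(candidates, X, w)       # higher merit = lower predicted f
threshold = np.quantile(merit, 0.95)          # "aristocratic" cutoff
nominees = candidates[merit >= threshold]     # points sent for evaluation
print(len(nominees), round(f(nominees).min(), 3))
```

In the full method the nominees, once evaluated, would be added to `X` and `y`, the strong learner's predicted optimum would also be evaluated, and the cycle would repeat.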


2020 ◽  
Vol 143 (3) ◽  
Author(s):  
Opeoluwa Owoyele ◽  
Pinaki Pal

Abstract In this work, a novel design optimization technique based on active learning, which involves dynamic exploration and exploitation of the design space of interest using an ensemble of machine learning algorithms, is presented. In this approach, a hybrid methodology is employed, incorporating an explorative weak learner (regularized basis function model) that fits high-level information about the response surface and an exploitative strong learner (based on a committee machine) that fits finer details around promising regions identified by the weak learner. For each design iteration, an aristocratic approach is used to select a set of nominees, where points that meet a threshold merit value as predicted by the weak learner are selected for evaluation. In addition to these points, the global optimum as predicted by the strong learner is also evaluated to enable rapid convergence to the actual global optimum once the most promising region has been identified by the optimizer. This methodology is first tested by applying it to the optimization of a two-dimensional multi-modal surface and, subsequently, to a complex internal combustion (IC) engine combustion optimization case with nine control parameters related to fuel injection, initial thermodynamic conditions, and in-cylinder flow. It is found that the new approach significantly lowers the number of function evaluations needed to reach the optimum design configuration (by up to 80%) when compared to conventional techniques such as particle swarm and genetic algorithm-based optimization.


2015 ◽  
Vol 18 (3) ◽  
pp. 466-480 ◽  
Author(s):  
Onur Genc ◽  
Ali Dag

Developing a reliable data-analytical method for predicting the velocity profile in small streams is important because it substantially decreases the money and effort spent on measurement procedures. Recent studies have shown that machine learning models can be used to achieve this goal. In the proposed framework, a tree-augmented Naïve Bayes approach, a member of the Bayesian network family, is employed both to explore the relations among the predictor attributes and to derive a probabilistic risk score associated with the predictions, which constitutes the novelty of the study. Data sets from four key stations in two different basins are employed, with eight observational variables and calculated non-dimensional parameters used as model inputs for estimating the response values, u (point velocities in the measured verticals). The results show that the proposed data-analytical approach yields results comparable to those of widely used, powerful machine learning algorithms. More importantly, novel information is gained by exploring the interrelations among the predictors and by deriving a case-specific probabilistic risk score for the prediction accuracy. These findings can help field engineers improve their decision-making in small streams.
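The tree structure that a tree-augmented Naïve Bayes learner adds over the attributes is a maximum spanning tree on pairwise (class-conditional) mutual information. A minimal sketch using plain mutual information and Prim's algorithm; the synthetic columns are assumptions for illustration:

```python
import math
from collections import Counter
from itertools import combinations

def mutual_information(xs, ys):
    """Empirical mutual information between two discrete attribute columns.
    A full TAN learner would use the class-conditional MI I(Xi; Xj | C);
    plain MI is used here to keep the sketch short."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def tan_tree(columns):
    """Maximum spanning tree over pairwise MI (Prim's algorithm): the edge
    set that tree-augmented Naive Bayes adds on top of the
    class-to-attribute arcs."""
    k = len(columns)
    mi = {(i, j): mutual_information(columns[i], columns[j])
          for i, j in combinations(range(k), 2)}
    in_tree, edges = {0}, []
    while len(in_tree) < k:
        i, j = max(((i, j) for (i, j) in mi
                    if (i in in_tree) != (j in in_tree)), key=mi.get)
        edges.append((i, j))
        in_tree |= {i, j}
    return edges

# tiny synthetic data: x1 copies x0, x2 is only weakly related to either
x0 = [0, 0, 1, 1, 0, 1, 0, 1]
x1 = x0[:]                      # perfectly correlated with x0
x2 = [0, 1, 0, 1, 0, 1, 0, 1]
print(tan_tree([x0, x1, x2]))   # edge (0, 1) should be selected
```

Because the tree has at most one extra parent per attribute, TAN relaxes the naive independence assumption at only a modest computational cost.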


Risks ◽  
2020 ◽  
Vol 9 (1) ◽  
pp. 4 ◽  
Author(s):  
Christopher Blier-Wong ◽  
Hélène Cossette ◽  
Luc Lamontagne ◽  
Etienne Marceau

In the past 25 years, computer scientists and statisticians have developed machine learning algorithms capable of modeling highly nonlinear transformations and interactions of input features. While actuaries use GLMs frequently in practice, only in the past few years have they begun studying these newer algorithms to tackle insurance-related tasks. In this work, we review the applications of machine learning in actuarial science and present the current state of the art in ratemaking and reserving. We first give an overview of neural networks, then briefly outline applications of machine learning algorithms in actuarial science tasks. Finally, we summarize future trends of machine learning for the insurance industry.


2021 ◽  
Vol 30 (1) ◽  
pp. 460-469
Author(s):  
Yinying Cai ◽  
Amit Sharma

Abstract In agricultural development and growth, efficient machinery and equipment play an important role, and numerous studies and patents aim to support smart agriculture, with machine learning technologies providing strong support for this growth. To explore machine learning technology and machine learning algorithms, most of the applications studied are based on swarm intelligence optimization. An optimized V3CFOA-RF model is built through V3CFOA. The algorithm is tested on a data set collected on rice pests, then analyzed and compared in detail with other existing algorithms. The results show that the proposed model and algorithm are not only more accurate in recognition and prediction but also mitigate the time-lag problem to a degree. The model and algorithm achieve higher accuracy in crop pest prediction, which ensures a more stable and higher output of rice. They can therefore be employed as an important decision-making instrument in the agricultural production sector.
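A basic fruit fly optimization (FOA) loop can be sketched as below. This is not the V3CFOA variant itself, and the toy objective stands in for the random forest validation error that the V3CFOA-RF model would actually optimize; every constant here is an assumption:

```python
import random

random.seed(2)

def objective(x, y):
    """Stand-in objective: in the V3CFOA-RF setting this would be the
    validation error of a random forest whose hyperparameters are decoded
    from the swarm position; here we minimize a simple 2D bowl."""
    return (x - 1.0) ** 2 + (y + 2.0) ** 2

def fruit_fly_optimize(n_flies=20, iters=100, step=0.5):
    # basic FOA loop: flies scatter around the swarm location, the best
    # "smell concentration" wins, and the swarm relocates there
    # (V3CFOA adds refinements not sketched here)
    sx, sy = random.uniform(-5, 5), random.uniform(-5, 5)
    best = objective(sx, sy)
    for _ in range(iters):
        trials = [(sx + random.uniform(-step, step),
                   sy + random.uniform(-step, step)) for _ in range(n_flies)]
        tx, ty = min(trials, key=lambda p: objective(*p))
        if objective(tx, ty) < best:
            best, sx, sy = objective(tx, ty), tx, ty
    return (sx, sy), best

pos, val = fruit_fly_optimize()
print(pos, val)
```

Because only objective values are needed, such swarm loops can wrap any learner whose hyperparameters are being tuned.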


2019 ◽  
Vol 141 (4) ◽  
Author(s):  
Andrew S. Gillman ◽  
Kazuko Fuchi ◽  
Philip R. Buskohl

Origami folding provides a novel method to transform two-dimensional (2D) sheets into complex functional structures. However, the enormity of the foldable design space necessitates the development of algorithms to efficiently discover new origami fold patterns with specific performance objectives. To address this challenge, this work combines a recently developed efficient modified truss finite element model with a ground structure-based topology optimization framework. A nonlinear mechanics model is required to capture the sequenced motion and large folding common in the actuation of origami structures. These highly nonlinear motions limit the ability to define convex objective functions, so parallelizable evolutionary optimization algorithms for traversing nonconvex origami design problems are developed and considered. The ability of this framework to discover fold topologies that maximize targeted actuation is verified for the well-known “Chomper” and “Square Twist” patterns. A simple twist-based design is also discovered using the verified framework. Through these case studies, the role of critical points and bifurcations emanating from sequenced deformation mechanisms (including the interplay of folding, facet bending, and stretching) on design optimization is analyzed. In addition, the performance of both gradient-based and evolutionary optimization algorithms is explored; genetic algorithms (GAs) consistently yield solutions with better performance, given the apparent nonconvexity of the response-design space.


Author(s):  
Aska E. Mehyadin ◽  
Adnan Mohsin Abdulazeez ◽  
Dathar Abas Hasan ◽  
Jwan N. Saeed

The bird classifier is a system equipped with machine learning technology that stores and classifies bird calls. Bird species can be identified by recording only the sound of the bird, which makes the data easier for the system to manage. The system also provides species classification resources to allow automated species detection from observations, teaching a machine to recognize and classify the species. Undesirable noises are filtered out and the recordings are sorted into data sets, where each sound is run through a noise suppression filter and a separate classification procedure so that the most useful data set can be easily processed. Mel-frequency cepstral coefficients (MFCCs) are used and tested with different algorithms, namely Naïve Bayes, J4.8, and the multilayer perceptron (MLP), to classify bird species. J4.8 achieved the highest accuracy (78.40%), with an elapsed time of 39.4 seconds.
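The three-classifier comparison can be sketched with scikit-learn stand-ins (GaussianNB, a decision tree in place of J4.8, and an MLP) on synthetic feature vectors in place of real MFCC extractions; the data set and all settings here are assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# synthetic stand-in for MFCC feature vectors: 13 coefficients per call,
# 4 bird species (the real pipeline would first extract MFCCs from audio)
X, y = make_classification(n_samples=600, n_features=13, n_informative=8,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "NaiveBayes": GaussianNB(),
    "DecisionTree": DecisionTreeClassifier(random_state=0),   # J4.8-like
    "MLP": MLPClassifier(max_iter=1000, random_state=0),
}
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}
print(scores)
```

On real MFCC data the ranking can differ from the paper's, since relative performance depends heavily on the features and the amount of training audio.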


Author(s):  
Jakub Gęca

The consequences of failures and unscheduled maintenance are the reasons why engineers have been trying to increase the reliability of industrial equipment for years. In modern solutions, predictive maintenance is a frequently used method, as it makes it possible to forecast failures and issue alerts about their possibility. This paper presents a summary of the machine learning algorithms that can be used in predictive maintenance and a comparison of their performance. The analysis was made on the basis of a data set from the Microsoft Azure AI Gallery. The paper presents a comprehensive approach to the issue, including feature engineering, preprocessing, dimensionality reduction techniques, and tuning of model parameters in order to obtain the highest possible performance. The research leads to the conclusion that, in the analysed case, the best algorithm achieved 99.92% accuracy on over 122 thousand test data records. In conclusion, predictive maintenance based on machine learning represents the future of machine reliability in industry.
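The kind of pipeline described, combining preprocessing, dimensionality reduction, and parameter tuning, might be sketched as follows. The synthetic data stands in for the Azure AI Gallery records, and every model choice and parameter grid here is illustrative rather than the paper's configuration:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# synthetic stand-in for telemetry-derived failure records: imbalanced
# classes mimic the rarity of failures in maintenance data
X, y = make_classification(n_samples=1000, n_features=20, n_informative=10,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          stratify=y, random_state=0)

pipe = Pipeline([("scale", StandardScaler()),
                 ("pca", PCA(n_components=10)),        # dimensionality reduction
                 ("clf", RandomForestClassifier(random_state=0))])
search = GridSearchCV(pipe, {"clf__n_estimators": [50, 100],
                             "clf__max_depth": [5, None]}, cv=3)
search.fit(X_tr, y_tr)
print(round(search.score(X_te, y_te), 3))
```

With heavily imbalanced failure data, accuracy alone can be misleading; in practice one would also inspect precision and recall on the failure class.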

