Applying machine learning optimization methods to the production of a quantum gas

2020 ◽  
Vol 1 (1) ◽  
pp. 015007 ◽  
Author(s):  
A J Barker ◽  
H Style ◽  
K Luksch ◽  
S Sunami ◽  
D Garrick ◽  
...  
Electronics ◽  
2021 ◽  
Vol 10 (13) ◽  
pp. 1578
Author(s):  
Daniel Szostak ◽  
Adam Włodarczyk ◽  
Krzysztof Walkowiak

The rapid growth of network traffic drives the need for new network technologies. Artificial intelligence provides suitable tools for improving currently used network optimization methods. In this paper, we propose a procedure for network traffic prediction. Motivated by the characteristics of optical networks (and other network technologies), we focus on predicting fixed bitrate levels, called traffic levels. We develop and evaluate two approaches based on different supervised machine learning (ML) methods: classification and regression. We examine four different ML models with various selected features. The tested datasets are based on real traffic patterns provided by the Seattle Internet Exchange Point (SIX). The obtained results are analyzed using a new quality metric, which allows researchers to find the best forecasting algorithm in terms of network resource usage and operational costs. Our research shows that regression provides better results than classification for all analyzed datasets. Additionally, the final choice of the most appropriate ML algorithm and model should depend on the network operator's expectations.
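A minimal sketch of the two approaches compared above, assuming a synthetic hourly traffic series in place of the SIX traces; the traffic-level granularity, lagged features, and random-forest models are illustrative placeholders, not the paper's setup:

```python
# Hedged sketch: classification vs. regression for traffic-level forecasting.
# Synthetic data and feature choices are placeholders, not the paper's datasets.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)
hours = np.arange(24 * 365)
traffic = 50 + 30 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 5, hours.size)  # Gbit/s

LEVEL = 10.0                        # assumed fixed bitrate granularity (one "traffic level")
levels = np.ceil(traffic / LEVEL)   # traffic expressed as discrete levels

def lagged(series, n_lags=24):
    """Feature matrix of the previous n_lags samples for each prediction target."""
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    return X, np.arange(n_lags, len(series))

X, idx = lagged(traffic)
split = int(0.8 * len(idx))
X_tr, X_te = X[:split], X[split:]
lvl_tr, lvl_te = levels[idx][:split], levels[idx][split:]
raw_tr = traffic[idx][:split]

# Approach 1: classify the traffic level directly.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, lvl_tr)
pred_clf = clf.predict(X_te)

# Approach 2: regress the bitrate, then round up to the next level.
reg = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, raw_tr)
pred_reg = np.ceil(reg.predict(X_te) / LEVEL)

for name, pred in [("classification", pred_clf), ("regression", pred_reg)]:
    over = np.mean(pred > lvl_te)    # overprovisioned slots (wasted resources)
    under = np.mean(pred < lvl_te)   # underprovisioned slots (lost traffic)
    print(f"{name}: exact={np.mean(pred == lvl_te):.2f} over={over:.2f} under={under:.2f}")
```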


2021 ◽  
Vol 11 (4) ◽  
pp. 1627
Author(s):  
Yanbin Li ◽  
Gang Lei ◽  
Gerd Bramerdorfer ◽  
Sheng Peng ◽  
Xiaodong Sun ◽  
...  

This paper reviews recent developments in design optimization methods for electromagnetic devices, with a focus on machine learning methods. First, recent advances in multi-objective, multidisciplinary, multilevel, topology, fuzzy, and robust design optimization of electromagnetic devices are overviewed. Second, the performance prediction and design optimization of electromagnetic devices based on machine learning algorithms, including artificial neural networks, support vector machines, extreme learning machines, random forests, and deep learning, are reviewed. Last, to meet modern requirements for high manufacturing/production quality and lifetime reliability, several promising topics, including the application of cloud services and digital twins, are discussed as future directions for the design optimization of electromagnetic devices.
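As a hedged illustration of the performance-prediction idea reviewed above, the sketch below compares three of the named model families on a toy dataset; the "device" response, geometry variables, and hyperparameters are invented placeholders, not results from the review:

```python
# Hedged sketch: comparing ML models for device performance prediction.
# The response function stands in for an expensive finite-element result
# (e.g. average torque); sample sizes and settings are illustrative only.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(300, 4))   # e.g. normalized geometry parameters
y = (np.sin(2 * np.pi * X[:, 0]) * X[:, 1] + 0.5 * X[:, 2] ** 2
     + 0.05 * rng.normal(size=300))        # toy "simulated" performance value

models = {
    "ANN (MLP)": MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=1),
    "SVM (SVR)": SVR(C=10.0, gamma="scale"),
    "Random forest": RandomForestRegressor(n_estimators=200, random_state=1),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name:14s} mean R^2 = {r2.mean():.3f}")
```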


2022 ◽  
Author(s):  
Andrea Angulo ◽  
Lankun Yang ◽  
Eray S Aydil ◽  
Miguel A. Modestino

Autonomous chemical process development and optimization methods use algorithms to explore the operating parameter space based on feedback from experimentally determined exit stream compositions. Measuring the compositions of multicomponent streams...
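A hedged sketch of such a closed-loop optimizer under stated assumptions: the "experiment", the operating variables (temperature, flow rate), and the upper-confidence-bound acquisition rule below are placeholders, not the authors' system or algorithm:

```python
# Hedged sketch of a closed-loop process optimizer: propose operating
# conditions, "measure" the exit-stream composition, update a surrogate,
# repeat. The reaction model and parameter ranges are invented placeholders.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def measure_yield(temperature, flow_rate):
    """Stand-in for an experiment returning product yield from stream analysis."""
    return np.exp(-((temperature - 80) / 30) ** 2) * np.exp(-((flow_rate - 2.0) / 1.5) ** 2)

rng = np.random.default_rng(2)
X = rng.uniform([40, 0.5], [120, 5.0], size=(5, 2))      # initial experiments
y = np.array([measure_yield(*x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(20):                                      # autonomous feedback loop
    gp.fit(X, y)
    candidates = rng.uniform([40, 0.5], [120, 5.0], size=(500, 2))
    mu, sigma = gp.predict(candidates, return_std=True)
    nxt = candidates[np.argmax(mu + 1.0 * sigma)]        # upper-confidence-bound pick
    X = np.vstack([X, nxt])
    y = np.append(y, measure_yield(*nxt))

best = X[np.argmax(y)]
print(f"best conditions found: T={best[0]:.1f} C, flow={best[1]:.2f}, yield={y.max():.3f}")
```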


2019 ◽  
Vol 141 (9) ◽  
Author(s):  
Daniel M. Probst ◽  
Mandhapati Raju ◽  
Peter K. Senecal ◽  
Janardhan Kodavasal ◽  
Pinaki Pal ◽  
...  

This work evaluates different optimization algorithms for computational fluid dynamics (CFD) simulations of engine combustion. Due to the computational expense of CFD simulations, emulators built with machine learning algorithms were used as surrogates for the optimizers. Two types of emulators were used: a Gaussian process (GP) and a weighted combination of machine learning methods called SuperLearner (SL). The emulators were trained using a dataset of 2048 CFD simulations that were run concurrently on a supercomputer. The design of experiments (DOE) for the CFD runs was obtained by perturbing nine input parameters using a Monte Carlo method. The CFD simulations were of a heavy-duty engine running with a low-octane, gasoline-like fuel in a partially premixed compression ignition mode. Ten optimization algorithms were tested, including types typically used in research applications. Each optimizer was allowed 800 function evaluations and was randomly tested 100 times. The optimizers were evaluated for the median, minimum, and maximum merits obtained in the 100 attempts. Some optimizers required more sequential evaluations, thereby resulting in longer wall-clock times to reach an optimum. The best-performing optimization methods were particle swarm optimization (PSO), differential evolution (DE), GENOUD (an evolutionary algorithm), and the micro-genetic algorithm (GA). These methods found a high median optimum as well as a reasonable minimum optimum over the 100 trials. Moreover, all of these methods were able to operate with fewer than 100 successive iterations, which reduced the wall-clock time required in practice. Two methods were found to be effective but required a much larger number of successive iterations: the DIRECT and MALSCHAINS algorithms. A random search method that completed in a single iteration performed poorly in finding optimum designs but was included to illustrate the limitation of highly concurrent search methods. The last three methods, Nelder–Mead, bound optimization by quadratic approximation (BOBYQA), and constrained optimization by linear approximation (COBYLA), did not perform as well.
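A hedged sketch of the emulator-based workflow under stated assumptions: the CFD merit function is replaced by a toy nine-parameter function, the emulator is a Gaussian process, and differential evolution (one of the better-performing optimizer families above) searches the emulator rather than the CFD code; the DOE size and bounds are placeholders:

```python
# Hedged sketch of emulator-based optimization: fit a GP emulator to a
# precomputed DOE, then let an optimizer query only the cheap emulator.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from scipy.optimize import differential_evolution

N_PARAMS, N_DOE = 9, 256                     # assumed dimensionality / DOE size

def toy_merit(x):
    """Stand-in for the CFD-derived merit value (higher is better)."""
    return -np.sum((x - 0.3) ** 2) + 0.1 * np.sum(np.cos(5 * x))

rng = np.random.default_rng(3)
X_doe = rng.uniform(0.0, 1.0, size=(N_DOE, N_PARAMS))    # perturbed input parameters
y_doe = np.array([toy_merit(x) for x in X_doe])          # concurrent "CFD" results

emulator = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
emulator.fit(X_doe, y_doe)

# The optimizer only ever evaluates the emulator, never the CFD code.
res = differential_evolution(lambda x: -emulator.predict(x.reshape(1, -1))[0],
                             bounds=[(0.0, 1.0)] * N_PARAMS,
                             maxiter=100, seed=3)
print("emulator optimum:", res.x.round(2), "emulated merit:", -res.fun)
print("true merit at that design:", toy_merit(res.x))
```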


2020 ◽  
Vol 117 (44) ◽  
pp. 27162-27170
Author(s):  
Adityanarayanan Radhakrishnan ◽  
Mikhail Belkin ◽  
Caroline Uhler

Identifying computational mechanisms for memorization and retrieval of data is a long-standing problem at the intersection of machine learning and neuroscience. Our main finding is that standard overparameterized deep neural networks trained using standard optimization methods implement such a mechanism for real-valued data. We provide empirical evidence that 1) overparameterized autoencoders store training samples as attractors and thus iterating the learned map leads to sample recovery, and that 2) the same mechanism allows for encoding sequences of examples and serves as an even more efficient mechanism for memory than autoencoding. Theoretically, we prove that when trained on a single example, autoencoders store the example as an attractor. Lastly, by treating a sequence encoder as a composition of maps, we prove that sequence encoding provides a more efficient mechanism for memory than autoencoding.
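A minimal sketch of the attractor test, assuming a small fully connected autoencoder rather than the paper's architectures: overfit on a single example, then iterate the learned map from a random input and check whether it converges to the stored example.

```python
# Hedged sketch: does an overparameterized autoencoder store its training
# example as an attractor? Network size and training settings are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)
x_train = torch.rand(1, 32)                    # the single training example

autoencoder = nn.Sequential(
    nn.Linear(32, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 32),
)
opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
for _ in range(5000):                          # train to (near) interpolation
    opt.zero_grad()
    loss = ((autoencoder(x_train) - x_train) ** 2).mean()
    loss.backward()
    opt.step()

# Iterate the learned map f(f(...f(z)...)) starting from a random point.
z = torch.rand(1, 32)
with torch.no_grad():
    for _ in range(100):
        z = autoencoder(z)
print("distance of iterated output to training example:",
      (z - x_train).norm().item())
```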


2020 ◽  
Author(s):  
Kate Higgins ◽  
Sai Mani Valleti ◽  
Maxim Ziatdinov ◽  
Sergei Kalinin ◽  
Mahshid Ahmadi

Hybrid organic-inorganic perovskites have attracted immense interest as promising materials for next-generation solar cells; however, issues regarding long-term stability still require further study. Here, we develop an automated experimental workflow based on combinatorial synthesis and rapid-throughput characterization to explore the long-term stability of these materials in ambient conditions, and apply it to four model perovskite systems: MAxFAyCs1-x-yPbBr3, MAxFAyCs1-x-yPbI3, CsxFAyMA1-x-yPb(Brx+yI1-x-y)3, and CsxMAyFA1-x-yPb(Ix+yBr1-x-y)3. We also develop a machine learning-based workflow to quantify the evolution of each system as a function of composition, based on overall changes in the photoluminescence spectra as well as specific peak positions and intensities. We find the dependence of stability on composition to be extremely non-uniform within the composition space, suggesting the presence of preferential compositional regions. The proposed workflow is universal and can be applied to other perovskite systems and solution-processable materials. Furthermore, the incorporation of experimental optimization methods, e.g., those based on Gaussian processes, will enable the transition from combinatorial synthesis to guided materials research and optimization.
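A hedged sketch of the spectral-quantification step, assuming synthetic photoluminescence spectra and non-negative matrix factorization as one plausible ML-based decomposition (not necessarily the authors' exact method):

```python
# Hedged sketch: quantify the evolution of a composition's photoluminescence
# over time via an NMF decomposition plus peak position/intensity tracking.
# The synthetic spectra below are illustrative only.
import numpy as np
from sklearn.decomposition import NMF

wavelength = np.linspace(450, 850, 400)                   # nm

def gaussian_pl(center, intensity, width=25.0):
    return intensity * np.exp(-((wavelength - center) / width) ** 2)

rng = np.random.default_rng(4)
n_timesteps = 50
# Synthetic composition whose emission peak drifts and fades over time.
spectra = np.array([gaussian_pl(520 + 0.8 * t, 0.98 ** t) for t in range(n_timesteps)])
spectra += np.abs(rng.normal(0, 0.01, spectra.shape))     # keep values non-negative

# Overall evolution: weights of two NMF components over time.
nmf = NMF(n_components=2, init="nndsvda", max_iter=1000, random_state=4)
weights = nmf.fit_transform(spectra)                      # shape (time, components)

# Peak-specific evolution: position and intensity of the dominant peak.
peak_pos = wavelength[spectra.argmax(axis=1)]
peak_int = spectra.max(axis=1)
print(f"peak shift over run: {peak_pos[-1] - peak_pos[0]:+.1f} nm, "
      f"intensity loss: {1 - peak_int[-1] / peak_int[0]:.2f}")
print("NMF weights (first vs last):", weights[0].round(2), weights[-1].round(2))
```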


Materials ◽  
2020 ◽  
Vol 14 (1) ◽  
pp. 163
Author(s):  
Muhammad Arif Mahmood ◽  
Anita Ioana Visan ◽  
Carmen Ristoscu ◽  
Ion N. Mihailescu

Additive manufacturing, with an emphasis on 3D printing, has recently become popular due to its exceptional advantages over conventional manufacturing processes. However, 3D printing process parameters are challenging to optimize, as they influence the properties and usage time of printed parts. It is therefore a complex task to develop a correlation between process parameters and printed-part properties via traditional optimization methods. Machine learning techniques have recently been validated for carrying out intricate pattern identification and developing such deterministic relationships, eliminating the need to develop and solve physical models. Among machine learning models, the artificial neural network (ANN) is the most widely utilized, owing to its ability to handle large datasets and its strong computational power. This study compiles the advancement of ANNs in several aspects of 3D printing. Challenges in applying ANNs to 3D printing and their potential solutions are indicated. Finally, upcoming trends for the application of ANNs in 3D printing are projected.
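A minimal sketch of the ANN use case described above, assuming invented process parameters (laser power, scan speed, layer height) and a synthetic "measured" part density in place of experimental data:

```python
# Hedged sketch: a small ANN mapping 3D-printing process parameters to a
# part property. Parameter ranges and the response model are placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
n = 500
laser_power  = rng.uniform(100, 400, n)     # W
scan_speed   = rng.uniform(200, 1200, n)    # mm/s
layer_height = rng.uniform(0.02, 0.08, n)   # mm
X = np.column_stack([laser_power, scan_speed, layer_height])

# Stand-in for a measured relative density (%); real data would be experimental.
energy_density = laser_power / (scan_speed * layer_height)
y = 90 + 8 * np.exp(-((energy_density - 20) / 15) ** 2) + rng.normal(0, 0.3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=5)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                                   random_state=5))
model.fit(X_tr, y_tr)
print(f"test R^2: {model.score(X_te, y_te):.3f}")
print("predicted density at 250 W, 800 mm/s, 0.04 mm:",
      model.predict([[250, 800, 0.04]]).round(2))
```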

