Neural networks for full phase-space reweighting and parameter tuning

2020 ◽  
Vol 101 (9) ◽  
Author(s):  
Anders Andreassen ◽  
Benjamin Nachman


Author(s):
Peetak Mitra ◽  
Niccolò Dal Santo ◽  
Majid Haghshenas ◽  
Shounak Mitra ◽  
Conor Daly ◽  
...  

The adoption of Machine Learning (ML) for building emulators of complex physical processes has grown rapidly in recent years. While neural networks are good function approximators, optimizing the hyper-parameters of a network to reach a global minimum is not trivial and often requires human knowledge and expertise. In this light, automatic ML (autoML) methods have attracted wide interest because they automate the process of network hyper-parameter tuning. In addition, Neural Architecture Search (NAS) has shown promising results for improving model performance. While autoML methods have grown in popularity for image, text, and other applications, their effectiveness for high-dimensional, complex scientific datasets remains to be investigated. In this work, a data-driven emulator for turbulence closure terms in the context of Large Eddy Simulation (LES) models is trained using Artificial Neural Networks, and an autoML framework based on Bayesian Optimization is proposed that incorporates priors to jointly optimize the hyper-parameters and conduct a full neural network architecture search in order to converge to a global minimum. Additionally, we compare the effect of different network weight initializations and optimizers such as ADAM, SGDM, and RMSProp to find the best-performing setting. Weight-space and function-space similarities along the optimization trajectory are investigated, and critical differences in the evolution of the learning process are noted and compared to theory. We observe that the ADAM optimizer and Glorot initialization consistently perform better, while RMSProp outperforms SGDM, as the latter appears to become stuck in a local minimum. This autoML BayesOpt framework therefore provides a means to choose the best hyper-parameter settings for a given dataset.
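As a rough illustration of the kind of Bayesian-optimization hyper-parameter search described above (not the authors' framework), the following sketch uses scikit-optimize's gp_minimize to tune the width, depth, learning rate, and optimizer of a small scikit-learn MLP on synthetic data; the search space, ranges, and data are assumptions made for the example only.

```python
# Minimal sketch of Bayesian hyper-parameter optimization for a small MLP
# emulator. This is an illustration, not the paper's autoML framework;
# the search space, ranges, and data below are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score
from skopt import gp_minimize
from skopt.space import Integer, Real, Categorical

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                      # stand-in for flow features
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=500)

space = [
    Integer(16, 256, name="width"),                # neurons per hidden layer
    Integer(1, 4, name="depth"),                   # number of hidden layers
    Real(1e-4, 1e-1, prior="log-uniform", name="lr"),
    Categorical(["adam", "sgd"], name="solver"),   # optimizer choice
]

def objective(params):
    width, depth, lr, solver = params
    model = MLPRegressor(hidden_layer_sizes=(width,) * depth,
                         learning_rate_init=lr, solver=solver,
                         max_iter=500, random_state=0)
    # Negative because gp_minimize minimizes; the CV score is R^2.
    return -cross_val_score(model, X, y, cv=3).mean()

result = gp_minimize(objective, space, n_calls=25, random_state=0)
print("best hyper-parameters:", result.x, "best CV loss:", result.fun)
```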


2020 ◽  
Vol 49 (4) ◽  
pp. 482-494
Author(s):  
Jurgita Kapočiūtė-Dzikienė ◽  
Senait Gebremichael Tesfagergish

Deep Neural Networks (DNNs) have proven especially successful in Natural Language Processing (NLP), including Part-Of-Speech (POS) tagging, the task of mapping words to their corresponding POS labels depending on context. Despite recent developments in language technologies, low-resourced languages (such as the East African language Tigrinya) have received little attention. We investigate the effectiveness of Deep Learning (DL) solutions for the low-resourced Tigrinya language of the North Ethiopic branch. We selected Tigrinya as the testbed example and tested state-of-the-art DL approaches seeking to build the most accurate POS tagger. We evaluated DNN classifiers (a Feed-Forward Neural Network, FFNN; Long Short-Term Memory, LSTM; Bidirectional LSTM; and a Convolutional Neural Network, CNN) on top of neural word2vec word embeddings, with a small training corpus known as the Nagaoka Tigrinya Corpus. To determine the best DNN classifier type, its architecture, and its hyper-parameter set, both manual and automatic hyper-parameter tuning were performed. The BiLSTM method proved to be the most suitable for our task: it achieved the highest accuracy, equal to 92%, which is 65% above the random baseline.
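The kind of BiLSTM tagger evaluated above can be sketched in a few lines of Keras; the vocabulary size, tag count, sequence length, and embedding dimension below are placeholder assumptions, not the settings tuned in the paper.

```python
# Minimal sketch of a BiLSTM POS tagger; sizes and hyper-parameters are
# placeholders, not those tuned for the Nagaoka Tigrinya Corpus.
import numpy as np
from tensorflow.keras import layers, models

VOCAB_SIZE, NUM_TAGS, MAX_LEN, EMB_DIM = 10_000, 20, 50, 100  # assumed sizes

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    # The paper uses pretrained word2vec vectors; here the embedding is
    # simply learned from scratch for the sake of a self-contained example.
    layers.Embedding(VOCAB_SIZE, EMB_DIM, mask_zero=True),
    layers.Bidirectional(layers.LSTM(128, return_sequences=True)),
    layers.TimeDistributed(layers.Dense(NUM_TAGS, activation="softmax")),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Toy data: random word indices and tags, just to show the expected shapes.
X = np.random.randint(1, VOCAB_SIZE, size=(64, MAX_LEN))
y = np.random.randint(0, NUM_TAGS, size=(64, MAX_LEN))
model.fit(X, y, epochs=1, batch_size=16, verbose=0)
```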


2018 ◽  
Vol 51 (3) ◽  
pp. 443-449 ◽  
Author(s):  
Cecília M. Costa ◽  
Ittalo S. Silva ◽  
Rafael D. de Sousa ◽  
Renato A. Hortegal ◽  
Carlos Danilo M. Regis

Sensors ◽  
2019 ◽  
Vol 19 (4) ◽  
pp. 931 ◽  
Author(s):  
Ayham Zaitouny ◽  
Thomas Stemler ◽  
Shannon Algar

Positioning and tracking a moving target from limited positional information is a frequently encountered problem. Given noisy observations of the target's position, one wants to estimate the true trajectory and reconstruct the full phase space, including velocity and acceleration. The shadowing filter offers a robust methodology for such estimation and reconstruction. Here, we highlight and validate important merits of this methodology for real-life applications. In particular, we explore the filter's performance when dealing with correlated or uncorrelated noise and with irregular sampling in time, and we show how it can be optimised even when the true dynamics of the system are not known.
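The shadowing filter itself is not reproduced here; as a stand-in, the sketch below shows the basic task it addresses, recovering velocity and acceleration from noisy position samples, using a plain Savitzky-Golay derivative filter instead. The trajectory, noise level, and window settings are arbitrary assumptions.

```python
# Illustration of reconstructing velocity and acceleration from noisy
# position samples. This uses a Savitzky-Golay derivative filter, NOT the
# shadowing filter discussed in the paper; trajectory and noise are made up.
import numpy as np
from scipy.signal import savgol_filter

dt = 0.1
t = np.arange(0.0, 20.0, dt)
true_pos = np.sin(t) + 0.1 * t ** 2                      # assumed trajectory
noisy_pos = true_pos + np.random.normal(scale=0.05, size=t.size)

# A cubic polynomial is fitted over a 21-sample window; deriv selects which
# derivative of that local polynomial is returned.
pos_est = savgol_filter(noisy_pos, window_length=21, polyorder=3)
vel_est = savgol_filter(noisy_pos, window_length=21, polyorder=3,
                        deriv=1, delta=dt)
acc_est = savgol_filter(noisy_pos, window_length=21, polyorder=3,
                        deriv=2, delta=dt)

true_vel = np.cos(t) + 0.2 * t
print("RMS velocity error:", np.sqrt(np.mean((vel_est - true_vel) ** 2)))
```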


2018 ◽  
Vol 621 ◽  
pp. A13 ◽  
Author(s):  
Jovan Veljanoski ◽  
Amina Helmi ◽  
Maarten Breddels ◽  
Lorenzo Posti

Context. Extended stellar haloes are a natural by-product of the hierarchical formation of massive galaxies like the Milky Way. If merging is a non-negligible factor in the growth of our Galaxy, evidence of such events should be encoded in its stellar halo. The reliable identification of genuine halo stars is a challenging task, however.
Aims. With the advent of the Gaia space telescope, we are ushered into a new era of Galactic astronomy. The first Gaia data release contains the positions, parallaxes, and proper motions for over two million stars, mostly in the solar neighbourhood. The second Gaia data release will enlarge this sample to over 1.5 billion stars, the brightest ~5 million of which will have full phase-space information. Our aim in this paper is to develop a machine learning model that reliably identifies halo stars, even when their full phase-space information is not available.
Methods. We use the Gradient Boosted Trees algorithm to build a supervised halo star classifier. The classifier is trained on a sample of stars extracted from the Gaia Universe Model Snapshot, convolved with the errors of the public TGAS data (a subset of Gaia DR1) as well as with the expected uncertainties of the upcoming Gaia DR2 catalogue. We also train our classifier on a dataset resulting from the cross-match between the TGAS and RAVE catalogues, where the halo stars are labelled in an entirely model-independent way. We then use this model to identify halo stars in TGAS.
Results. When full phase-space information is available and for Gaia DR2-like uncertainties, our classifier recovers 90% of the halo stars with at most 30% distance errors, in a completely unseen test set and with negligible levels of contamination. When the line-of-sight velocity is not available, we recover ~60% of such halo stars, with less than 10% contamination. When applied to the TGAS catalogue, our classifier detects 337 high-confidence red giant branch halo stars. At first glance this number may seem small; however, it is consistent with the expectation from the models, given the uncertainties in the data. The large parallax errors are in fact the biggest limitation on our ability to identify a large number of halo stars in all the cases studied.
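A minimal sketch of a gradient-boosted-tree halo-star classifier in the spirit of the method described, using scikit-learn rather than the exact implementation, and trained here on synthetic stand-in data; the feature names, units, and labels are illustrative assumptions, not Gaia/TGAS measurements.

```python
# Minimal sketch of a gradient-boosted-tree halo/disc star classifier.
# The features mirror the kind of astrometric inputs discussed (parallax,
# proper motions, optional line-of-sight velocity), but the data here are
# synthetic stand-ins, not Gaia/TGAS measurements.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "parallax":   rng.gamma(2.0, 1.0, n),     # mas, assumed
    "pmra":       rng.normal(0.0, 20.0, n),   # mas/yr, assumed
    "pmdec":      rng.normal(0.0, 20.0, n),   # mas/yr, assumed
    "radial_vel": rng.normal(0.0, 80.0, n),   # km/s, assumed
})
# Fake label: call the kinematically "hottest" ~15% of stars halo members,
# purely so the classifier has something to learn in this illustration.
speed = np.hypot(df["pmra"], df["pmdec"]) / df["parallax"].clip(lower=0.1)
is_halo = (speed > np.percentile(speed, 85)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(df, is_halo, test_size=0.3,
                                          random_state=0, stratify=is_halo)
clf = GradientBoostingClassifier(n_estimators=200, max_depth=3,
                                 learning_rate=0.1, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```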


2020 ◽  
Vol 2020 ◽  
pp. 1-24
Author(s):  
Aayushi Singla ◽  
M. Kaur

In continuation of our earlier work, in which we analysed the charged-particle multiplicities in leptonic and hadronic interactions at different center-of-mass energies, in full phase space as well as in restricted phase space, using the shifted Gompertz distribution, a detailed analysis of the normalized moments and normalized factorial moments is reported here. A two-component model, in which the probability distribution function is obtained from the superposition of two shifted Gompertz distributions, as introduced in our earlier work, has also been used for the analysis. This is the first analysis of the moments with the shifted Gompertz distribution. The analysis has also been performed to predict the moments of the multiplicity distribution for e+e− collisions at √s = 500 GeV at a future collider.
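For reference, the sketch below evaluates the standard shifted Gompertz density and computes normalized moments C_q = ⟨n^q⟩/⟨n⟩^q and normalized factorial moments F_q = ⟨n(n−1)…(n−q+1)⟩/⟨n⟩^q from a discretized distribution; the parameter values are arbitrary examples, not the fits reported in the paper.

```python
# Shifted Gompertz density and normalized (factorial) moments of a discrete
# multiplicity distribution P(n). Parameter values are arbitrary examples,
# not the fitted values reported in the paper.
import numpy as np

def shifted_gompertz_pdf(x, b, eta):
    """f(x; b, eta) = b e^{-bx} e^{-eta e^{-bx}} [1 + eta (1 - e^{-bx})], x >= 0."""
    e = np.exp(-b * x)
    return b * e * np.exp(-eta * e) * (1.0 + eta * (1.0 - e))

def normalized_moments(n, p, q_max=5):
    """Return C_q = <n^q>/<n>^q and F_q = <n(n-1)...(n-q+1)>/<n>^q for q = 2..q_max."""
    p = p / p.sum()
    mean = np.sum(n * p)
    C, F = {}, {}
    for q in range(2, q_max + 1):
        C[q] = np.sum(n ** q * p) / mean ** q
        falling = np.ones_like(n, dtype=float)
        for k in range(q):
            falling *= (n - k)          # falling factorial n(n-1)...(n-q+1)
        F[q] = np.sum(falling * p) / mean ** q
    return C, F

# Discretize an (arbitrary) shifted Gompertz curve onto integer multiplicities.
n = np.arange(0, 61)
p = shifted_gompertz_pdf(n.astype(float), b=0.12, eta=5.0)
C, F = normalized_moments(n, p)
print("C_q:", C)
print("F_q:", F)
```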

