Monte Carlo, fitting and Machine Learning for Tau leptons

Author(s):  
Vladimir Cherepanov ◽  
Elzbieta Richter-Was ◽  
Zbigniew Andrzej Was

The status of the τ lepton decay Monte Carlo generator TAUOLA and its main recent applications are reviewed. It is underlined that, in recent efforts on the development of new hadronic currents, the multi-dimensional nature of the distributions of the experimental data must be treated with great care. Studies of H → ττ; τ → hadrons indeed demonstrate that the multi-dimensional nature of the distributions is important and can be exploited for observables where τ leptons are used to constrain experimental data. For that part of the presentation, the use of the TAUOLA program for the phenomenology of H and Z decays at the LHC is discussed, in particular in the context of Higgs boson parity measurements with the use of Machine Learning techniques. Some additions relevant for QED lepton pair emission and electroweak corrections are mentioned as well.
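
As a rough, hedged illustration of the kind of Machine Learning classification invoked for the Higgs parity measurement (not the authors' actual pipeline): the CP-even and CP-odd hypotheses modulate the acoplanarity angle of the two τ decay planes in opposite directions, and a classifier can be trained on that angle. The modulation amplitude and the single-feature setup below are assumptions for illustration only.

```python
# Toy sketch: separate scalar vs pseudoscalar hypotheses whose
# acoplanarity angle phi* follows p(phi*) ∝ 1 ± A*cos(2*phi*).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
A = 0.3  # assumed modulation amplitude, illustration only

def sample_phi(sign, n):
    # Accept-reject sampling from p(phi) ∝ 1 + sign*A*cos(2*phi)
    out = []
    while len(out) < n:
        phi = rng.uniform(0, 2 * np.pi, n)
        keep = rng.uniform(0, 1 + A, n) < 1 + sign * A * np.cos(2 * phi)
        out.extend(phi[keep])
    return np.array(out[:n])

X = np.concatenate([sample_phi(+1, 50_000), sample_phi(-1, 50_000)])[:, None]
y = np.concatenate([np.ones(50_000), np.zeros(50_000)])  # 1 = scalar, 0 = pseudoscalar

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print(f"per-event separation accuracy: {clf.score(X_te, y_te):.3f}")
```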

2019 ◽  
Vol 211 ◽  
pp. 04006 ◽  
Author(s):  
Amy Lovell ◽  
Arvind Mohan ◽  
Patrick Talou ◽  
Michael Chertkov

Having accurate measurements of fission observables is important for a variety of applications, ranging from energy to non-proliferation, defense to astrophysics. Because not all of these data can be measured, it is necessary to be able to accurately calculate these observables as well. In this work, we exploit Monte Carlo and machine learning techniques to reproduce mass and kinetic energy yields, for phenomenological models and in a model-free way. We begin with the spontaneous fission of 252Cf, where there is abundant experimental data, to validate our approach, with the ultimate goal of creating a global yield model in order to predict quantities where data are not currently available.
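
As a hedged sketch of the two modeling strategies mentioned above, one phenomenological and one model-free: the toy code below fits a synthetic stand-in for a 252Cf mass-yield curve Y(A) with a two-Gaussian model and with a Gaussian-process regressor. The data, model form, and kernel are illustrative assumptions, not the authors' setup.

```python
# Minimal sketch, not the authors' code: fit a fission fragment mass
# yield Y(A) phenomenologically (two Gaussians) and model-free (GP).
import numpy as np
from scipy.optimize import curve_fit
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

A = np.arange(80, 173)  # fragment mass numbers

def two_gauss(A, y1, m1, s1, y2, m2, s2):
    return (y1 * np.exp(-0.5 * ((A - m1) / s1) ** 2)
            + y2 * np.exp(-0.5 * ((A - m2) / s2) ** 2))

# Synthetic stand-in for measured yields (two humps, asymmetric fission)
y_data = two_gauss(A, 6, 108, 7, 6, 143, 7) \
    + np.random.default_rng(1).normal(0, 0.1, A.size)

# Phenomenological fit
popt, _ = curve_fit(two_gauss, A, y_data, p0=[5, 105, 8, 5, 145, 8])

# Model-free fit
gp = GaussianProcessRegressor(kernel=RBF(length_scale=10.0)).fit(A[:, None], y_data)
y_gp = gp.predict(A[:, None])
print("Gaussian fit params:", np.round(popt, 2))
```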


2019 ◽  
Vol 9 (24) ◽  
pp. 5502 ◽  
Author(s):  
Baher Azzam ◽  
Freia Harzendorf ◽  
Ralf Schelenz ◽  
Walter Holweger ◽  
Georg Jacobs

White etching crack (WEC) failure is a failure mode that affects bearings in many applications, including wind turbine gearboxes, where it results in high, unplanned maintenance costs. WEC failure is currently unpredictable, and its root causes are not yet fully understood. While WECs were produced under controlled conditions in several investigations in the past, converging the findings from the different combinations of factors that led to WECs in different experiments remains a challenge. This challenge is tackled in this paper using machine learning (ML) models that are capable of capturing patterns in high-dimensional data belonging to several experiments in order to identify the variables that most influence WEC risk. Three different ML models were designed and applied to a dataset containing roughly 700 high- and low-risk oil compositions to identify the constituent chemical compounds that make a given oil composition high-risk with respect to WECs. This includes the first application of a purpose-built neural-network-based feature selection method. Out of 21 compounds, eight were identified as influential by models based on random forests and artificial neural networks. Association rules were also mined from the data to investigate the relationship between compound combinations and WEC risk, leading to results that support those of the previous analyses. In addition, the compound identified as most influential was confirmed to pose a high WEC risk in a separate investigation involving physical tests. The presented methods can be applied to other experimental data where a high number of measured variables potentially influence a certain outcome and where there is a need to identify the variables with the highest influence.
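
A minimal sketch of the feature-ranking step, assuming a random-forest importance ranking of the kind the paper describes; the dataset below is synthetic and the compound names are hypothetical placeholders.

```python
# Hedged sketch (not the paper's code): rank 21 oil-additive compounds
# by random-forest importance for a binary high/low WEC-risk label.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_samples, n_compounds = 700, 21
X = pd.DataFrame(rng.uniform(0, 1, (n_samples, n_compounds)),
                 columns=[f"compound_{i+1}" for i in range(n_compounds)])
# Synthetic label driven by a few compounds, mimicking the finding
# that only a subset of the 21 compounds is influential.
y = (X["compound_1"] + 0.8 * X["compound_5"] - 0.6 * X["compound_9"]
     + rng.normal(0, 0.1, n_samples)) > 0.6

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
ranking = pd.Series(rf.feature_importances_, index=X.columns)
print(ranking.sort_values(ascending=False).head(8))  # candidate influential compounds
```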


2021 ◽  
Vol 33 (4) ◽  
pp. 227-240
Author(s):  
Daria Igorevna Romanova

We calibrate the k-ε turbulence model for free-surface flows in a channel or on a slope. To calibrate the turbulence model, an experiment is carried out in an inclined rectangular research tray. In the experiment, the pressure in the flow is measured at different distances from the bottom using a Pitot tube; after transforming the data, the flow velocity profile is obtained. The k-ε turbulence model is calibrated against the experimental data using the Nelder-Mead optimization algorithm. The calibrated turbulence model is then used to simulate the outburst of a lake near the Maliy Azau glacier on Elbrus (Central Caucasus).
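
A minimal sketch of the calibration loop, assuming SciPy's Nelder-Mead implementation; the measured profile and the `run_k_epsilon` solver wrapper are placeholders, since the actual CFD setup is not described in this abstract. The starting point uses the standard k-ε coefficients (C_mu = 0.09, C_eps1 = 1.44, C_eps2 = 1.92).

```python
# Hedged sketch: tune k-epsilon coefficients so the simulated velocity
# profile matches the Pitot-tube measurements in a least-squares sense.
import numpy as np
from scipy.optimize import minimize

z_meas = np.array([0.005, 0.01, 0.02, 0.04, 0.08])   # heights above bed [m], placeholder
u_meas = np.array([0.9, 1.2, 1.5, 1.8, 2.0])         # Pitot-derived velocities [m/s], placeholder

def run_k_epsilon(c_mu, c_eps1, c_eps2):
    # Placeholder: a real implementation would launch the CFD run
    # and sample the computed velocity at z_meas.
    return 2.0 * (z_meas / 0.08) ** (0.1 * c_eps2 / c_eps1) * np.sqrt(c_mu / 0.09)

def misfit(params):
    u_sim = run_k_epsilon(*params)
    return np.sum((u_sim - u_meas) ** 2)  # least-squares discrepancy

res = minimize(misfit, x0=[0.09, 1.44, 1.92], method="Nelder-Mead")
print("calibrated (C_mu, C_eps1, C_eps2):", res.x)
```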


2020 ◽  
Author(s):  
Clayton Eduardo Rodrigues ◽  
Cairo Lúcio Nascimento Júnior ◽  
Domingos Alves Rade

A comparative analysis of machine learning techniques for rotating machine fault diagnosis based on vibration spectra images is presented. The feature extraction of different types of faults, such as unbalance, misalignment, shaft crack, rotor-stator rub, and hydrodynamic instability, is performed by processing the spectral image of vibration orbits acquired during the rotating machine run-up. The classifiers are trained with simulation data and tested with both simulation and experimental data. The experimental data are obtained from measurements performed on a rotor-disk system test rig supported on hydrodynamic bearings. To generate the simulated data, a numerical model of the rotating system is developed using the Finite Element Method (FEM). Deep learning, ensemble, and traditional classification methods are evaluated. The ability of the methods to generalize the image classification is evaluated based on their performance in classifying experimental test patterns that were not used during training. The obtained results suggest that, despite considerable computational cost, the method based on a Convolutional Neural Network (CNN) presents the best performance for classification of faults based on spectral images.
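
For orientation, a hedged sketch of a CNN image classifier of the kind evaluated here; the paper's actual architecture is not specified in this abstract, so the input size and layer choices below are assumptions.

```python
# Illustrative sketch only: a small CNN that classifies vibration-orbit
# spectral images into the five fault classes listed above.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 5  # unbalance, misalignment, shaft crack, rub, instability

model = models.Sequential([
    layers.Input(shape=(128, 128, 1)),      # grayscale spectral image (assumed size)
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# Training would use simulated images and validate on experimental ones, e.g.:
# model.fit(X_sim, y_sim, validation_data=(X_exp, y_exp), epochs=20)
```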


2018 ◽  
Author(s):  
Ljubisa Miskovic ◽  
Jonas Béal ◽  
Michael Moret ◽  
Vassily Hatzimanikatis

A persistent obstacle to constructing kinetic models of metabolism is uncertainty in the kinetic properties of enzymes. Currently available methods for building kinetic models can cope indirectly with uncertainties by integrating data from different biological levels and origins into models. In this study, we use the recently proposed computational approach iSCHRUNK (in Silico Approach to Characterization and Reduction of Uncertainty in the Kinetic Models), which combines Monte Carlo parameter sampling methods and machine learning techniques in the context of Bayesian inference. Monte Carlo parameter sampling methods allow us to exploit synergies between different data sources and generate a population of kinetic models that are consistent with the available data and physicochemical laws. Machine learning allows us to data-mine the a priori generated kinetic parameters together with the integrated datasets and derive posterior distributions of kinetic parameters consistent with the observed physiology. In this work, we used iSCHRUNK to address a design question: can we identify which kinetic parameters, and which values of those parameters, give rise to a desired metabolic behavior? Such information is important for a wide variety of studies ranging from biotechnology to medicine. To illustrate the proposed methodology, we performed Metabolic Control Analysis, computed the flux control coefficients of the xylose uptake (XTR), and identified parameters that ensure a rate improvement of XTR in a glucose-xylose co-utilizing S. cerevisiae strain. Our results indicate that only three kinetic parameters need to be accurately characterized to describe the studied physiology, and ultimately to design and control the desired responses of the metabolism. This framework paves the way for a new generation of methods that will systematically integrate the wealth of available omics data and efficiently extract the information necessary for metabolic engineering and synthetic biology decisions.

Author Summary
Kinetic models are the most promising tool for understanding the complex dynamic behavior of living cells. The primary goal of kinetic models is to capture the properties of metabolic networks as a whole, and thus we need large-scale models for dependable in silico analyses of metabolism. However, uncertainty in kinetic parameters impedes the development of kinetic models, and uncertainty levels increase with the model size. Tools that address the issues with parameter uncertainty and that can reduce uncertainty propagation through the system are therefore needed. In this work, we applied a method called iSCHRUNK that combines parameter sampling and machine learning techniques to characterize the uncertainties and uncover intricate relationships between the parameters of kinetic models and the responses of the metabolic network. The proposed method allowed us to identify a small number of parameters that determine the responses in the network regardless of the values of the other parameters. As a consequence, in future studies of metabolism, it will be sufficient to explore a reduced kinetic space, and more comprehensive analyses of large-scale and genome-scale metabolic networks will be computationally tractable.
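
A conceptual sketch of the iSCHRUNK idea as summarized above (not the published code): sample kinetic parameters Monte Carlo style, label each sampled model by whether it reproduces the target behavior, and let a classifier reveal which few parameters are decisive. The surrogate rate function, dimensions, and threshold below are illustrative assumptions.

```python
# Hedged sketch: MC parameter sampling + ML to find decisive parameters.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_models, n_params = 10_000, 12
theta = rng.uniform(0, 1, (n_models, n_params))  # sampled kinetic parameters

def toy_xtr_rate(theta):
    # Hypothetical surrogate: the "physiology" depends on only 3 of the
    # parameters, mirroring the paper's finding of a small decisive set.
    return theta[:, 0] * theta[:, 3] / (0.2 + theta[:, 7])

# Label each sampled model: does it show the desired (improved) rate?
labels = toy_xtr_rate(theta) > np.median(toy_xtr_rate(theta))

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(theta, labels)
order = np.argsort(clf.feature_importances_)[::-1]
print("most decisive parameter indices:", order[:3])
```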


2006 ◽  
Author(s):  
Christopher Schreiner ◽  
Kari Torkkola ◽  
Mike Gardner ◽  
Keshu Zhang

2020 ◽  
Vol 12 (2) ◽  
pp. 84-99
Author(s):  
Li-Pang Chen

In this paper, we investigate the analysis and prediction of time-dependent data. We focus our attention on four different stocks selected from the Yahoo Finance historical database. To build models and predict future stock prices, we consider three different machine learning techniques: Long Short-Term Memory (LSTM), Convolutional Neural Networks (CNN), and Support Vector Regression (SVR). By treating the close price, open price, daily low, daily high, adjusted close price, and volume of trades as predictors in the machine learning methods, it can be shown that the prediction accuracy is improved.
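
A hedged sketch of one of the three techniques (an LSTM regressor on the six listed predictors); the window length, layer sizes, and the synthetic `history` array standing in for a Yahoo Finance download are all assumptions.

```python
# Illustrative sketch only: LSTM next-day close-price regression on
# sliding windows of OHLCV-style features.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW = 30        # days of history per sample (assumed)
N_FEATURES = 6     # open, high, low, close, adjusted close, volume

# Placeholder data; in practice `history` would come from Yahoo Finance.
history = np.random.default_rng(0).normal(size=(1000, N_FEATURES)).astype("float32")
X = np.stack([history[i:i + WINDOW] for i in range(len(history) - WINDOW)])
y = history[WINDOW:, 3]  # next-day close price as the regression target

model = models.Sequential([
    layers.Input(shape=(WINDOW, N_FEATURES)),
    layers.LSTM(32),
    layers.Dense(1),   # predicted next close
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print("in-sample MSE:", model.evaluate(X, y, verbose=0))
```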

