Comparison of JET-C DD neutron rates independently predicted by the ASCOT and TRANSP Monte Carlo heating codes

2021
Author(s): Henri Weisen, Paula Sirén, Jari Varje

Abstract Simulations of the DD neutron rates predicted by the ASCOT and TRANSP Monte Carlo heating codes are compared for a diverse set of JET-C (JET with carbon plasma-facing components) plasmas. A previous study [1] of this data set using TRANSP found that the predicted neutron rates systematically exceeded the measured ones by factors between 1 and 2. No single explanation for the discrepancies was found at the time, despite a large number of candidates, including anomalous fast-ion loss mechanisms, having been examined. The results cast doubt on our ability to correctly predict neutron rates in the deuterium-tritium plasmas expected in the JET D-T campaign (DTE2). For the study presented here, the calculations are independently repeated using ASCOT with different equilibria and an independent mapping of the temperature and density profiles to the computational grid. Significant differences are observed between the two investigations, with smaller systematic differences between measured and predicted neutron rates for the ASCOT runs. These differences are traced back not to intrinsic differences between the ASCOT and TRANSP codes, but to the differences in the profiles and equilibria used. The results suggest that the discrepancies reported in ref. [1] do not require invoking unidentified plasma processes, and they highlight the sensitivity of such calculations to the plasma equilibrium and the necessity of a careful mapping of the ion and electron density and temperature profiles.

2021
Author(s): Filippo Zonta, Lucia Sanchis, Eero Hirvijoki

Abstract This paper presents a novel scheme to improve the statistics of simulated fast-ion loss signals and power loads to plasma-facing components in fusion devices. With the so-called Backward Monte Carlo method, the probabilities of marker particles reaching a chosen target surface can be approximately traced from the target back into the plasma. Using these probabilities as a priori information for the well-established Forward Monte Carlo method, the statistics of fast-ion simulations are significantly improved. For testing purposes, the scheme has been implemented in the ASCOT suite of codes and applied to a realistic ASDEX Upgrade configuration with beam-ion distributions.


2004, Vol 2004 (8), pp. 421-429
Author(s): Souad Assoudou, Belkheir Essebbar

This note is concerned with Bayesian estimation of the transition probabilities of a binary Markov chain observed from heterogeneous individuals. The model is founded on Jeffreys' prior, which allows the transition probabilities to be correlated. The Bayesian estimator is approximated by means of Markov chain Monte Carlo (MCMC) techniques. The performance of the Bayesian estimates is illustrated by analyzing a small simulated data set.
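The single-chain, uncorrelated special case of this setup is conjugate and needs no MCMC: under Jeffreys' Beta(1/2, 1/2) prior, each row of the transition matrix has a Beta posterior. A minimal sketch (the transition matrix and chain length below are illustrative choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a binary Markov chain with a known transition matrix
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
chain = [0]
for _ in range(2000):
    chain.append(int(rng.choice(2, p=P[chain[-1]])))

# Transition counts n_ij
counts = np.zeros((2, 2))
for a, b in zip(chain[:-1], chain[1:]):
    counts[a, b] += 1

# Jeffreys' prior Beta(1/2, 1/2) on each row gives the posterior mean
# (n_ij + 1/2) / (n_i. + 1), which shrinks the raw frequencies slightly
post_mean = (counts + 0.5) / (counts.sum(axis=1, keepdims=True) + 1.0)
print(post_mean)
```

The paper's model couples the transition probabilities across heterogeneous individuals, which breaks this conjugacy and is why MCMC approximation is needed there.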


2020, Vol 9 (1), pp. 47-60
Author(s): Samir K. Ashour, Ahmed A. El-Sheikh, Ahmed Elshahhat

In this paper, Bayesian and non-Bayesian estimation of a two-parameter Weibull lifetime model in the presence of progressive first-failure censored data with binomial random removals is considered. Based on the s-normal approximation to the asymptotic distribution of the maximum likelihood estimators, two-sided approximate confidence intervals for the unknown parameters are constructed. Using gamma conjugate priors, several Bayes estimates and associated credible intervals are obtained under the squared error loss function. The proposed estimators cannot be expressed in closed form and must be evaluated numerically by a suitable iterative procedure. A Bayesian approach is developed using Markov chain Monte Carlo techniques to generate samples from the posterior distributions and in turn to compute the Bayes estimates and associated credible intervals. To analyze the performance of the proposed estimators, a Monte Carlo simulation study is conducted. Finally, a real data set is discussed for illustration purposes.
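Even in the uncensored baseline case, the two-parameter Weibull MLE has no closed form: the shape equation is typically solved by a fixed-point (or Newton) iteration, after which the scale follows in closed form. A sketch for complete data (the paper's censoring scheme is not modeled here; the sample size and true parameters are illustrative):

```python
import numpy as np

def weibull_mle(x, tol=1e-8, max_iter=200):
    """Fixed-point iteration for the two-parameter Weibull MLE
    (complete, uncensored sample x > 0)."""
    lx = np.log(x)
    k = 1.0
    for _ in range(max_iter):
        xk = x ** k
        # Shape equation: 1/k = sum(x^k log x)/sum(x^k) - mean(log x)
        k_new = 1.0 / (np.sum(xk * lx) / np.sum(xk) - lx.mean())
        if abs(k_new - k) < tol:
            k = k_new
            break
        k = k_new
    lam = np.mean(x ** k) ** (1.0 / k)  # scale, closed form given shape
    return k, lam

rng = np.random.default_rng(7)
x = 1.5 * rng.weibull(2.0, size=5000)  # true scale 1.5, true shape 2
k_hat, lam_hat = weibull_mle(x)
print(k_hat, lam_hat)
```

Under censoring the likelihood gains survival-function terms and this simple fixed point no longer applies, which is why the paper resorts to general iterative procedures and MCMC.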


Geophysics, 2020, Vol 85 (4), pp. WA41-WA52
Author(s): Dario Grana, Leonardo Azevedo, Mingliang Liu

Among the large variety of mathematical and computational methods for estimating reservoir properties such as facies and petrophysical variables from geophysical data, deep machine-learning algorithms have gained significant popularity for their ability to obtain accurate solutions for geophysical inverse problems in which the physical models are partially unknown. Solutions of classification and inversion problems are generally not unique, and uncertainty quantification studies are required to quantify the uncertainty in the model predictions and determine the precision of the results. Probabilistic methods, such as Monte Carlo approaches, provide a reliable way to capture the variability of the set of possible models that match the measured data. Here, we focused on the classification of facies from seismic data and benchmarked the performance of three different algorithms: a recurrent neural network, Monte Carlo acceptance/rejection sampling, and Markov chain Monte Carlo. We tested and validated these approaches at the well locations by comparing the classification predictions to the reference facies profile, with accuracy measured by the mismatch between the predictions and the log facies profile. Our study found that when the training data set of the neural network is large enough and the prior information about the facies transition probabilities in the Monte Carlo approach is not informative, machine-learning methods lead to more accurate solutions; however, the uncertainty of the solution might be underestimated. When some prior knowledge of the facies model is available, for example from nearby wells, Monte Carlo methods provide solutions with similar accuracy to the neural network and allow a more robust quantification of the uncertainty of the solution.
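The acceptance/rejection idea can be illustrated in miniature (this is not the paper's implementation; the two-facies Markov prior, impedance means, and noise level are made-up numbers): draw facies profiles from the Markov prior and accept each draw with probability proportional to its likelihood under the observed log.

```python
import numpy as np

rng = np.random.default_rng(3)

# Markov prior over two facies (0 = shale, 1 = sand) along a short profile
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])
means = np.array([2.2, 2.6])   # facies-dependent impedance means (made up)
sigma = 0.2                    # observation noise (made up)

ref = np.array([0] * 6 + [1] * 6)                  # reference facies profile
obs = means[ref] + rng.normal(0, sigma, ref.size)  # synthetic observed log

def sample_prior():
    f = [int(rng.integers(2))]
    for _ in range(ref.size - 1):
        f.append(int(rng.choice(2, p=T[f[-1]])))
    return np.array(f)

def log_lik(f):
    return -0.5 * np.sum((obs - means[f]) ** 2) / sigma ** 2

# Upper bound on the log-likelihood (pointwise best facies at each depth),
# used to normalize the acceptance probability to at most 1
best = -0.5 * np.sum(np.min((obs[:, None] - means[None, :]) ** 2,
                            axis=1)) / sigma ** 2

accepted = []
for _ in range(200_000):
    f = sample_prior()
    if np.log(rng.random()) < log_lik(f) - best:   # accept/reject step
        accepted.append(f)
    if len(accepted) == 50:
        break

posterior_mean = np.mean(accepted, axis=0)  # per-depth probability of sand
```

The spread of the accepted profiles is what provides the uncertainty quantification that a single deterministic classifier does not.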


Geophysics, 1994, Vol 59 (4), pp. 577-590
Author(s): Side Jin, Raul Madariaga

Seismic reflection data contain information on small‐scale impedance variations and a smooth reference velocity model. Given a reference velocity model, the reflectors can be obtained by linearized migration‐inversion. If the reference velocity is incorrect, the reflectors obtained by inverting different subsets of the data will be incoherent. We propose to use the coherency of these images to invert for the background velocity distribution. We have developed a two‐step iterative inversion method in which we separate the retrieval of small‐scale variations of the seismic velocity from the longer‐period reference velocity model. Given an initial background velocity model, we use a waveform misfit functional for the inversion of small‐scale velocity variations. For this linear step we use the linearized migration‐inversion method based on ray theory that we have recently developed with Lambaré and Virieux. The reference velocity model is then updated by a Monte Carlo inversion method. For the nonlinear inversion of the velocity background, we introduce an objective functional that measures the coherency of the short wavelength components obtained by inverting different common shot gathers at the same locations. The nonlinear functional is calculated directly in migrated data space to avoid expensive numerical forward modeling by finite differences or ray theory. Our method is somewhat similar to an iterative migration velocity analysis, but we do an automatic search for relatively large‐scale 1-D reference velocity models. We apply the nonlinear inversion method to a marine data set from the North Sea and show that nonlinear inversion can be applied to realistic-scale data sets to obtain a laterally heterogeneous velocity model with a reasonable amount of computer time.


2019, Vol 35 (3), pp. 1373-1392
Author(s): Dong Ding, Axel Gandy, Georg Hahn

Abstract We consider a statistical test whose p value can only be approximated using Monte Carlo simulations. We are interested in deciding whether the p value for an observed data set lies above or below a given threshold such as 5%. We want to ensure that the resampling risk, the probability of the (Monte Carlo) decision being different from the true decision, is uniformly bounded. This article introduces a simple open-ended method with this property, the confidence sequence method (CSM). We compare our approach to another algorithm, SIMCTEST, which also guarantees an (asymptotic) uniform bound on the resampling risk, as well as to other Monte Carlo procedures without a uniform bound. CSM is free of tuning parameters and conservative. It has the same theoretical guarantee as SIMCTEST and, in many settings, similar stopping boundaries. As it is much simpler than other methods, CSM is a useful method for practical applications.
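The core idea can be sketched in a few lines: keep resampling, and stop as soon as a confidence sequence for the p value excludes the threshold alpha, with epsilon bounding the resampling risk. The sketch below uses a Robbins-type binomial boundary in the spirit of CSM's stopping rule; the function and parameter names are ours, not from the paper.

```python
import math
import random

def csm_decide(sample_exceed, alpha=0.05, eps=1e-3, max_n=100_000):
    """Decide whether the true p value lies above or below `alpha`.

    `sample_exceed()` returns 1 if one Monte Carlo resample of the test
    statistic is at least as extreme as the observed one, else 0.  The
    stopping rule checks a Robbins-type boundary:
        stop once (n + 1) * Binom(n, alpha).pmf(S_n) <= eps,
    which bounds the probability of a wrong decision by eps.
    """
    s = 0
    log_eps = math.log(eps)
    for n in range(1, max_n + 1):
        s += sample_exceed()
        # log of the Binomial(n, alpha) pmf at s, via lgamma to avoid overflow
        log_b = (math.lgamma(n + 1) - math.lgamma(s + 1)
                 - math.lgamma(n - s + 1)
                 + s * math.log(alpha) + (n - s) * math.log(1 - alpha))
        if math.log(n + 1) + log_b <= log_eps:
            return "above" if s / n > alpha else "below"
    return "undecided"

# Toy use: the resampling oracle is a Bernoulli draw with the true p value
random.seed(1)
decision = csm_decide(lambda: 1 if random.random() < 0.2 else 0)
print(decision)
```

There are no tuning parameters beyond epsilon, which is what makes the method open-ended: sampling simply continues until the boundary is crossed or the budget runs out.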


2015, Vol 2015, pp. 1-12
Author(s): Mohammed Alguraibawi, Habshah Midi, A. H. M. Rahmatullah Imon

Identification of high leverage points is crucial because they are responsible for inaccurate predictions and invalid inferential statements, as they have a large impact on the computed values of various estimates. It is essential to classify high leverage points into good and bad leverage points because only the bad leverage points have an undue effect on the parameter estimates. It is now evident that when a group of high leverage points is present in a data set, the existing robust diagnostic plot fails to classify them correctly. This problem is due to masking and swamping effects. In this paper, we propose a new robust diagnostic plot that correctly classifies good and bad leverage points by reducing both masking and swamping effects. The formulation of the proposed plot is based on the Modified Generalized Studentized Residuals. We investigate the performance of the proposed method through a Monte Carlo simulation study and some well-known data sets. The results indicate that the proposed method improves the rate of detection of bad leverage points and reduces the swamping and masking effects.
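The paper's MGSR-based plot is not reproduced here, but the classical starting point for any leverage diagnostic is the hat-matrix diagonal with the usual 2p/n cutoff. A minimal sketch with one planted high leverage point (the design and cutoff are standard textbook choices, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(42)

# Simple regression design with one planted high leverage point
n, p = 50, 2
X = np.column_stack([np.ones(n), rng.normal(0, 1, n)])
X[0, 1] = 10.0                      # outlying x-value -> high leverage

# Hat matrix H = X (X'X)^{-1} X'; the leverage of observation i is H_ii
H = X @ np.linalg.solve(X.T @ X, X.T)
h = np.diag(H)

flagged = np.where(h > 2 * p / n)[0]   # common rule-of-thumb cutoff
print(flagged)
```

Classifying the flagged points as good versus bad additionally requires a robust residual scale, which is exactly the gap the paper's diagnostic plot addresses; single-point diagnostics like this one are also what masking and swamping defeat when outliers occur in groups.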

