Monte Carlo Simulation Study
Recently Published Documents

Total documents: 1028 (five years: 254)
H-index: 47 (five years: 7)

2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Tahani A. Abushal ◽  
A. A. Soliman ◽  
G. A. Abd-Elmougod

The problem of statistical inference under jointly censored samples has received considerable attention in recent years. In this paper, we treat this problem when the units under test fail from different causes of failure, a setting known as the competing risks model. The model is formulated under the assumptions that there are only two independent causes of failure and that the units are drawn from two lines of production, with lifetimes following the Burr XII distribution. Under Type-I joint competing risks samples, we obtain the maximum likelihood (ML) and Bayes estimators. Interval estimation is discussed through asymptotic confidence intervals, bootstrap confidence intervals, and Bayes credible intervals. The quality of the theoretical results is assessed numerically through the analysis of a real data set and a Monte Carlo simulation study. Finally, the numerical results are summarized in a brief set of concluding remarks.
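
A minimal sketch of the core computation in such a study: maximum-likelihood fitting of Type-I censored Burr XII data inside a small Monte Carlo loop. This is not the authors' code; the joint two-sample competing-risks structure is omitted for brevity, and all parameter settings (`c_true`, `k_true`, `tau`, `n`, `reps`) are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import burr12

rng = np.random.default_rng(0)
c_true, k_true, tau, n, reps = 2.0, 1.5, 2.0, 100, 200  # hypothetical settings

def neg_loglik(theta, x_obs, n_cens, tau):
    c, k = theta
    if c <= 0 or k <= 0:
        return np.inf
    # Burr XII: log f(x) = log c + log k + (c-1) log x - (k+1) log(1 + x^c)
    ll = np.sum(np.log(c) + np.log(k) + (c - 1) * np.log(x_obs)
                - (k + 1) * np.log1p(x_obs ** c))
    ll += n_cens * (-k * np.log1p(tau ** c))   # survivors contribute S(tau)^n_cens
    return -ll

est = []
for _ in range(reps):
    x = burr12(c=c_true, d=k_true).rvs(size=n, random_state=rng)
    x_obs = x[x <= tau]                        # failures observed before time tau
    res = minimize(neg_loglik, x0=[1.0, 1.0],
                   args=(x_obs, n - x_obs.size, tau), method="Nelder-Mead")
    est.append(res.x)
print("mean ML estimates (c, k):", np.asarray(est).mean(axis=0))
```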


Entropy ◽  
2021 ◽  
Vol 24 (1) ◽  
pp. 9
Author(s):  
Muhammed Rasheed Irshad ◽  
Radhakumari Maya ◽  
Francesco Buono ◽  
Maria Longobardi

Tsallis introduced a non-logarithmic generalization of Shannon entropy, namely Tsallis entropy, which is non-extensive. Sati and Gupta proposed a cumulative residual information measure based on this non-extensive entropy, namely the cumulative residual Tsallis entropy (CRTE), and its dynamic version, the dynamic cumulative residual Tsallis entropy (DCRTE). In the present paper, we propose non-parametric kernel-type estimators for CRTE and DCRTE when the observations satisfy a ρ-mixing dependence condition. Asymptotic properties of the estimators are established under suitable regularity conditions. A numerical evaluation of the proposed estimators is presented, and a Monte Carlo simulation study is carried out.
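
A rough sketch of a kernel-type plug-in estimator for CRTE. The paper's exact estimator, the DCRTE case, and the ρ-mixing setting are not reproduced here; CRTE is taken in one common form, ξ_α = (α−1)⁻¹ ∫ (S(x) − S(x)^α) dx for α ≠ 1, with a Gaussian-kernel-smoothed survival function plugged in. The bandwidth rule and the i.i.d. test sample are assumptions.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import norm

def crte_kernel(x, alpha, h=None, grid_size=512):
    x = np.sort(np.asarray(x, float))
    n = x.size
    h = h or 1.06 * x.std(ddof=1) * n ** (-1 / 5)   # Silverman-style bandwidth
    grid = np.linspace(0.0, x[-1] + 4 * h, grid_size)
    # Kernel-smoothed survival function: S_hat(t) = 1 - mean_i Phi((t - X_i)/h)
    S = 1.0 - norm.cdf((grid[:, None] - x[None, :]) / h).mean(axis=1)
    S = np.clip(S, 0.0, 1.0)
    return trapezoid(S - S ** alpha, grid) / (alpha - 1)

rng = np.random.default_rng(1)
sample = rng.exponential(scale=1.0, size=500)       # hypothetical i.i.d. check case
print(crte_kernel(sample, alpha=1.5))
```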


Mathematics ◽  
2021 ◽  
Vol 9 (24) ◽  
pp. 3328
Author(s):  
Chien-Tai Lin ◽  
Yu Liu ◽  
Yun-Wei Li ◽  
Zhi-Wei Chen ◽  
Hassan M. Okasha

The recently introduced exponentiated generalized linear exponential distribution is a generalization of the generalized linear exponential distribution. In this paper, we study some statistical properties of this distribution, such as negative moments, moments of order statistics, the mean residual lifetime, and the asymptotic distributions of sample extreme order statistics. Several estimation procedures, including maximum likelihood estimation, corrected maximum likelihood estimation, modified maximum likelihood estimation, maximum product of spacings estimation, and least squares estimation, are compared via a Monte Carlo simulation study in terms of their biases, mean squared errors, and rates of obtaining reliable estimates. Recommendations are made from the simulation results, and a numerical example illustrates the distribution's use for modeling rainfall data from Orlando, Florida.
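
A minimal sketch of the maximum product of spacings (MPS) idea that appears in this comparison, using a Weibull stand-in rather than the exponentiated generalized linear exponential distribution, whose CDF is more involved. MPS maximizes the mean log of the uniform spacings F(x₍ᵢ₎) − F(x₍ᵢ₋₁₎); an ML fit is shown alongside for comparison.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

def neg_log_spacings(theta, x_sorted):
    shape, scale = theta
    if shape <= 0 or scale <= 0:
        return np.inf
    u = weibull_min.cdf(x_sorted, c=shape, scale=scale)
    spacings = np.diff(np.concatenate(([0.0], u, [1.0])))  # F(x_(0))=0, F(x_(n+1))=1
    if np.any(spacings <= 0):
        return np.inf
    return -np.mean(np.log(spacings))

rng = np.random.default_rng(2)
x = np.sort(weibull_min.rvs(c=1.8, scale=2.0, size=200, random_state=rng))
mps = minimize(neg_log_spacings, x0=[1.0, 1.0], args=(x,), method="Nelder-Mead")
mle = weibull_min.fit(x, floc=0)               # ML fit with location fixed at 0
print("MPS (shape, scale):", mps.x, " ML:", mle[0], mle[2])
```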


2021 ◽  
Author(s):  
Jessica L Fossum ◽  
Amanda Kay Montoya

Several options exist for conducting inference on indirect effects in mediation analysis. While bootstrap methods are the preferred inferential approach for testing mediation, they are time-consuming when the test must be performed many times, as in a power analysis. More computationally efficient alternatives are not as robust, meaning that the accuracy of their inferences is more affected by nonnormal and heteroskedastic data (Biesanz et al., 2010). Whereas previous research focused on the different sample sizes needed to achieve the same power with different inferential approaches (Fritz & MacKinnon, 2007), we explore how similar power estimates are at the same sample size. We compare the power estimates from six tests using a Monte Carlo simulation study, varying the path coefficients and the tests of the indirect effect. If tests produce similar power estimates, the more computationally efficient test can be used for the power analysis while the more intensive resampling test is reserved for the data analysis. We found that when the assumptions of linear regression are met, three tests consistently perform similarly: the joint significance test, the Monte Carlo confidence interval, and the percentile bootstrap confidence interval. Based on these results, we recommend using the more computationally efficient joint significance test for power analysis and the percentile bootstrap confidence interval for the data analysis.
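
A small sketch of two of the compared tests: the joint significance test and the Monte Carlo confidence interval for the indirect effect a·b. The path estimates and standard errors below are placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
a_hat, se_a = 0.39, 0.10   # hypothetical X -> M path estimate and SE
b_hat, se_b = 0.25, 0.09   # hypothetical M -> Y (given X) path estimate and SE

# Joint significance: reject H0 of no mediation iff both paths are significant.
joint_sig = (abs(a_hat / se_a) > 1.96) and (abs(b_hat / se_b) > 1.96)

# Monte Carlo CI: simulate each path from its estimated sampling distribution
# and take percentiles of the products.
draws = rng.normal(a_hat, se_a, 20_000) * rng.normal(b_hat, se_b, 20_000)
lo, hi = np.percentile(draws, [2.5, 97.5])
print(f"joint significance: {joint_sig}; 95% MC CI for ab: [{lo:.3f}, {hi:.3f}]")
```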


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Rashad M. El-Sagheer ◽  
Taghreed M. Jawa ◽  
Neveen Sayed-Ahmed

In this article, we consider estimation of the parameters of a generalized Pareto distribution, together with lifetime indices such as the reliability and hazard rate functions, when the failure data are progressive first-failure censored. Both classical and Bayesian techniques are developed. In the Bayesian framework, point estimation of the unknown parameters under both symmetric and asymmetric loss functions is discussed, using conjugate gamma and discrete priors for the shape and scale parameters, respectively. In addition, exact and approximate confidence intervals, as well as an exact confidence region for the estimators, are constructed. A practical example based on a simulated data set is analyzed. Finally, the performance of the Bayes estimates is compared with that of the maximum likelihood estimates through a Monte Carlo simulation study.
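
A sketch of the frequentist half of such a comparison: Monte Carlo bias and MSE of maximum-likelihood estimates of a generalized Pareto shape parameter on complete samples. The progressive first-failure censoring scheme and the Bayes estimators of the paper are not reproduced here, and the settings are illustrative.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(4)
xi_true, n, reps = 0.3, 150, 500                 # hypothetical settings
xi_hat = np.empty(reps)
for r in range(reps):
    x = genpareto.rvs(c=xi_true, size=n, random_state=rng)
    xi_hat[r], _, _ = genpareto.fit(x, floc=0)   # ML fit, location fixed at 0
print("bias:", xi_hat.mean() - xi_true, " MSE:", ((xi_hat - xi_true) ** 2).mean())
```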


Diagnostics ◽  
2021 ◽  
Vol 11 (12) ◽  
pp. 2275
Author(s):  
Ching-Ching Yang

This study aimed to investigate the feasibility of positron range correction based on three different convolutional neural network (CNN) models in preclinical PET imaging of Ga-68. The first model (CNN1) was originally designed for super-resolution recovery, while the second model (CNN2) and the third model (CNN3) were originally designed for pseudo CT synthesis from MRI. A preclinical PET scanner and 30 phantom configurations were modeled in Monte Carlo simulations, where each phantom configuration was simulated twice, once for Ga-68 (CNN input images) and once for back-to-back 511-keV gamma rays (CNN output images) with a 20 min emission scan duration. The Euclidean distance was used as the loss function to minimize the difference between CNN input and output images. According to our results, CNN3 outperformed CNN1 and CNN2 qualitatively and quantitatively. With regard to qualitative observation, it was found that boundaries in Ga-68 images became sharper after correction. As for quantitative analysis, the recovery coefficient (RC) and spill-over ratio (SOR) were increased after correction, while no substantial increase in coefficient of variation of RC (CVRC) or coefficient of variation of SOR (CVSOR) was observed. Overall, CNN3 should be a good candidate architecture for positron range correction in Ga-68 preclinical PET imaging.
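A minimal sketch of the training setup described above: a small 3D CNN mapping Ga-68 PET volumes to back-to-back-511-keV-like targets under a Euclidean (MSE) loss. The actual CNN1-CNN3 architectures are not given in the abstract, so the layer choices and tensor shapes here are illustrative only.

```python
import torch
import torch.nn as nn

model = nn.Sequential(                      # toy stand-in for CNN1/CNN2/CNN3
    nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 1, kernel_size=3, padding=1),
)
loss_fn = nn.MSELoss()                      # squared Euclidean distance per voxel
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

ga68 = torch.randn(2, 1, 32, 32, 32)        # placeholder Ga-68 input volumes
target = torch.randn(2, 1, 32, 32, 32)      # placeholder 511-keV target volumes
opt.zero_grad()
loss = loss_fn(model(ga68), target)         # one illustrative training step
loss.backward()
opt.step()
```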


2021 ◽  
Author(s):  
Andrew Chacon ◽  
Marissa Kielly ◽  
Harley Rutherford ◽  
Daniel R. Franklin ◽  
Anita Caracciolo ◽  
...  

Neutron Capture Enhanced Particle Therapy (NCEPT) boosts the effectiveness of particle therapy by capturing thermal neutrons produced by beam-target nuclear interactions in and around the treatment site, using tumour-specific ¹⁰B- or ¹⁵⁷Gd-based neutron capture agents. Neutron captures release high-LET secondary particles together with prompt gamma photons with energies of 478 keV (¹⁰B) or 7.94 MeV (¹⁵⁷Gd). A key requirement for NCEPT's translation is the development of in vivo dosimetry techniques which can measure both the direct ion dose and the dose due to neutron capture. In this work, we report signatures which can be used to discriminate between photons resulting from neutron capture and those originating from other processes. A Geant4 Monte Carlo simulation study of timing and energy thresholds for discriminating prompt gamma photons resulting from thermal neutron capture during NCEPT was conducted. Three simulated 300×300×300 mm³ cubic PMMA targets were irradiated by ⁴He or ¹²C ion beams with a spread-out Bragg peak (SOBP) depth range of 60 mm; one target is homogeneous, while the others include 10×10×10 mm³ neutron capture inserts (NCIs) of pure ¹⁰B or ¹⁵⁷Gd located at the distal edge of the SOBP. The arrival times of photons and neutrons entering a simulated 50×50×50 mm³ ideal detector were recorded. The majority of photons resulting from neutron capture were found to arrive at the detector at least 60 ns later than photons created by other processes. A range of candidate detector and thermal neutron shielding materials were simulated, and detections meeting the proposed acceptance criteria (i.e., falling within the target energy window and arriving at least 60 ns after beam-off) were classified as true or false positives, depending on their origin. The ratio of true to false positives (RTF) was calculated; for targets with ¹⁰B and ¹⁵⁷Gd NCIs, the detector materials which resulted in the highest RTF were cadmium-shielded CdTe and boron-shielded LSO, respectively. The optimal irradiation period for both carbon and helium ions was 1 µs for the ¹⁰B NCI and 1 ms for the ¹⁵⁷Gd NCI.
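
A schematic sketch of the acceptance test described above: keep detections that fall inside an energy window and arrive at least 60 ns after beam-off, then form the true/false-positive ratio (RTF). All event data and the window bounds here are synthetic placeholders, not the paper's simulation output.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 10_000
t_ns = rng.exponential(80.0, n)             # placeholder arrival times after beam-off
e_kev = rng.normal(478.0, 15.0, n)          # placeholder deposited energies
from_capture = rng.random(n) < 0.3          # placeholder ground-truth labels

window = (e_kev > 430.0) & (e_kev < 530.0)  # hypothetical window around 478 keV (B-10)
accepted = window & (t_ns >= 60.0)          # timing + energy acceptance criteria
rtf = (accepted & from_capture).sum() / max((accepted & ~from_capture).sum(), 1)
print("RTF:", rtf)
```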


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
M. Tisi ◽  
V. Mares ◽  
J. Schreiber ◽  
F. S. Englbrecht ◽  
W. Rühm

At the Center for Advanced Laser Applications (CALA), Garching, Germany, the LION (Laser-driven ION Acceleration) experiment is being commissioned, aiming at the production of laser-driven bunches of protons and light ions with multi-MeV energies and a repetition frequency of up to 1 Hz. A Geant4 Monte Carlo-based study of the secondary neutron and photon fields expected during LION's different commissioning phases is presented. The goal of this study is to characterize the secondary radiation environment inside and outside the LION cave. Three different primary proton spectra, taken from experimental results reported in the literature and representative of three future stages of LION's commissioning path, are used. Along with protons, electrons are also emitted through the laser-target interaction and likewise contribute to the production of secondary radiation. For the electron component of the three source terms, a simplified exponential model is used. Moreover, to reduce the simulation complexity, a simplified two-component geometrical model of the proton and electron sources is proposed. It was found that the radiation environment inside the experimental cave is dominated by either photons or neutrons, depending on the position in the room and the source term used. The higher the intensity of the source, the higher the neutron contribution to the total dose at all scored positions. Maximum neutron and photon ambient dose equivalent values normalized to 10⁹ simulated incident primaries were calculated at the exit of the vacuum chamber, where values of about 85 nSv (10⁹ primaries)⁻¹ and 1.0 μSv (10⁹ primaries)⁻¹ were found, respectively.
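
A sketch of the kind of simplified exponential electron source term the study mentions: energies drawn from f(E) ∝ exp(−E/T) by inverse-transform sampling. The spectral temperature value is a placeholder, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(6)
T_mev = 1.5                                   # hypothetical spectral temperature
u = rng.random(100_000)
energies = -T_mev * np.log1p(-u)              # E = -T ln(1-U), U ~ Uniform(0,1)
print("mean sampled electron energy (MeV):", energies.mean())
```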


2021 ◽  
Vol 73 (6) ◽  
pp. 1391-1402
Author(s):  
S. Genç ◽  
M. Mendeş

ABSTRACT This study was carried out with two purposes: to compare the performance of Regression Tree and Automatic Linear Modeling, and to determine the optimum sample size for these methods under different experimental conditions. A comprehensive Monte Carlo simulation study was designed for these purposes. The results showed that the percentage-of-explained-variation estimates of both Regression Tree and Automatic Linear Modeling were influenced by sample size, the number of variables, and the structure of the variance-covariance matrix. Automatic Linear Modeling performed better than Regression Tree under all experimental conditions. It was concluded that Regression Tree requires much larger samples than Automatic Linear Modeling to produce stable estimates.
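
A compact sketch of such a simulation: comparing the explained variation (R²) of a regression tree and a linear model across sample sizes on synthetic correlated predictors. SPSS's Automatic Linear Modeling is approximated here by ordinary least squares, and the covariance structure and coefficients are illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(7)
for n in (50, 200, 1000):
    # Four predictors with pairwise correlation 0.5
    X = rng.multivariate_normal(np.zeros(4), 0.5 * np.eye(4) + 0.5, size=n)
    y = X @ np.array([1.0, 0.5, 0.25, 0.0]) + rng.normal(0, 1, n)
    r2_lin = cross_val_score(LinearRegression(), X, y, cv=5, scoring="r2").mean()
    r2_tree = cross_val_score(DecisionTreeRegressor(max_depth=4), X, y,
                              cv=5, scoring="r2").mean()
    print(f"n={n:5d}  linear R2={r2_lin:.2f}  tree R2={r2_tree:.2f}")
```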

