A THIRD GENERATION TOMOGRAPHY SYSTEM WITH FIFTEEN DETECTORS SIMULATED BY MONTE CARLO METHOD

2019 ◽  
Vol 7 (2A) ◽  
Author(s):  
Alexandre Gimenez Alvarez ◽  
Alexandre França Velo ◽  
Vagner Fernandez ◽  
Samir L. Somessari ◽  
Francisco F. Sprenger ◽  
...  

This paper describes a Monte Carlo simulation, using MCNP4C, of a multichannel third-generation tomography system containing two radioactive sources, 192Ir (316.5–468 keV) and 137Cs (662 keV), and a set of fifteen NaI(Tl) detectors, each 1 inch in diameter and 2 inches thick, in fan-beam geometry, positioned diametrically opposite the sources. Each detector moves in 10 steps of 0.24°, totaling 150 virtual detectors per projection, after which the system rotates 2°. The Monte Carlo simulation was performed to evaluate the viability of this configuration. For this, a multiphase phantom containing polymethyl methacrylate (PMMA, ρ ≈ 1.19 g/cm³), iron (ρ ≈ 7.874 g/cm³), aluminum (ρ ≈ 2.6989 g/cm³) and air (ρ ≈ 1.20479E-03 g/cm³) was simulated. The number of simulated histories was 1.1E+09 per projection, and the tally used was F8, which gives the pulse height in each detector. The data obtained from the simulation were used to reconstruct the simulated phantom with the iterative statistical Maximum Likelihood Expectation Maximization (ML-EM) algorithm. Each detector provides a gamma spectrum of the sources, and pulse height analysis (PHA) with 10% windows on the 316.5 keV and 662 keV photopeaks was performed. This technique provides two reconstructed images of the simulated phantom. The reconstructed images show high spatial resolution, and the temporal resolution (the time for one complete revolution) is estimated at about 2.5 hours.
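The reconstruction step can be illustrated with a minimal ML-EM iteration. The toy system matrix and phantom below are invented stand-ins for the simulated fan-beam geometry; only the multiplicative update rule is the actual ML-EM technique.

```python
import numpy as np

# Minimal ML-EM sketch: x_{k+1} = x_k * A^T(y / A x_k) / A^T 1.
# A, x_true and y are a hypothetical toy system, not the MCNP4C geometry.
rng = np.random.default_rng(0)
n_pix, n_det = 16, 64
A = rng.random((n_det, n_pix))          # toy system matrix (all positive)
x_true = rng.random(n_pix) + 0.1        # toy attenuation/activity map
y = A @ x_true                          # noiseless projection data

x = np.ones(n_pix)                      # uniform initial estimate
sens = A.T @ np.ones(n_det)             # sensitivity image (column sums)
for _ in range(200):
    ratio = y / (A @ x)                 # measured / forward-projected
    x *= (A.T @ ratio) / sens           # multiplicative ML-EM update

print(np.linalg.norm(A @ x - y))
```

The update keeps every pixel nonnegative by construction, which is one reason ML-EM is preferred over unconstrained least squares for emission/transmission data.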

2006 ◽  
Vol 3 (4) ◽  
pp. 1603-1627 ◽  
Author(s):  
W. Wang ◽  
P. H. A. J. M. van Gelder ◽  
J. K. Vrijling ◽  
X. Chen

Abstract. Lo's R/S test (Lo, 1991), the GPH test (Geweke and Porter-Hudak, 1983) and the maximum likelihood estimation method implemented in S-Plus (S-MLE) are evaluated through intensive Monte Carlo simulations for detecting the existence of long memory. It is shown that it is difficult to find an appropriate lag q for Lo's test for different AR and ARFIMA processes, which makes the use of Lo's test very tricky. In general, the GPH test outperforms Lo's test, but in cases of strong autocorrelation (e.g., AR(1) processes with φ=0.97 or even 0.99) the GPH test is useless, even for time series of large size. Although the S-MLE method does not provide a statistical test for the existence of long memory, the estimates of d given by S-MLE seem to give a good indication of whether long memory is present. Data size has a significant impact on the power of all three methods. Generally, the power of Lo's test and the GPH test increases with data size, and the estimates of d from the GPH test and S-MLE converge as data size increases. According to the results of all three methods, all daily flow series exhibit long memory. The intensity of long memory in daily streamflow processes has only a very weak positive relationship with the scale of the watershed.
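As an illustration of one of the three methods, here is a minimal sketch of the GPH log-periodogram regression for the memory parameter d, with an assumed bandwidth of √n Fourier frequencies and applied to white noise (where d should be near zero) rather than to streamflow data:

```python
import numpy as np

# GPH estimator sketch: regress the log periodogram at the m lowest
# Fourier frequencies on log(4 sin^2(lambda/2)); d is minus the slope.
def gph_d(x):
    n = len(x)
    m = int(np.sqrt(n))                      # assumed bandwidth choice
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    pgram = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    reg = np.log(4 * np.sin(lam / 2) ** 2)   # GPH regressor
    slope = np.polyfit(reg, np.log(pgram), 1)[0]
    return -slope

rng = np.random.default_rng(1)
d_hat = gph_d(rng.standard_normal(4096))     # white noise: true d = 0
print(round(d_hat, 3))
```

For an ARFIMA(0, d, 0) input the same regression would recover d > 0; the abstract's point is that strong short-range AR(1) dependence contaminates these low-frequency ordinates and ruins the estimate.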


2021 ◽  
Author(s):  
Mehmet Niyazi Cankaya ◽  
Roberto Vila

Abstract The maximum log_q likelihood estimation method is a generalization of the well-known maximum log-likelihood method that overcomes the problem of modeling non-identical observations (inliers and outliers). The parameter q is a tuning constant that manages the modeling capability. The Weibull distribution is flexible and popular for problems in engineering. In this study, the method is used to estimate the parameters of the Weibull distribution when non-identical observations exist. Since the main idea is based on the modeling capability of the objective function p(x; θ) = log_q[f(x; θ)], we observe that the finiteness of the score functions cannot play a role in robust estimation for inliers. The properties of the Weibull distribution are examined. In the numerical experiments, the parameters of the Weibull distribution are estimated by the log_q and, as its special form, log likelihood methods under different designs of contamination of the underlying Weibull distribution. The optimization is performed via a genetic algorithm. The modeling competence of p(x; θ) and its insensitivity to non-identical observations are observed by Monte Carlo simulation. The value of q can be chosen using the mean squared error in simulation and the p-value of the Kolmogorov–Smirnov test statistic used to evaluate fitting competence. Thus, the problem of determining the value of q for real data sets can be overcome.
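A minimal sketch of the log_q likelihood objective for a clean Weibull sample, maximized here by a crude grid search instead of the paper's genetic algorithm; the sample size, the grid, and q = 0.95 are assumed values chosen only for illustration (for q near 1 the small bias of the log_q estimator stays within the grid's resolution):

```python
import numpy as np

# log_q(u) = (u^(1-q) - 1) / (1 - q), recovering log(u) as q -> 1.
def logq(u, q):
    return np.log(u) if q == 1.0 else (u ** (1.0 - q) - 1.0) / (1.0 - q)

def weibull_pdf(x, shape, scale):
    z = x / scale
    return (shape / scale) * z ** (shape - 1) * np.exp(-z ** shape)

def logq_likelihood(x, shape, scale, q):
    return logq(weibull_pdf(x, shape, scale), q).sum()

rng = np.random.default_rng(2)
x = rng.weibull(2.0, 2000) * 3.0           # Weibull(shape=2, scale=3)

shapes = np.linspace(1.0, 3.0, 41)
scales = np.linspace(2.0, 4.0, 41)
q = 0.95                                   # tuning constant (assumed value)
obj = [[logq_likelihood(x, k, s, q) for s in scales] for k in shapes]
i, j = np.unravel_index(np.argmax(obj), (41, 41))
print(shapes[i], scales[j])
```

With q < 1 the objective downweights observations with very small density values, which is the mechanism behind the insensitivity to outliers discussed in the abstract.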


METRON ◽  
2021 ◽  
Author(s):  
Carlo Cavicchia ◽  
Pasquale Sarnacchiaro

Abstract Teachers’ performances also depend on whether and how they are satisfied with their job. Therefore, Teacher Job Satisfaction must be considered a driver of teachers’ accomplishments. To plan future policies and improve the overall teaching process, it is crucial to understand which factors contribute most to Teacher Job Satisfaction. A Common Assessment Framework and Education questionnaire was administered to 163 Italian public secondary school teachers to collect data, and a second-order factor analysis was used to detect which factors impact Teacher Job Satisfaction, and to what extent. This model-based approach guarantees that the detected factors respect two important properties: unidimensionality and reliability. All the coefficients are estimated with the maximum likelihood estimation method in order to make inferences on the parameters and on the validity of the model. Moreover, a new multi-group test for higher-order factor analysis is proposed and implemented. Finally, we analyze in detail whether the factors impacting Teacher Job Satisfaction differ by gender.
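As a loose illustration of how a latent factor is recovered from questionnaire items, here is a one-factor sketch using a principal-factor approximation; this is not the paper's second-order ML model, and the loadings and sample size below are invented:

```python
import numpy as np

# Six hypothetical questionnaire items driven by one latent factor;
# the first eigenvector of the item correlation matrix approximately
# recovers the loadings (principal-factor approximation, not ML).
rng = np.random.default_rng(3)
n, p = 1000, 6
loadings = np.array([0.8, 0.7, 0.6, 0.75, 0.65, 0.7])   # assumed loadings
factor = rng.standard_normal(n)                          # latent scores
noise = rng.standard_normal((n, p)) * np.sqrt(1 - loadings ** 2)
items = factor[:, None] * loadings + noise               # unit-variance items

corr = np.corrcoef(items, rowvar=False)
eigval, eigvec = np.linalg.eigh(corr)                    # ascending order
est = eigvec[:, -1] * np.sqrt(eigval[-1])                # loading estimates
est *= np.sign(est.sum())                                # fix sign ambiguity
print(np.round(est, 2))
```

A second-order model would repeat this idea one level up, treating first-order factors as indicators of an overall Teacher Job Satisfaction factor.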


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Hisham M. Almongy ◽  
Ehab M. Almetwally ◽  
Randa Alharbi ◽  
Dalia Alnagar ◽  
E. H. Hafez ◽  
...  

This paper is concerned with the estimation of the Weibull generalized exponential distribution (WGED) parameters based on an adaptive Type-II progressive (ATIIP) censored sample. Maximum likelihood estimation (MLE), maximum product spacing (MPS), and Bayesian estimation based on Markov chain Monte Carlo (MCMC) methods are applied to find the best estimation method. A Monte Carlo simulation is used to compare the three estimation methods based on the ATIIP-censored sample, and bootstrap confidence interval estimation is also carried out. We analyze real data on single carbon fibers and electrical data to show how the schemes work in practice.
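The two frequentist objectives can be contrasted on a plain two-parameter Weibull sample; this sketch does not reproduce the paper's WGED or its adaptive Type-II progressive censoring, and the grid search is a stand-in for proper numerical optimization:

```python
import numpy as np

def weibull_cdf(x, k, s):
    return 1.0 - np.exp(-(x / s) ** k)

def log_lik(x, k, s):                    # MLE objective: sum of log-densities
    z = x / s
    return np.sum(np.log(k / s) + (k - 1) * np.log(z) - z ** k)

def log_mps(x, k, s):                    # MPS objective: log product of
    u = np.concatenate(([0.0], weibull_cdf(np.sort(x), k, s), [1.0]))
    return np.sum(np.log(np.diff(u) + 1e-12))   # spacings of fitted CDF

rng = np.random.default_rng(4)
x = rng.weibull(1.5, 500) * 2.0          # Weibull(shape=1.5, scale=2)

ks = np.linspace(1.0, 2.0, 41)
ss = np.linspace(1.5, 2.5, 41)
mle = max((log_lik(x, k, s), k, s) for k in ks for s in ss)
mps = max((log_mps(x, k, s), k, s) for k in ks for s in ss)
print(mle[1:], mps[1:])
```

MPS maximizes the geometric mean of the CDF spacings instead of the density product, which is why it remains well defined in cases where the likelihood is unbounded.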


Author(s):  
Shuguang Song ◽  
Hanlin Liu ◽  
Mimi Zhang ◽  
Min Xie

In this paper, we propose and study a new bivariate Weibull model, called the Bi-level Weibull Model, which arises when one failure occurs after the other. Under some specific regularity conditions, the reliability function of the second event can be above the reliability function of the first event, and it is always above the reliability function of the transformed first event, which is a univariate Weibull random variable. This model is motivated by a common physical feature that arises in several real applications. The two marginal distributions are a Weibull distribution and a generalized three-parameter Weibull mixture distribution. Some useful properties of the model are derived, and we also present the maximum likelihood estimation method. A real example is provided to illustrate the application of the model.


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Mohammed Haiek ◽  
Youness El Ansari ◽  
Nabil Ben Said Amrani ◽  
Driss Sarsri

In this paper, we propose a stochastic model to describe the evolution over time of the stress in a bolted mechanical structure for different thicknesses of a joint elastic piece. First, the studied structure and the numerical simulation of the experiment are presented. Next, we statistically validate the proposed stochastic model and use the maximum likelihood estimation method, based on the Euler–Maruyama scheme, to estimate the parameters of this model. Thereafter, we use the estimated model to compare the stresses, peak times, and extinction times for different thicknesses of the elastic piece. Some numerical simulations are carried out to illustrate the different results.
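The estimation idea can be sketched on a toy mean-reverting SDE rather than the paper's stress model: under the Euler–Maruyama scheme each increment is conditionally Gaussian, so the pseudo-likelihood of the drift parameter has a closed-form maximizer. All constants below are assumed values.

```python
import numpy as np

# Toy SDE dX = -theta * X dt + sigma dW, simulated with Euler–Maruyama.
rng = np.random.default_rng(5)
theta, sigma, dt, n = 1.2, 0.3, 0.01, 20000
x = np.empty(n)
x[0] = 1.0
for i in range(1, n):
    x[i] = x[i-1] - theta * x[i-1] * dt + sigma * np.sqrt(dt) * rng.standard_normal()

# Under the Euler scheme, dx_i | x_i ~ N(-theta * x_i * dt, sigma^2 * dt).
# Maximizing the resulting Gaussian likelihood in theta gives:
# theta_hat = -sum(x_i * dx_i) / (dt * sum(x_i^2)).
dx = np.diff(x)
theta_hat = -np.sum(x[:-1] * dx) / (dt * np.sum(x[:-1] ** 2))
print(round(theta_hat, 2))
```

For a richer drift, as in the paper's model, the same Gaussian pseudo-likelihood is maximized numerically instead of in closed form.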


2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Yifan Sun ◽  
Xiang Xu

As a widely used inertial device, the MEMS triaxial accelerometer exhibits zero-bias error, nonorthogonality error, and scale-factor error due to technical defects. Raw readings without calibration might seriously affect the accuracy of an inertial navigation system. Therefore, it is necessary to calibrate a MEMS triaxial accelerometer before use. This paper presents a MEMS triaxial accelerometer calibration method based on the maximum likelihood estimation method. The error model of the MEMS triaxial accelerometer is analyzed, and the optimal estimation function is established. The calibration parameters are obtained by the Newton iteration method, which is efficient and accurate. Compared with the least squares method, which estimates the parameters of a suboptimal estimation function established under the assumption that the random noise has zero mean, the parameters calibrated by the maximum likelihood estimation method are more accurate and stable. Moreover, the proposed method has low computational cost, which makes it more practical. Simulation and experimental results using a consumer low-cost MEMS triaxial accelerometer are presented to support the abovementioned advantages of the maximum likelihood estimation method. The proposed method has the potential to be applied to other triaxial inertial sensors.
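A sketch of the calibration idea with a simplified error model: diagonal scale factors and biases only (nonorthogonality omitted), a Gauss–Newton iteration standing in for the paper's Newton method, and invented true parameters. Each static reading y = k·a + b must satisfy ‖(y − b)/k‖ = g regardless of orientation, which identifies all six parameters.

```python
import numpy as np

g = 9.81
rng = np.random.default_rng(6)
k_true = np.array([1.02, 0.98, 1.05])            # assumed scale factors
b_true = np.array([0.10, -0.05, 0.20])           # assumed biases (m/s^2)

# Simulated static readings in 60 random orientations.
u = rng.standard_normal((60, 3))
a = g * u / np.linalg.norm(u, axis=1, keepdims=True)
y = a * k_true + b_true + 0.001 * rng.standard_normal(a.shape)

def residual(p):
    k, b = p[:3], p[3:]
    return np.linalg.norm((y - b) / k, axis=1) - g

p = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])     # initial guess
for _ in range(20):
    r = residual(p)
    J = np.empty((len(r), 6))                    # numerical Jacobian
    for j in range(6):
        dp = np.zeros(6)
        dp[j] = 1e-6
        J[:, j] = (residual(p + dp) - r) / 1e-6
    p -= np.linalg.solve(J.T @ J, J.T @ r)       # Gauss-Newton step
print(np.round(p, 3))
```

The gravity-norm constraint is what makes multi-orientation static data sufficient: no external reference acceleration is needed, only enough poses to make the normal equations well conditioned.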

