A simulation study of impact of muon beam performance in μSR data analysis

Author(s):  
Ziwen Pan ◽  
Li Deng ◽  
Jingyu Dong ◽  
Zhe Wang ◽  
Zebin Lin ◽  
...  
2020 ◽  
Vol 80 (5) ◽  
pp. 995-1019


Author(s):  
André Beauducel ◽  
Martin Kersting

We investigated by means of a simulation study how well methods for factor rotation can identify a two-facet simple structure. Samples were generated from orthogonal and oblique two-facet population factor models with 4 (2 factors per facet) to 12 factors (6 factors per facet). Samples drawn from orthogonal populations were submitted to factor analysis with subsequent Varimax, Equamax, Parsimax, Factor Parsimony, Tandem I, Tandem II, Infomax, and McCammon’s minimum entropy rotation. Samples drawn from oblique populations were submitted to factor analysis with subsequent Geomin rotation and a Promax-based Tandem II rotation. As a benchmark, we investigated a target rotation of the sample loadings toward the corresponding faceted population loadings. The three conditions were sample size (n = 400, 1,000), number of factors (q = 4-12), and main loading size (l = .40, .50, .60). For fewer than six orthogonal factors, Infomax and McCammon’s minimum entropy rotation yielded the highest congruence of sample loading matrices with faceted population loading matrices; for six and more factors, Tandem II rotation did. For six and more oblique factors, Geomin rotation and a Promax-based Tandem II rotation yielded the highest congruence with faceted population loadings. Analysis of data from 393 participants who completed a test for the Berlin Model of Intelligence Structure revealed that the faceted structure of this model could be identified by means of a Promax-based Tandem II rotation of task aggregates corresponding to the cross-products of the facets. Implications for the identification of faceted models by means of factor rotation are discussed.
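Loading recovery in studies of this kind is typically scored with Tucker's congruence coefficient between matching columns of the sample and population loading matrices. The following is an illustrative sketch, not the authors' code: the population pattern (4 variables per factor, main loading l = .50) and the sampling-noise level are assumptions made for the example.

```python
import numpy as np

def tucker_congruence(a, b):
    """Tucker's congruence coefficient between two loading vectors."""
    return float(np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b)))

# Hypothetical faceted population loadings: 2 factors, main loading l = .50
pop = np.zeros((8, 2))
pop[:4, 0] = 0.5   # variables loading on the first factor
pop[4:, 1] = 0.5   # variables loading on the second factor

# Sample loadings simulated as population loadings plus sampling noise
rng = np.random.default_rng(0)
sample = pop + rng.normal(0.0, 0.05, pop.shape)

# Congruence of each sample column with its population counterpart
phi = [tucker_congruence(pop[:, j], sample[:, j]) for j in range(2)]
print([round(p, 3) for p in phi])
```

Values of phi near 1.0 indicate that the rotation recovered the faceted population structure; benchmarking rotations by this coefficient mirrors the comparison reported in the abstract.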


2014 ◽  
Vol 9 (2) ◽  
pp. 202-213 ◽  
Author(s):  
Jing Cao

Abstract: There has been ongoing interest in studying wine judges' performance in evaluating wines. Most of the studies have reached a similar conclusion: a significant lack of consensus exists in wine quality ratings. However, few studies, to the author's knowledge, have provided a direct quantification of how much consensus (as opposed to randomness) exists in wine ratings. In this paper, a permutation-based mixed model is proposed to quantify randomness versus consensus in wine ratings. Specifically, wine ratings under the condition of randomness are generated with a permutation method, and wine ratings under the condition of consensus are produced by sorting the ratings for each judge. The observed wine ratings are then modeled as a mixture of ratings under randomness and ratings under consensus. This study shows that the model provides an excellent fit, which indicates that wine ratings indeed consist of a mixture of randomness and consensus. A direct measure is easily computed to quantify randomness versus consensus in wine ratings. The method is demonstrated with data from a major wine competition and a simulation study. (JEL Classifications: C10, C13, C15)
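The two reference conditions described in the abstract can be illustrated with a toy computation. This is a sketch under stated assumptions, not the paper's actual mixed model: the judge and wine counts, the rating-generation scheme, and the correlation-based mixing weight are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(42)
n_judges, n_wines = 8, 10

# Hypothetical observed ratings: a shared wine-quality signal plus judge noise
quality = rng.normal(0.0, 1.0, n_wines)
observed = quality + rng.normal(0.0, 1.0, (n_judges, n_wines))

def mean_pairwise_corr(ratings):
    """Average Pearson correlation over all pairs of judges."""
    c = np.corrcoef(ratings)
    iu = np.triu_indices(ratings.shape[0], k=1)
    return float(c[iu].mean())

# Condition of randomness: permute each judge's ratings independently
random_cond = np.array([rng.permutation(row) for row in observed])

# Condition of consensus: sort each judge's ratings so all judges agree on order
consensus_cond = np.sort(observed, axis=1)

r_rand = mean_pairwise_corr(random_cond)
r_obs = mean_pairwise_corr(observed)
r_cons = mean_pairwise_corr(consensus_cond)

# Crude mixing weight: where observed agreement falls between the two poles
w = (r_obs - r_rand) / (r_cons - r_rand)
print(round(r_rand, 2), round(r_obs, 2), round(r_cons, 2), round(w, 2))
```

A weight w near 0 would say the ratings are indistinguishable from permutation noise, while w near 1 would indicate full consensus; the paper's mixture model provides a principled version of this interpolation.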


2021 ◽  
Author(s):  
Matthew Baldwin ◽  
Hans Alves ◽  
Christian Unkelbach

According to the evaluative information ecology model of social comparison, people are more similar on their positive traits and tend to differ on their negative traits. This means that comparisons based on differences will naturally produce negative evaluations, whereas those based on similarities will produce positive evaluations. In this research we apply and extend this model to theorize about the outcomes of temporal self-comparisons. We predicted that one’s similarities across time would be evaluated positively, whereas one’s differences would be evaluated more negatively. However, because positive attributes are reinforced over time, we expected an asymmetry to emerge such that attributes unique to the past self (past differences) would be most negative. Evidence from a simulation study, 7 experiments (total N = 1,844), and an integrative data analysis supports the notion that temporal self-appraisals follow naturally from comparisons in a known information ecology. Several tests of the prevailing motivated-self-perception account did not bear fruit. We discuss the implications of these findings for temporal self-appraisal theory as well as other aspects of self and identity.


SIMULATION ◽  
2012 ◽  
Vol 88 (12) ◽  
pp. 1438-1455
Author(s):  
Ciprian Dobre

The scale, complexity and worldwide geographical spread of the Large Hadron Collider (LHC) computing and data analysis problems are unprecedented in scientific research. The complexity of processing and accessing this data is increased substantially by the size and global span of the major experiments, combined with the limited wide-area network bandwidth available. This paper discusses the latest generation of the MONARC (MOdels of Networked Analysis at Regional Centers) simulation framework, as a design and modeling tool for large-scale distributed systems applied to high-energy physics experiments. We present a simulation study designed to evaluate the capabilities of the current real-world distributed infrastructures deployed to support existing LHC physics analysis processes and the means by which the experiments band together to meet the technical challenges posed by the storage, access and computing requirements of LHC data analysis. The Compact Muon Solenoid (CMS) experiment, in particular, uses a general-purpose detector to investigate a wide range of physics. We present a simulation study designed to evaluate the capability of its underlying distributed processing infrastructure to support the physics analysis processes. The results, made possible by the MONARC model, demonstrate that the LHC infrastructures are well suited to support the data processes envisioned by the CMS computing model.


2007 ◽  
Vol 05 (04) ◽  
pp. 963-975 ◽  
Author(s):  
XING QIU ◽  
ANDREI YAKOVLEV

This commentary is concerned with a formula for the false discovery rate (FDR) which frequently serves as a basis for its estimation. This formula is valid under some quite special conditions, motivating us to further discuss probabilistic models behind the commonly accepted FDR concept with a special focus on problems arising in microarray data analysis. We also present a simulation study designed to assess the effects of inter-gene correlations on some theoretical results based on such models.
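The formula in question is commonly applied as the plug-in estimator FDR(t) ≈ m0·t / R(t), whose validity rests on assumptions such as independence across genes. A toy sketch of that estimator follows; the gene counts, the Beta-distributed alternative p-values, and treating m0 as known are all assumptions made for illustration (under inter-gene correlation, the behavior of such estimators is precisely what the commentary scrutinizes).

```python
import numpy as np

rng = np.random.default_rng(1)
m, m0 = 1000, 800                 # total genes and true nulls (m0 known here)
t = 0.05                          # p-value rejection threshold

# Independent p-values: uniform under the null, skewed toward 0 otherwise
p_null = rng.uniform(size=m0)
p_alt = rng.beta(0.1, 1.0, size=m - m0)
p = np.concatenate([p_null, p_alt])

R = int((p <= t).sum())           # total rejections
V = int((p_null <= t).sum())      # false rejections (observable in simulation)
fdp = V / max(R, 1)               # realized false discovery proportion

# Plug-in estimate based on the formula FDR(t) ~ m0 * t / R(t)
fdr_hat = m0 * t / max(R, 1)
print(round(fdp, 3), round(fdr_hat, 3))
```

With independent p-values the estimate tracks the realized proportion closely; repeating the simulation with correlated test statistics is the kind of check the authors use to probe where the formula breaks down.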

