Hypothesis Testing Procedure
Recently Published Documents


TOTAL DOCUMENTS: 26 (FIVE YEARS: 6)

H-INDEX: 5 (FIVE YEARS: 1)

Author(s):  
Mirko Signorelli ◽  
Luisa Cutillo

Community structure is a commonly observed feature of real networks. The term refers to the presence in a network of groups of nodes (communities) that feature high internal connectivity but are poorly connected to each other. Whereas the issue of community detection has been addressed in several works, the problem of validating a partition of nodes as a good community structure for a real network has received considerably less attention and remains an open issue. We propose a set of indices for community structure validation of network partitions, based on a hypothesis testing procedure that assesses the distribution of links between and within communities. Using both simulations and real data, we illustrate how the proposed indices can be employed to compare the adequacy of different partitions of nodes as community structures in a given network, to assess whether two networks share the same or similar community structures, and to evaluate the performance of different network clustering algorithms.
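A minimal sketch of the general idea, not the authors' actual indices: given an adjacency matrix and a candidate partition, count how many links fall within communities and test that count against a null in which each link lands on a uniformly random node pair. The function name and the choice of a binomial null are illustrative assumptions.

```python
# Hypothetical sketch: does a partition concentrate links within communities
# more than a uniform-random-link null would predict?
import numpy as np
from scipy.stats import binomtest

def within_link_test(adj, labels):
    """adj: symmetric 0/1 adjacency matrix; labels: community label per node."""
    adj, labels = np.asarray(adj), np.asarray(labels)
    n = adj.shape[0]
    iu = np.triu_indices(n, k=1)           # each node pair counted once
    same = labels[iu[0]] == labels[iu[1]]  # pair lies within one community?
    links = adj[iu] > 0
    m = int(links.sum())                   # total number of links
    k = int((links & same).sum())          # links falling within communities
    p_null = float(same.mean())            # share of pairs that are within-community
    # Under the null, each link hits a within-community pair with prob p_null.
    return binomtest(k, m, p_null, alternative="greater")
```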


Symmetry ◽  
2021 ◽  
Vol 13 (8) ◽  
pp. 1322
Author(s):  
Shu-Fei Wu ◽  
Wei-Tsung Chang

With the high demands on the quality of high-tech products for consumers, assuring the lifetime performance is a very important task for competitive manufacturing industries. The lifetime performance index CL is frequently used to monitor the larger-the-better lifetime performance of products. This research is related to the topic of asymmetrical probability distributions and applications across disciplines. Chen lifetime distribution with a bathtub shape or increasing failure rate function has many applications in the lifetime data analysis. We derived the uniformly minimum variance unbiased estimator (UMVUE) for CL, and we used this estimator to develop a hypothesis testing procedure of CL under a lower specification limit based on the progressive type-II censored sample. The Bayesian estimator for CL is also derived, and it is used to develop another hypothesis testing procedure. A simulation study is conducted to compare the average confidence levels for two procedures. Finally, one practical example is given to illustrate the implementation of our proposed non-Bayesian and Bayesian testing procedure.
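A simplified sketch of what such a capability test looks like, assuming the common definition CL = (mu - L) / sigma, a complete (uncensored) sample, and a bootstrap lower confidence bound. The paper's UMVUE and Bayes procedures under progressive type-II censoring of the Chen distribution are not reproduced here.

```python
# Simplified sketch: test H0: C_L <= c0 vs H1: C_L > c0 using a bootstrap
# lower confidence bound for C_L = (mean - L) / std on a complete sample.
import numpy as np

def test_CL(lifetimes, L, c0, alpha=0.05, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(lifetimes, dtype=float)
    cl_hat = (x.mean() - L) / x.std(ddof=1)
    boot = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(x, size=x.size, replace=True)
        boot[b] = (xb.mean() - L) / xb.std(ddof=1)
    lower = np.quantile(boot, alpha)   # one-sided lower confidence bound
    reject = lower > c0                # conclude lifetime performance is adequate
    return cl_hat, lower, reject
```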


2021 ◽  
Vol 66 (3) ◽  
pp. 7-21
Author(s):  
Mirosław Szreder

Increasing numbers of non-random errors are observed in contemporary sample surveys, in particular those resulting from non-response or faulty measurements (imprecise statistical observation). Until recently, the consequences of these kinds of errors had not been widely discussed in the context of hypothesis testing. Researchers focused almost entirely on sampling errors (random errors), whose magnitude decreases as the size of the random sample grows. In consequence, researchers who often use samples of very large sizes tend to overlook the influence random and non-random errors have on the results of their study. The aim of this paper is to present how non-random errors can affect the decision-making process based on the classical hypothesis testing procedure. Particular attention is devoted to cases in which researchers work with large samples. The study supports the thesis that large sample sizes make statistical tests more sensitive to non-random errors. Systematic errors, as a special case of non-random errors, increase the probability of wrongly rejecting a true hypothesis as the sample size grows. Supplementing hypothesis testing with the analysis of confidence intervals may in this context provide substantive support for the researcher in drawing accurate inferences.
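The thesis is easy to reproduce in simulation: a fixed systematic measurement bias, too small to matter substantively, makes a test of a true null hypothesis reject ever more often as the sample grows. The bias size and other numbers below are illustrative choices, not values from the paper.

```python
# Simulation of the thesis: a fixed systematic bias in measurements drives
# the rejection rate of a TRUE null hypothesis toward 1 as n grows.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mu_true, bias, sigma = 100.0, 0.5, 10.0        # H0: mu = 100 is actually true
for n in (100, 1_000, 10_000, 100_000):
    rejections = 0
    for _ in range(500):
        sample = rng.normal(mu_true + bias, sigma, n)  # biased measurements
        t, p = stats.ttest_1samp(sample, mu_true)
        rejections += p < 0.05
    print(f"n={n:>6}: rejection rate of true H0 = {rejections / 500:.2f}")
```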


Biometrika ◽  
2020 ◽  
Author(s):  
Xinbing Kong

We introduce a random-perturbation-based rank estimator of the number of factors in a large-dimensional approximate factor model. An expansion of the rank estimator demonstrates that the random perturbation reduces the biases due to the persistence of the factor series and the dependence between the factor and error series. A central limit theorem for the rank estimator, with a convergence rate higher than root $n$, yields a new hypothesis testing procedure for both one-sided and two-sided alternatives. Simulation studies verify the performance of the test.
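A loose sketch of the family of estimators involved: perturb the data matrix with random noise and read the number of factors off the eigenvalue ratios of the sample covariance. The perturbation scale and the ratio criterion below are assumptions for illustration; the paper's exact construction and its limit theory are not reproduced.

```python
# Illustrative eigenvalue-ratio rank estimate on a randomly perturbed
# data matrix (NOT the paper's exact estimator).
import numpy as np

def estimate_rank(X, kmax=10, perturb_scale=0.1, seed=0):
    """X: n x p data matrix from an approximate factor model."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    Xp = X + perturb_scale * rng.standard_normal((n, p))  # random perturbation
    eigvals = np.linalg.eigvalsh(Xp.T @ Xp / n)[::-1]     # descending order
    ratios = eigvals[:kmax] / eigvals[1:kmax + 1]
    return int(np.argmax(ratios)) + 1                      # largest spectral gap
```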


MAKILA ◽  
2019 ◽  
Vol 13 (1) ◽  
pp. 14-28
Author(s):  
Sitna Marasabessy ◽  
Bokiraiya Latuamury ◽  
Iskar Iskar ◽  
Christy C.V. Suhendy

Green open space covering at least 30% of the total area is a minimum requirement for an environmentally sustainable city. Pressure on green open space, especially the green belt along the river border, tends to increase from year to year due to growth in the urban population. Therefore, this study aims to analyze people's perceptions of the role of the green belt vegetation in the Wae Batu Gajah watershed in Ambon City. The research uses a descriptive method that describes a situation based on facts in the field, without treating the object of study, with the hypothesis testing procedure using the Chi-Square test. The results show that the community's socio-economic parameters of age, formal education, and occupation had a significant influence on the understanding of the green river border. In contrast, the gender and marital status parameters have no significant effect on understanding of the green belt border. Formal education can influence attitudes and behavior through values, character, and an understanding of a problem built up in stages in a person. The type of work a person has held for a long time will affect their mindset and behavior toward the environment. The poor have only two sources of income, salaries or informal business surpluses, to meet basic needs.
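The test referred to is the standard Chi-Square test of independence on a contingency table, cross-tabulating a socio-economic parameter against the level of understanding of the green belt. The counts below are invented purely to make the example run; they are not the study's data.

```python
# Minimal sketch of the Chi-Square independence test used in the study.
from scipy.stats import chi2_contingency

table = [[30, 12],   # e.g. formal education: low     x  understanding: low/high
         [18, 25],   #                        medium
         [ 9, 31]]   #                        high
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")  # p < 0.05 -> significant effect
```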


2019 ◽  
Vol 38 (20) ◽  
pp. 3791-3803 ◽  
Author(s):  
Corentin Segalas ◽  
Hélène Amieva ◽  
Hélène Jacqmin‐Gadda

2018 ◽  
Vol 15 (2) ◽  
pp. 1
Author(s):  
Set Foong Ng ◽  
Yee Ming Chew ◽  
Pei Eng Chng ◽  
Kok Shien Ng

Regression models are developed in various fields of application to help researchers predict certain variables based on other predictor variables. The dependent variable in a regression model is estimated from a number of independent variables. The model utility test is a hypothesis testing procedure in regression that verifies whether there is a useful relationship between the dependent variable and the independent variable. The hypothesis testing procedure that involves the p-value is commonly used in the model utility test. A new technique that involves the coefficient of determination R2 in the model utility test is developed in this paper. The effectiveness of the model utility test in testing the significance of a regression model is evaluated using a simple linear regression model with significance levels α = 0.01, 0.025, and 0.05. The study shows that a regression model declared significant by the model utility test may nevertheless fail to exhibit a strong linear relationship between the independent variable and the dependent variable. Based on the evaluation presented in this paper, the p-value approach in the model utility test is shown not to be a good technique for evaluating the significance of a regression model. The results of this study could serve as a reference for other researchers applying regression analysis in their studies.
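The gap between the two criteria is easy to demonstrate: with enough data, the model utility test (here the slope t-test reported by scipy's linregress) declares a model significant even though R2 reveals only a weak linear relationship. The simulated weak-signal data are illustrative.

```python
# Significant by the model utility test, yet weak by R^2.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(2)
n = 5_000
x = rng.normal(size=n)
y = 0.1 * x + rng.normal(size=n)           # weak true relationship
res = linregress(x, y)
print(f"p-value = {res.pvalue:.2e}")       # far below alpha = 0.05
print(f"R^2     = {res.rvalue**2:.3f}")    # yet only about 0.01
```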


2018 ◽  
Vol 31 (9) ◽  
pp. 3411-3422 ◽  
Author(s):  
Philippe Naveau ◽  
Aurélien Ribes ◽  
Francis Zwiers ◽  
Alexis Hannart ◽  
Alexandre Tuel ◽  
...  

Both climate and statistical models play an essential role in the process of demonstrating that the distribution of some atmospheric variable has changed over time and in establishing the most likely causes for the detected change. One statistical difficulty in the research field of detection and attribution resides in defining events that can be easily compared and accurately inferred from reasonable sample sizes. As many impacts studies focus on extreme events, the inference of small probabilities and the computation of their associated uncertainties quickly become challenging. In the particular context of event attribution, the authors address the question of how to compare records between the counterfactual “world as it might have been” without anthropogenic forcings and the factual “world that is.” Records are often the most important events in terms of impact and get much media attention. The authors will show how to efficiently estimate the ratio of two small probabilities of records. The inferential gain is particularly substantial when a simple hypothesis-testing procedure is implemented. The theoretical justification of such a proposed scheme can be found in extreme value theory. To illustrate this study’s approach, classical indicators in event attribution studies, like the risk ratio or the fraction of attributable risk, are modified and tailored to handle records. The authors illustrate the advantages of their method through theoretical results, simulation studies, temperature records in Paris, and outputs from a numerical climate model.
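A crude empirical sketch of the quantity of interest: the ratio of the probabilities, in the factual and counterfactual worlds, of exceeding an observed record. The simulated samples and the record value are invented, and the paper's extreme-value-theory estimator and its uncertainty quantification are not reproduced.

```python
# Empirical risk ratio for exceeding a record, factual vs counterfactual.
import numpy as np

rng = np.random.default_rng(3)
counterfactual = rng.normal(0.0, 1.0, 10_000)  # world without anthropogenic forcings
factual = rng.normal(0.5, 1.0, 10_000)         # shifted "world that is"
record = 3.0                                   # an observed record value

p1 = (factual > record).mean()                 # small probability, factual world
p0 = (counterfactual > record).mean()          # even smaller, counterfactual world
print(f"p1={p1:.4f}, p0={p0:.4f}, risk ratio ~= {p1 / max(p0, 1e-12):.1f}")
```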

