statistical hypothesis test
Recently Published Documents

TOTAL DOCUMENTS: 64 (FIVE YEARS: 19)

H-INDEX: 5 (FIVE YEARS: 1)

2021 ◽  
Author(s):  
Christian Zeman ◽  
Christoph Schär

Abstract. Since their first operational application in the 1950s, atmospheric numerical models have become essential tools in weather and climate prediction. As such, they are subject to continuous changes, thanks to advances in computer systems, numerical methods, more and better observations, and the ever-increasing knowledge about Earth's atmosphere. Many of the changes in today's models relate to seemingly innocuous modifications associated with minor code rearrangements, changes in hardware infrastructure, or software updates. Such changes are not supposed to significantly affect the model. However, this is difficult to verify, because our atmosphere is a chaotic system in which even a tiny change can have a big impact on individual simulations. Overall, this represents a serious challenge for a consistent model development and maintenance framework. Here we propose a new methodology for quantifying and verifying the impacts of minor changes to an atmospheric model or its underlying hardware/software system, using a set of simulations with slightly different initial conditions in combination with a statistical hypothesis test. The methodology can assess the effects of model changes on almost any output variable over time and can be used with different underlying statistical hypothesis tests. We present first applications of the methodology with a regional weather and climate model, including the verification of a major system update of the underlying supercomputer. While providing very robust results, the methodology shows great sensitivity to even tiny changes. Results show that changes are often only detectable during the first hours, which suggests that short-term simulations (days to months) are best suited for the methodology, even when addressing long-term climate simulations. We also show that the choice of the underlying statistical hypothesis test is not critical and that the methodology already works well at coarse resolutions, making it computationally inexpensive and therefore an ideal candidate for automated testing.
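
As an illustration of the core idea (a minimal sketch with assumed ensemble sizes and a placeholder output variable, not the authors' exact procedure), two ensembles of perturbed simulations, one from the reference system and one from the modified system, can be compared at every output time with a two-sample hypothesis test:

```python
# Minimal sketch: compare two ensembles of an output variable, time step by
# time step, with a two-sample hypothesis test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical ensembles: 20 members x 48 hourly outputs of some variable,
# generated from slightly perturbed initial conditions.
reference = rng.normal(loc=280.0, scale=1.0, size=(20, 48))  # original model/system
candidate = rng.normal(loc=280.0, scale=1.0, size=(20, 48))  # after a minor change

alpha = 0.05
rejections = []
for t in range(reference.shape[1]):
    # Mann-Whitney U test; the abstract notes the choice of test is not critical.
    _, p = stats.mannwhitneyu(reference[:, t], candidate[:, t], alternative="two-sided")
    rejections.append(p < alpha)

# Under the null hypothesis (no real change) roughly alpha of the time steps
# reject by chance; a systematically higher rejection rate flags a significant
# model change.
print(f"rejection rate: {np.mean(rejections):.2f}")
```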


Author(s):  
Ganut Muharromi ◽  
Slamet Eko Budi Santoso ◽  
Suryo Budi Santoso ◽  
Bima Cinintya Pratama

The purpose of this study was to determine the effect of debt policy, free cash flow, liquidity, and sales growth on the financial performance of mining companies listed on the Indonesia Stock Exchange (IDX) in the 2016-2019 period. The independent variables in this study are debt policy, free cash flow, liquidity, and sales growth, while the dependent variable is financial performance. The population consists of mining companies listed on the IDX in the 2016-2019 period. The data collection technique used was purposive sampling, which yielded 72 samples that fit the criteria. The data analysis techniques used in this research are descriptive statistics, classical assumption tests, multiple regression analysis, and statistical hypothesis tests. The results show that free cash flow has a positive effect on financial performance, while debt policy, liquidity, and sales growth have no effect on financial performance.
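
A minimal sketch of this kind of analysis in Python, using synthetic placeholder data and illustrative variable names rather than the study's dataset:

```python
# Multiple linear regression with t-tests on each coefficient, mirroring the
# described analysis of debt policy, free cash flow, liquidity and sales growth
# against financial performance. All data below are synthetic placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 72  # number of observations meeting the sampling criteria
df = pd.DataFrame({
    "debt_policy": rng.normal(size=n),     # e.g., debt-to-equity ratio
    "free_cash_flow": rng.normal(size=n),
    "liquidity": rng.normal(size=n),       # e.g., current ratio
    "sales_growth": rng.normal(size=n),
})
# Placeholder outcome (e.g., ROA) driven mostly by free cash flow
df["financial_performance"] = 0.5 * df["free_cash_flow"] + rng.normal(size=n)

X = sm.add_constant(df[["debt_policy", "free_cash_flow", "liquidity", "sales_growth"]])
model = sm.OLS(df["financial_performance"], X).fit()
print(model.summary())  # per-coefficient t statistics and p-values

# A coefficient with p < 0.05 (free cash flow in the study) is interpreted as
# having a statistically significant effect on financial performance.
```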


2021 ◽  
Vol 40 (5) ◽  
pp. 8665-8681
Author(s):  
Radhia Fezai ◽  
Majdi Mansouri ◽  
Kamaleldin Abodayeh ◽  
Hazem Nounou ◽  
Mohamed Nounou ◽  
...  

This paper aims at improving the operation of water distribution networks (WDN) by developing a leak monitoring framework. To this end, an online statistical hypothesis test for leak detection is proposed. In the developed technique, the exponentially weighted online reduced kernel generalized likelihood ratio test (EW-ORKGLRT), the modeling phase is performed using a reduced kernel principal component analysis (KPCA) model, which keeps the otherwise high computational cost manageable. The computed model is then fed to the EW-ORKGLRT chart for leak detection. The proposed approach extends the ORKGLRT method by applying exponential weights to the residuals in the moving window, which can further enhance leak detection performance for small and moderate leaks. The main advantages of the developed method are that it reduces the computational time required for detecting leaks and that it updates the KPCA model according to the dynamic changes of the process. The developed method's performance is evaluated and compared to conventional techniques using simulated WDN data. The selected performance criteria are detection rate, false alarm rate, and CPU time.
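
The following is a deliberately simplified sketch of the general monitoring idea, residuals from a kernel PCA model tracked with an exponentially weighted statistic, and not the authors' EW-ORKGLRT implementation; the data, weight, and threshold are assumptions:

```python
# Simplified illustration: fit a kernel PCA model on fault-free data, compute
# reconstruction residuals for new samples, and track an exponentially weighted
# statistic against a threshold estimated from training residuals.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(1)
train = rng.normal(size=(500, 8))   # hypothetical fault-free WDN features
test = rng.normal(size=(200, 8))
test[120:] += 0.8                   # injected "leak"-like shift

kpca = KernelPCA(n_components=4, kernel="rbf", fit_inverse_transform=True)
kpca.fit(train)

def residual_norm(model, X):
    # squared reconstruction error in the input space
    Xr = model.inverse_transform(model.transform(X))
    return np.sum((X - Xr) ** 2, axis=1)

lam = 0.2                           # exponential weight (assumption)
r_train = residual_norm(kpca, train)
threshold = r_train.mean() + 3 * r_train.std()

ew = r_train.mean()
for i, r in enumerate(residual_norm(kpca, test)):
    ew = lam * r + (1 - lam) * ew   # exponentially weighted residual statistic
    if ew > threshold:
        print(f"possible leak flagged at sample {i}")
        break
```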


2021 ◽  
Vol 11 (3) ◽  
pp. 697-702
Author(s):  
S. Jayanthi ◽  
C. R. Rene Robin

In this study, DNA microarray data are analyzed from a signal processing perspective for cancer classification. An adaptive wavelet transform, the Empirical Wavelet Transform (EWT), is applied in a block-by-block procedure to characterize the microarray data. The EWT wavelet basis depends on the input data rather than being predetermined as in conventional wavelets; thus, EWT gives sparser representations than conventional wavelets. The characterization of the microarray data is carried out block by block with predefined block sizes in powers of 2, ranging from 128 to 2048. After characterization, a statistical hypothesis test is employed to select the informative EWT coefficients. Only the selected coefficients are used for Microarray Data Classification (MDC) by a Support Vector Machine (SVM). Computational experiments on five microarray datasets (colon, breast, leukemia, CNS, and ovarian) test the developed cancer classification system. The obtained results demonstrate that EWT coefficients combined with an SVM form an effective approach, with no misclassification in the MDC system.
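
A minimal sketch of the selection-and-classification stage, assuming the EWT coefficients have already been computed and using a per-coefficient two-sample t-test as one plausible choice of hypothesis test (the data here are synthetic placeholders):

```python
# Hypothesis-test-based feature selection followed by SVM classification.
import numpy as np
from scipy import stats
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
ewt_coeffs = rng.normal(size=(62, 2048))   # hypothetical EWT coefficients (samples x coeffs)
labels = rng.integers(0, 2, size=62)       # tumor vs. normal (hypothetical)

# Two-sample t-test per coefficient between the two classes
_, pvals = stats.ttest_ind(ewt_coeffs[labels == 0], ewt_coeffs[labels == 1], axis=0)
selected = pvals < 0.01                    # keep only informative coefficients

clf = SVC(kernel="linear")
scores = cross_val_score(clf, ewt_coeffs[:, selected], labels, cv=5)
print(f"CV accuracy: {scores.mean():.2f}")
```

In practice the selection step would be nested inside the cross-validation folds to avoid selection bias.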


2021 ◽  
Author(s):  
Abhijit Mahesh Chinchani ◽  
Mahesh Menon ◽  
Meighen Roes ◽  
Heungsun Hwang ◽  
Paul Allen ◽  
...  

Cognitive mechanisms hypothesized to underlie hallucinatory experiences (HEs) include dysfunctional source monitoring, heightened signal detection, and impaired attentional processes. HEs can be very pronounced in psychosis, but similar experiences also occur in nonclinical populations. Using data from an international multisite study on nonclinical subjects (N = 419), we described the overlap between two sets of variables - one measuring cognition and the other HEs - at the level of individual items, allowing extraction of item-specific signal that might be considered off-limits when summary scores are analyzed. This involved using a statistical hypothesis test at the multivariate level, and variance constraints, dimension reduction, and split-half reliability checks at the level of individual items. The results showed that (1) modality-general HEs involving sensory distortions (hearing voices/sounds, being troubled by voices, everyday things looking abnormal, sensations of presence/movement) were associated with more liberal auditory signal detection, and (2) HEs involving experiences of sensory overload and vivid images/imagery (viz., HEs for faces and intense daydreams) were associated with other-ear distraction and reduced laterality in dichotic listening. Based on these results, we conclude that the overlap between HEs and cognition variables can be conceptualized as modality-general and bi-dimensional: one dimension involving distortions and the other involving overload or intensity.
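
One generic way to run a hypothesis test of multivariate overlap between two item sets, not necessarily the authors' exact procedure, is a permutation test on the first canonical correlation; the sketch below uses synthetic placeholder data:

```python
# Permutation-based multivariate test of association between two item blocks,
# using the first canonical correlation as the test statistic.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(3)
hall_items = rng.normal(size=(419, 12))  # hypothetical HE item scores
cog_items = rng.normal(size=(419, 10))   # hypothetical cognition item scores

def first_canonical_corr(X, Y):
    u, v = CCA(n_components=1).fit_transform(X, Y)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

observed = first_canonical_corr(hall_items, cog_items)
# Null distribution: shuffle the rows of one block to break the association
perm = [first_canonical_corr(hall_items, rng.permutation(cog_items))
        for _ in range(500)]
p_value = np.mean(np.array(perm) >= observed)
print(f"observed r = {observed:.3f}, permutation p = {p_value:.3f}")
```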


2020 ◽  
Vol 4 (3) ◽  
pp. 293
Author(s):  
Lina Novita ◽  
Entis Sutisna ◽  
Khansa Rohadatul’Aisy Rabbani

The background of this research was the low learning outcomes on the human and environment sub-theme. The purpose of this research was to increase the effectiveness of learning outcomes using animated learning media (experimental class) compared with no media (control class) on the human and environment sub-theme in fifth grade. The subjects of this research were fifth-grade students, totaling 74 students. The analysis technique used prerequisite tests, which included the normality test and homogeneity test, and a hypothesis test using the t-test. The results of this research showed that there was a significant difference in the mean N-gain values of the learning outcomes between the experimental class and the control class. The average N-gain score in the experimental class was 74 with a learning-outcome mastery of 97.29%, while the average N-gain score in the control class was 64 with a learning-outcome mastery of 81.08%. The statistical hypothesis test showed that the calculated t value (3.25) was higher than the t table value (2.000) with df = 72 at a significance level of α = 0.05; these results show that Ho was rejected and Ha was accepted. Based on the results of this research, it can be concluded that the use of animated learning media influenced the learning outcomes on the sub-theme of humans and the environment in fifth grade at elementary school.
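
A minimal sketch of the reported comparison, an independent-samples t-test on N-gain scores, with synthetic placeholder data (the study's actual scores are not reproduced here):

```python
# Independent-samples t-test comparing N-gain scores of the experimental
# (animated media) and control classes. The scores below are placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
ngain_experiment = rng.normal(loc=74, scale=10, size=37)  # experimental class
ngain_control = rng.normal(loc=64, scale=10, size=37)     # control class

t_stat, p_value = stats.ttest_ind(ngain_experiment, ngain_control)
t_table = 2.000  # critical value for df = 72, alpha = 0.05 (two-tailed)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Reject Ho if |t| exceeds the critical value (the study reports 3.25 > 2.000).
if abs(t_stat) > t_table:
    print("Ho rejected: the media had a significant effect on learning outcomes")
```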


Computation ◽  
2020 ◽  
Vol 8 (2) ◽  
pp. 59 ◽  
Author(s):  
Giovanni Delnevo ◽  
Silvia Mirri ◽  
Marco Roccetti

As we prepare to emerge from an extensive and unprecedented lockdown period, due to the COVID-19 virus infection that hit the Northern regions of Italy with Europe's highest death toll, it becomes clear that what has gone wrong rests upon a combination of demographic, healthcare, political, business, organizational, and climatic factors that are out of our scientific scope. Nonetheless, looking at this problem from a patient's perspective, it is indisputable that risk factors considered to be associated with the development of the virus disease include older age, a history of smoking, hypertension, and heart disease. While several studies have already shown that many of these conditions can also be favored by protracted exposure to air pollution, there has recently been a surge of negative commentary against authors who have correlated the fatal consequences of COVID-19 (also) with exposure to specific air pollutants. Well aware that understanding the real connection between the spread of this fatal virus and air pollutants would require many other investigations at a level appropriate to the scale of this phenomenon (e.g., biological, chemical, and physical), we present the results of a study in which the time series of daily PM2.5, PM10, and NO2 values were considered, and the Granger causality statistical hypothesis test was used to determine the presence of a possible correlation with the series of new daily COVID-19 infections in Emilia-Romagna in the period February–April 2020. Results taken both before and after the governmental lockdown decisions show a clear correlation, although strictly from a Granger causality perspective. Moving beyond the relevance of our results towards the real extent of such a correlation, our scientific efforts aim at reinvigorating the debate on a relevant case that should not remain unsolved or uninvestigated.
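
A hedged sketch of this kind of analysis with statsmodels' Granger causality test; the synthetic series, column names, and maximum lag are assumptions for illustration, not the study's data:

```python
# Granger causality test: does a pollutant series help predict new daily
# infections beyond the infections' own past values?
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(5)
days = 90  # roughly February-April 2020
pm25 = rng.gamma(shape=4.0, scale=8.0, size=days)                       # placeholder PM2.5 series
infections = np.roll(pm25, 10) * 0.5 + rng.normal(scale=5, size=days)   # placeholder infections

data = pd.DataFrame({"new_infections": infections, "PM2_5": pm25})

# First column: the series to be explained; second column: the candidate cause.
results = grangercausalitytests(data[["new_infections", "PM2_5"]], maxlag=7)
for lag, (tests, _) in results.items():
    print(f"PM2.5 -> new_infections, lag {lag}: p = {tests['ssr_ftest'][1]:.3f}")
```

Swapping the column order tests the reverse direction, which is how spurious two-way relationships are usually screened out.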


2020 ◽  
Author(s):  
Tommy Chan ◽  
Runlong Cai ◽  
Lauri R. Ahonen ◽  
Yiliang Liu ◽  
Ying Zhou ◽  
...  

Abstract. Determining the particle size distribution of atmospheric aerosol particles is an important component of understanding nucleation, formation, and growth. This is particularly crucial in the sub-3 nm range because of the growth of newly formed nanoparticles. The challenge in recovering the size distribution is due to its complexity and the fact that not many instruments currently measure in this size range. In this study, we used the particle size magnifier (PSM) to measure atmospheric aerosols. Each event was classified into one of three types: new particle formation (NPF), non-event, and haze. We then compared four inversion methods (step-wise, kernel, Hagen and Alofs, and expectation-maximization) to determine their feasibility for recovering the particle size distribution. In addition, we proposed a method to pre-treat measured data and introduced a simple test to estimate the efficacy of the inversion itself. Results showed that all four methods inverted NPF events well, but the step-wise and kernel methods fared poorly when inverting non-event and haze events. This was due to their algorithms: when encountering noisy data (e.g., air-mass fluctuations) and under the influence of larger particles, these methods overestimated the size distribution and reported artificial particles during inversion. Therefore, using a statistical hypothesis test to discard noisy scans prior to inversion is an important first step towards achieving a good size distribution. As a first step after inversion, it is ideal to compare the integrated concentration to the raw estimate (i.e., the concentration difference between the lowest and highest supersaturation) to ascertain whether the inversion itself is sound. Finally, based on the analysis of the inversion methods, we provide recommendations and codes related to PSM data inversion.
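
A small sketch of the suggested post-inversion sanity check, with hypothetical variable names and a 20% tolerance chosen purely for illustration:

```python
# Compare the integrated inverted concentration with the raw estimate, i.e. the
# concentration difference between the lowest and highest supersaturation
# settings of the PSM, to judge whether the inversion itself is sound.
import numpy as np

def inversion_is_sound(dndlogdp, dlogdp, conc_low_sat, conc_high_sat, rel_tol=0.2):
    integrated = np.sum(dndlogdp * dlogdp)       # total number concentration from inversion
    raw_estimate = conc_high_sat - conc_low_sat  # raw concentration difference
    return abs(integrated - raw_estimate) <= rel_tol * abs(raw_estimate)

# Hypothetical scan: four size bins between roughly 1 and 3 nm
dndlogdp = np.array([4.0e3, 2.5e3, 1.5e3, 0.8e3])  # dN/dlogDp per bin (cm^-3)
dlogdp = np.array([0.12, 0.12, 0.12, 0.12])        # bin widths in logDp
print(inversion_is_sound(dndlogdp, dlogdp, conc_low_sat=150.0, conc_high_sat=1200.0))
```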

