statistical hypotheses
Recently Published Documents


TOTAL DOCUMENTS

359
(FIVE YEARS 81)

H-INDEX

30
(FIVE YEARS 5)

2022 ◽  
Vol 12 (1) ◽  
pp. 204
Author(s):  
Ronald Héctor Révolo Acevedo ◽  
Bimael Justo Quispe Reymundo ◽  
Cirilo Walter Huamán Huamán ◽  
Julio Cesar Álvarez Orellana ◽  
Emilio Osorio Berrocal ◽  
...  

Attitude is based on self-concept, or the degree to which an individual perceives himself or herself as an integral part of the natural environment and of the situations to which he or she relates. Knowledge involves obtaining, analyzing and systematizing information about one's natural environment, which is an important step for personal understanding and development. The objective of the research was to analyze and relate environmental knowledge and attitude towards urban-sustainable eco-efficiency among 382 inhabitants of the Chilca district. Two questionnaires [knowledge and attitude] of 23 questions each were designed and administered in personal interviews to 382 people between 20 and 50 years of age. Responses used a 5-point Likert scale, and the relationship was corroborated by testing statistical hypotheses with Spearman's rho and Student's t. Results: for environmental knowledge towards eco-efficiency, 267, 290 and 225 respondents were fully in agreement regarding air and soil, biological diversity, and climate change, respectively. For environmental attitude towards eco-efficiency, 200, 192 and 191 were fully in agreement on its cognitive, affective and conative aspects, respectively. The relationship between knowledge and attitude yielded rho = 0.47, a good correlation, with Student's t = 10.35 and p = 2.2e-16, affirming that there is a relationship between environmental attitude and environmental knowledge towards eco-efficiency. The inhabitants of Chilca affirm that knowledge and attitude are important and agree on mitigating climate change and on reducing impacts on and conserving biodiversity, air, soil and water from a cognitive, affective and conative perspective, forming an eco-efficient, polymathic and environmental psychology for urban sustainability.   Received: 3 October 2021 / Accepted: 26 November 2021 / Published: 3 January 2022
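For a concrete sense of the test described in this abstract, a minimal Python sketch (with made-up survey scores, not the study's responses) computes Spearman's rho between two questionnaire totals along with the corresponding t statistic and p-value:

```python
import numpy as np
from scipy import stats

# Hypothetical Likert-scale totals for knowledge and attitude
# (assumed data, not the study's actual responses).
rng = np.random.default_rng(0)
knowledge = rng.integers(23, 116, size=382)              # 23 items, 1-5 scale
attitude = knowledge + rng.integers(-20, 21, size=382)

# Spearman's rank correlation and its p-value.
rho, p_value = stats.spearmanr(knowledge, attitude)

# Equivalent t statistic for the correlation: t = rho * sqrt((n-2)/(1-rho^2)).
n = len(knowledge)
t_stat = rho * np.sqrt((n - 2) / (1 - rho**2))

print(f"rho = {rho:.2f}, t = {t_stat:.2f}, p = {p_value:.3g}")
```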


2021 ◽  
Vol 15 (2) ◽  
pp. 1-12
Author(s):  
Jaroslav Jr. Belas ◽  
Katarina Zvarikova ◽  
Josef Marousek ◽  
Zdenko Metzker

Research background: The issue of personnel risk management has received less attention from scholars than other types of managerial risks. Employees represent important capital for an enterprise, which can significantly influence its performance and success. Purpose of the article: The aim of the study is to present and quantify significant factors of personnel risk in the SME sector. Part of the goal is to compare entrepreneurs’ approaches to these factors based on company size and the entrepreneurs’ education and age. Methods: The empirical research was conducted on a sample of 250 respondents from Slovakia via an online questionnaire. The statistical hypotheses were tested using descriptive statistics (percentages) and Pearson’s statistics (chi-square and Z-score). Findings & Value added: The research confirmed that personnel risk posed a significant business risk for SMEs, as up to 32% of all the respondents rated this risk as unacceptable. Neither employee turnover nor employees’ error rate represented a significant problem for SMEs at the time of the study. Only a small proportion of the respondents agreed with the opinion that their employees attempted to improve their performance and that competition prevailed among them. The research demonstrated that some differences in entrepreneurs’ overall attitudes were related to their age and education. Additionally, differences were identified in the entrepreneurs’ positive attitudes towards individual claims based on their education and age. Finally, the results suggest that the role of personnel management in the effective handling of personnel risks in the SME environment could be an interesting topic for scientific research.
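The chi-square and Z-score tests named in the Methods can be illustrated with a short, hedged Python sketch; the counts below are assumptions for demonstration, not the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical counts (assumed, not the study's data): agreement with a claim
# about personnel risk, split by company size (micro vs. small/medium).
agree = np.array([[45, 80],    # micro: agree / disagree
                  [55, 70]])   # small/medium: agree / disagree

# Chi-square test of independence between company size and agreement.
chi2, p, dof, expected = stats.chi2_contingency(agree)

# Two-proportion z-test for the same comparison.
p1, p2 = agree[0, 0] / agree[0].sum(), agree[1, 0] / agree[1].sum()
pooled = agree[:, 0].sum() / agree.sum()
se = np.sqrt(pooled * (1 - pooled) * (1 / agree[0].sum() + 1 / agree[1].sum()))
z = (p1 - p2) / se
print(f"chi2 = {chi2:.2f} (p = {p:.3f}), z = {z:.2f}")
```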


2021 ◽  
Vol 25 (5-6) ◽  
pp. 12-15
Author(s):  
Н.Р. Кербаж ◽  
С.І. Панасенко

Introduction. Acute pancreatitis (AP) is one of the most common diseases of the digestive system requiring hospitalization. To date, the problem of stratification and differential diagnosis of AP in the early stages remains unresolved, which encourages the search for new methods of diagnosing and predicting the severity of AP. Aim. To evaluate the possibility of creating a clinically oriented system of stratification and prognosis of AP on the basis of dynamic changes in microcirculation depending on the duration of the disease and the severity of AP. Materials and methods. Assessment of the state of microcirculation (MC) of patients by laser Doppler flowmetry (LDF) was performed with the “LAKK-02” device. Kruskal-Wallis non-parametric analysis of variance and the median test were used to test statistical hypotheses when comparing independent samples. Pairwise comparison of independent samples was performed using the Mann-Whitney U test. Results. The study determined the indicators of MC in patients with different AP severity degrees on the first day of the disease. The microcirculation parameter (MP) in patients with mild, moderate, and severe AP was 3.9, 3.8 and 6.8 perfusion units (p.u.), respectively. The blood flow modulation rate (σ) was 0.52, 0.54 and 0.69 p.u. in mild, moderate, and severe AP. In our study, the coefficient of variation (Kv) averaged 17.3%, 20.0% and 11.7% in patients with mild, moderate, and severe AP, respectively. Conclusions. LDF in AP is an informative method of diagnosing the state of MC, which is a universal link in all pathophysiological reactions of the organism. Changes in MC in AP depend on the severity of AP and the period of the disease. The pathophysiological microcirculatory phenomena revealed on the first day of the disease offer prospects for early clinical differentiation between the moderate and severe forms of AP within the so-called group of “destructive forms”.
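A minimal Python sketch of the testing workflow described (Kruskal-Wallis across severity groups, followed by pairwise Mann-Whitney U comparisons), using simulated rather than clinical data:

```python
import numpy as np
from scipy import stats

# Hypothetical microcirculation parameter (perfusion units) for three severity
# groups on day one (assumed data, not the study's measurements).
rng = np.random.default_rng(1)
mild = rng.normal(3.9, 0.8, 20)
moderate = rng.normal(3.8, 0.8, 20)
severe = rng.normal(6.8, 1.2, 15)

# Kruskal-Wallis test across all three groups.
h_stat, p_kw = stats.kruskal(mild, moderate, severe)

# Pairwise Mann-Whitney U tests.
u_mm, p_mm = stats.mannwhitneyu(mild, moderate)
u_ms, p_ms = stats.mannwhitneyu(moderate, severe)

print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.3g}")
print(f"mild vs moderate: p = {p_mm:.3g}; moderate vs severe: p = {p_ms:.3g}")
```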


2021 ◽  
Vol 27 (12) ◽  
pp. 2719-2745
Author(s):  
Mikhail V. POMAZANOV

Subject. This article deals with validating the consistency of rating-based model forecasts. Objectives. The article aims to provide developers and validators of rating-based models with a practical fundamental test for benchmarking the estimated default probability values produced by the models used in a rating system. Methods. The study uses the classical interval approach to testing statistical hypotheses, focused on the subject area of calibration of rating systems. Results. In addition to the generally accepted tests of the correspondence between the predicted default probabilities of credit risk objects and the historically realized values, the article proposes a new statistical test that corrects the shortcomings of the accepted ones and is focused on "diagnosing" the consistency of the discrimination of objects implemented by the rating model. Examples are given of recognizing the reasons for a negative test result and of the negative consequences for lending if the current settings of the rating model are retained. Beyond the bias in the assessment of the total default frequency in the loan portfolio, the proposed method makes it possible to objectively reveal inadequate discrimination of borrowers by a calibrated rating model, i.e. to diagnose the "disease" of the rating model. Conclusions and Relevance. The new practical benchmark test makes it possible to reject the hypothesis that the rating model's default probability estimates are consistent, at a given confidence level and with the available historical data. The test has the advantage of practical interpretability: based on its results, a conclusion can be drawn about the direction in which the model should be corrected. The proposed test can be used by a bank in the internal validation of its own rating models, which the Bank of Russia requires for approaches based on internal ratings.
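The article's own test is not reproduced here, but a standard calibration check of the kind it builds on (a per-grade binomial test of predicted versus realized default frequencies) can be sketched as follows, with purely hypothetical grade data:

```python
from scipy import stats

# Hypothetical rating grades: (name, predicted PD, borrowers, observed defaults).
# These numbers are assumptions for illustration, not the article's data.
grades = [
    ("A", 0.005, 400, 1),
    ("B", 0.02, 300, 9),
    ("C", 0.08, 150, 20),
]

# Two-sided binomial calibration test per grade: does the observed default
# count contradict the predicted probability of default?
for name, pd_hat, n, defaults in grades:
    p_value = stats.binomtest(defaults, n, pd_hat).pvalue
    print(f"grade {name}: {defaults}/{n} defaults, predicted PD {pd_hat:.3f}, p = {p_value:.3f}")
```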


Author(s):  
Daniel Berner ◽  
Valentin Amrhein

A paradigm shift away from null hypothesis significance testing seems in progress. Based on simulations, we illustrate some of the underlying motivations. First, P-values vary strongly from study to study, hence dichotomous inference using significance thresholds is usually unjustified. Second, statistically significant results have overestimated effect sizes, a bias declining with increasing statistical power. Third, statistically non-significant results have underestimated effect sizes, and this bias gets stronger with higher statistical power. Fourth, the tested statistical hypotheses generally lack biological justification and are often uninformative. Despite these problems, a screen of 48 papers from the 2020 volume of the Journal of Evolutionary Biology exemplifies that significance testing is still used almost universally in evolutionary biology. All screened studies tested the default null hypothesis of zero effect with the default significance threshold of p = 0.05, none presented a pre-planned alternative hypothesis, and none calculated statistical power and the probability of ‘false negatives’ (beta error). The papers reported 49 significance tests on average. Of 41 papers that contained verbal descriptions of a ‘statistically non-significant’ result, 26 (63%) falsely claimed the absence of an effect. We conclude that our studies in ecology and evolutionary biology are mostly exploratory and descriptive. We should thus shift from claiming to “test” specific hypotheses statistically to describing and discussing many hypotheses (effect sizes) that are most compatible with our data, given our statistical model. We already have the means for doing so, because we routinely present compatibility (“confidence”) intervals covering these hypotheses.
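A small simulation in the spirit of the one described (parameters assumed, not the authors' code) illustrates the p-value variability and the effect-size inflation among statistically significant results:

```python
import numpy as np
from scipy import stats

# Repeated two-sample experiments with a true standardized effect of 0.5.
rng = np.random.default_rng(42)
true_effect, n, n_sim = 0.5, 20, 5000
p_values, estimates = [], []
for _ in range(n_sim):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(true_effect, 1.0, n)
    t, p = stats.ttest_ind(b, a)
    p_values.append(p)
    estimates.append(b.mean() - a.mean())

p_values, estimates = np.array(p_values), np.array(estimates)
sig = p_values < 0.05
print(f"share significant (power): {sig.mean():.2f}")
print(f"mean effect, all runs:         {estimates.mean():.2f}")
print(f"mean effect, significant only:  {estimates[sig].mean():.2f}")   # inflated
print(f"mean effect, non-significant:   {estimates[~sig].mean():.2f}")  # deflated
```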


2021 ◽  
Vol 102 (6) ◽  
pp. 843-854
Author(s):  
N V Ivanova ◽  
V S Belov ◽  
A I Samarkin ◽  
Z N Tretyakevich ◽  
V M Mikushev ◽  
...  

Aim. To analyze COVID-19 comorbidities and their impact on disease course and the risk of unfavorable outcomes. Methods. This study examined a group of 110 patients aged 32 to 97 who were admitted to the intensive care unit of the Pskov Regional Infectious Diseases Hospital between October 7, 2020 and March 23, 2021. The mean age of patients was 65 years; 51% (56 people) were male. The study recorded age, comorbidities on a binary scale (yes/no), course of the disease, degree of lung injury, hospital length of stay, and treatment outcome. The impact of comorbidities on disease severity and outcomes was assessed using logistic regression analysis. Results. The regional sample of patients showed an increased hospital mortality rate compared with the data of the ACTIV registry (33.5% versus 7.6%). In the regional cohort, chronic respiratory diseases affected the fatal outcome of COVID-19 2.7 times less than reported in the Russian registry. The effect of endocrine and thrombotic circulatory system diseases was generally close to that of the registry. Concomitant cardiovascular diseases affected COVID-19 mortality in the regional cohort two times less than in the registry (in patients of the region, the risk of mortality increased by 2.066 times). The reliability of the conclusions is confirmed by tests of statistical hypotheses with significance levels below 5%. Conclusion. The study shows a statistically significant effect of comorbidities on COVID-19 outcomes; the specificity of the results is related to the sampling characteristics and the regional component.
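The logistic regression analysis referred to above can be sketched in Python with statsmodels; the data generated below are hypothetical and only illustrate how an odds ratio for a comorbidity would be estimated:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical binary data (assumed, not the study's records): outcome = death,
# predictors = age and a cardiovascular-comorbidity flag.
rng = np.random.default_rng(7)
n = 110
age = rng.integers(32, 98, n)
cardio = rng.integers(0, 2, n)
logit = -8 + 0.08 * age + 0.7 * cardio
outcome = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Logistic regression of outcome on comorbidity, adjusted for age.
X = sm.add_constant(np.column_stack([age, cardio]))
model = sm.Logit(outcome, X).fit(disp=0)

# Exponentiated coefficients are odds ratios, e.g. OR ~ 2 means roughly
# doubled odds of the unfavorable outcome.
print(model.summary())
print("odds ratios:", np.exp(model.params))
```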


2021 ◽  
Vol 9 (4) ◽  
pp. 65
Author(s):  
Daniela Rybárová ◽  
Helena Majdúchová ◽  
Peter Štetka ◽  
Darina Luščíková

The aim of this paper is to assess the reliability of alternative default prediction models under local conditions and to compare them with other generally known and globally disseminated default prediction models, such as Altman’s Z-score, the Quick Test, the Creditworthiness Index, and Taffler’s Model. The comparison was carried out on a sample of 90 companies operating in the Slovak Republic over a period of 3 years (2016, 2017, and 2018), with a narrower focus on three sectors: construction, retail, and tourism, using alternative default prediction models such as the CH-index, G-index, Binkert’s Model, HGN2 Model, M-model, Gulka’s Model, Hurtošová’s Model, the Model of Delina and Packová, and Binkert’s Model. To verify the reliability of these models, tests of statistical hypotheses were used together with type I and type II error rates. According to the research results, the highest reliability and accuracy were achieved by an alternative local model, that of Delina and Packová. The least reliable results within the list of models were reported by the most globally disseminated model, Altman’s Z-score. Significant differences between sectors were identified.
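Type I and type II error rates for a default prediction model can be computed as in the short sketch below; the scores and default flags are simulated, not the paper's sample of 90 companies:

```python
import numpy as np

# Hypothetical model scores and actual default flags (assumed data).
rng = np.random.default_rng(3)
actual_default = rng.integers(0, 2, 90)
score = actual_default * rng.normal(2.0, 1.0, 90) + rng.normal(0.0, 1.0, 90)
predicted_default = score > 1.0   # model flags the company as distressed

# Type I error: a defaulting company classified as healthy (missed default).
# Type II error: a healthy company classified as defaulting (false alarm).
type_1 = np.mean(~predicted_default[actual_default == 1])
type_2 = np.mean(predicted_default[actual_default == 0])
print(f"type I error = {type_1:.2f}, type II error = {type_2:.2f}")
```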


2021 ◽  
Vol 9 (1) ◽  
pp. 127-155
Author(s):  
Helena Chládková ◽  
Renata Skýpalová ◽  
Veronika Blašková

The number of students at Czech universities had been growing continuously until 2010, when almost 400,000 students studied there. Since then, this number has declined every year. Pressure on present-day universities has been increasing due to the competitive environment. The only way to strengthen competitiveness is to constantly improve quality and image. The objective of this paper is to verify which factors are important for students' satisfaction and which factors could be key for supporting the competitiveness of Czech universities. To assess student satisfaction, the authors conducted a questionnaire survey in which students were asked to identify the strengths and weaknesses of their faculty. The survey was carried out at the Faculty of Business and Economics of Mendel University in Brno (FBE MENDELU) and a selected private university in Brno in 2019. Relative frequencies were used in data processing and statistical hypotheses were tested. In addition to the basic classification according to one feature, a combination classification was also processed, and independence was tested for different combinations of questions. Of the total number (1,020) of identified strengths at FBE MENDELU, 57.7% of students stated “quality teachers”, 32.4% “faculty image” and 31.8% “modern environment” as strengths. Regarding the identified weaknesses, the most frequently mentioned were “study difficulty” (42.4%), “weaker image of the university with the public” (31.5%) and “not enough practical training” (23.2%). At the private college, 47.8% of respondents cited “quality teachers”, “interesting lectures and teaching methods” (40.8%) and “study materials for subjects” (29.4%) as the school’s strengths. Received: 16 April 2021 / Accepted: 23 October 2021


2021 ◽  
Author(s):  
◽  
Thuong Nguyen

For a long time, goodness of fit (GOF) tests have been one of the main objects of the theory of testing of statistical hypotheses. These tests possess two essential properties. Firstly, the asymptotic distribution of GOF test statistics under the null hypothesis is free from the underlying distribution within the hypothetical family. Secondly, they are of an omnibus nature, which means that they are sensitive to every alternative to the null hypothesis. GOF tests are typically based on non-linear functionals of the empirical process. The idea of shifting the focus from particular functionals to a transformation of the empirical process itself into another process, which will be asymptotically distribution free, was first formulated and accomplished by Khmaladze [Estate1]. Recently, the same author, in consecutive papers [Estate] and [Estate2], introduced another method, called here the Khmaladze-2 transformation, which is distinct from the first Khmaladze transformation, can be used for an even wider class of hypothesis testing problems, and is simpler in implementation. This thesis shows how the approach can be used to create an asymptotically distribution free empirical process in two well-known testing problems. The first problem is that of testing independence of two discrete random variables/vectors in a contingency table context. Although this problem has a long history, the use of GOF tests for it has been restricted to only one possible choice -- the chi-square test and its several modifications. We start our approach by viewing the problem as one of parametric hypothesis testing and suggest looking at the marginal distributions as parameters. The crucial difficulty is that when the dimension of the table is large, the dimension of the vector of parameters is large as well. Nevertheless, we demonstrate the efficiency of our approach and confirm by simulations the distribution free property of the new empirical process and the GOF tests based on it, with the number of parameters as large as 30. As an additional benefit, we point out some cases where the GOF tests based on the new process are more powerful than the traditional chi-square test. The second problem is testing whether a distribution has a regularly varying tail. This problem is inspired mainly by the fact that regularly varying tail distributions play an essential role in characterizing the domain of attraction of extreme value distributions. While there are numerous studies on estimating the exponent of regular variation of the tail, using GOF tests for testing the relevant distributions has appeared in only a few papers. We contribute to this latter aspect a construction of a class of GOF tests for regularly varying tail distributions.
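The classical setting the thesis starts from can be illustrated with a short Python sketch (not the thesis's construction): the Kolmogorov-Smirnov functional of the empirical process is distribution free for a fully specified null, but loses this property once parameters are estimated from the data, which is the difficulty the Khmaladze transformations address:

```python
import numpy as np
from scipy import stats

# Simulated sample (assumed, for illustration only).
rng = np.random.default_rng(5)
x = rng.normal(0.0, 1.0, 200)

# Simple null: fully specified N(0, 1) -- the KS statistic is distribution free.
d_simple, p_simple = stats.kstest(x, "norm")

# Composite null: N(mu, sigma) with parameters estimated from the same data --
# the naive p-value from kstest is no longer valid (it is conservative).
mu, sigma = x.mean(), x.std(ddof=1)
d_composite, p_naive = stats.kstest(x, "norm", args=(mu, sigma))

print(f"simple null:    D = {d_simple:.3f}, p = {p_simple:.3f}")
print(f"composite null: D = {d_composite:.3f}, naive p = {p_naive:.3f}")
```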



