null hypothesis testing
Recently Published Documents


TOTAL DOCUMENTS

90
(FIVE YEARS 25)

H-INDEX

13
(FIVE YEARS 2)

2021 ◽  
pp. 50-54
Author(s):  
Vitalii Zozulia

Road accidents have increased significantly in recent years, a trend associated with the growing number of cars, the condition of the country's roads, and road users' compliance with traffic discipline. The modern automotive industry pays great attention to road safety, which is reflected in changes to car interior design that significantly affect the nature of injuries sustained by accident victims. This makes the issue highly relevant for forensic expert practice. Aim of the work. To establish the injuries characteristic of the driver and front passenger of a class D car in a frontal collision. Material and methods. Road accidents from 2008 to 2021 in the Zhytomyr, Rivne, and Volyn regions of Ukraine were analyzed. The cases considered were fatal head-on collisions involving a driver and a front-seat passenger in a car with an engine capacity of 2-2.5 liters, up to 4.7 meters long and 1.81 meters wide. General and special methods were used: system-structural analysis, observation, comparison, and description. In addition, a forensic examination of the injuries was conducted. Statistical analysis included primary data processing by descriptive statistics and null hypothesis testing by multifactor analysis. Results. A number of new features were identified that are characteristic of injuries sustained in the cabin of a class D car and can aid identification during forensic examination. In particular, attention should be paid to the presence of neck injuries. According to the literature, injuries of the cervical spine are among the key findings and include rupture of the atlanto-occipital ligament, dislocation and subluxation of the atlanto-axial joint, and damage to the spinal cord and its membranes. Conclusions. The location and morphological features of spinal injuries can be used as diagnostic criteria for determining the position of the victim in the cabin of a moving class D car in a frontal collision.
The most informative is the analysis of cervical spine injuries of the driver and front passenger.


2021 ◽  
pp. 105-108
Author(s):  
Timothy E. Essington

“Why Fit Models to Data?” is a brief introductory chapter that sets the stage for the forthcoming chapters and serves as a link between Part 1, which contains the introductory chapters on mathematical ecology, and Part 2, which contains the statistical analyses of the models presented in Part 1. Part 1 illustrated how models can be used to make inferences about the real world, clarify ecological understanding, aid decision-making, and evaluate risk. However, if models are presented as hypotheses written in mathematical form, then statistical methods can be used to determine which hypotheses have the greatest support. Part 2 will focus on developing statistical tools so that the reader can express hypotheses as mathematical models, fit the models to data, and assess the degree of support for each. The chapter also illustrates the limitations of null-hypothesis testing for decision-making in high-dimensional, multicausal systems.


PeerJ ◽  
2021 ◽  
Vol 9 ◽  
pp. e12090
Author(s):  
Leonardo Braga Castilho ◽  
Paulo Inácio Prado

Although null hypothesis testing (NHT) is the primary method for analyzing data in many natural sciences, it has been increasingly criticized. Recently, approaches based on information theory (IT) have become popular and are held by many to be superior because they enable researchers to properly assess the strength of the evidence that data provide for competing hypotheses. Many studies have compared IT and NHT in the context of model selection and stepwise regression, but a systematic comparison of the most basic uses of statistics by ecologists is still lacking. We used computer simulations to compare how both approaches perform in four basic test designs (t-test, ANOVA, correlation tests, and multiple linear regression). Performance was measured by the proportion of simulated samples for which each method provided the correct conclusion (power), the proportion of detected effects with a wrong sign (S-error), and the mean ratio of the estimated effect to the true effect (M-error). We also checked whether the p-value from significance tests correlated with a measure of strength of evidence, the Akaike weight. In general, both methods performed equally well. The concordance is explained by the monotonic relationship between p-values and evidence weights in simple designs, which agrees with analytic results. Our results show that researchers can agree on the conclusions drawn from a data set even when they are using different statistical approaches. By focusing on the practical consequences of inferences, such a pragmatic view of statistics can promote insightful dialogue among researchers on how to find common ground from different pieces of evidence. A less dogmatic view of statistical inference can also help to broaden the debate about the role of statistics in science to the entire path that leads from a research hypothesis to a statistical hypothesis.
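The Akaike weight mentioned above converts AIC differences between candidate models into relative evidence weights that sum to one. A minimal sketch in pure Python (the two AIC values are hypothetical, not taken from the study):

```python
import math

def akaike_weights(aics):
    """Akaike weights: relative likelihood of each model, normalized to sum to 1."""
    best = min(aics)
    rel = [math.exp(-(a - best) / 2.0) for a in aics]  # exp(-ΔAIC / 2)
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical AIC values for a null model and an alternative model
aic_null, aic_alt = 210.4, 204.8
w_null, w_alt = akaike_weights([aic_null, aic_alt])
print(f"evidence weight for the alternative model: {w_alt:.3f}")  # → 0.943
```

A smaller AIC yields a larger weight, so in simple designs a smaller p-value for the effect corresponds to a larger evidence weight for the alternative model, which is the monotonic relationship the study reports.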


2021 ◽  
Author(s):  
Todd E. Hudson

This textbook bypasses the need for advanced mathematics by providing in-text computer code, allowing students to explore Bayesian data analysis without the calculus background normally considered a prerequisite for this material. Now, students can use the best methods without needing advanced mathematical techniques. This approach goes beyond “frequentist” concepts of p-values and null hypothesis testing, using the full power of modern probability theory to solve real-world problems. The book offers a fully self-contained course, which demonstrates analysis techniques throughout with worked examples crafted specifically for students in the behavioral and neural sciences. The book presents two general algorithms that help students solve the measurement and model selection (also called “hypothesis testing”) problems most frequently encountered in real-world applications.
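As a flavour of the style of analysis the book describes, a measurement problem such as estimating a response rate can be answered with a short computation over a posterior distribution rather than a p-value. A minimal sketch using a grid approximation (the data and uniform prior here are illustrative, not taken from the book):

```python
# Observed data (illustrative): 7 positive responses in 10 trials
successes, trials = 7, 10

# With a uniform Beta(1, 1) prior, the posterior over the rate is
# Beta(1 + successes, 1 + failures); compute its mean analytically
a = 1 + successes
b = 1 + (trials - successes)
posterior_mean = a / (a + b)

# Grid approximation of the posterior to answer a direct question:
# what is the probability that the true rate exceeds 0.5?
grid = [i / 1000 for i in range(1, 1000)]
weights = [x ** (a - 1) * (1 - x) ** (b - 1) for x in grid]
total = sum(weights)
p_gt_half = sum(w for x, w in zip(grid, weights) if x > 0.5) / total

print(f"posterior mean rate: {posterior_mean:.3f}")
print(f"P(rate > 0.5 | data): {p_gt_half:.3f}")
```

Unlike a null-hypothesis test, the output is a direct probability statement about the quantity of interest, which is the kind of result the book's algorithms are built to produce.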


2021 ◽  
Vol 52 (3) ◽  
pp. 173-184
Author(s):  
Anne Böckler ◽  
Annika Rennert ◽  
Tim Raettig

Abstract. Social exclusion, even from minimal game-based interactions, induces negative consequences. We investigated whether the nature of the relationship with the excluder modulates the effects of ostracism. Participants played a virtual ball-tossing game with a stranger and a friend (friend condition) or a stranger and their romantic partner (partner condition) while being fully included, fully excluded, excluded only by the stranger, or excluded only by their close other. Replicating previous findings, full exclusion impaired participants’ basic-need satisfaction and relationship evaluation most severely. While the degree of exclusion mattered, the relationship to the excluder did not: Classic null hypothesis testing and Bayesian statistics showed no modulation of ostracism effects depending on whether participants were excluded by a stranger, a friend, or their partner.


2020 ◽  
Vol 74 (11) ◽  
Author(s):  
Matilda Q. R. Pembury Smith ◽  
Graeme D. Ruxton

Abstract It is not uncommon for researchers to want to interrogate paired binomial data. For example, researchers may want to compare an organism’s response (positive or negative) to two different stimuli. If they apply both stimuli to a sample of individuals, it would be natural to present the data in a 2 × 2 table. There would be two cells with concordant results (the frequency of individuals which responded positively or negatively to both stimuli) and two cells with discordant results (the frequency of individuals who responded positively to one stimulus but negatively to the other). The key issue is whether the totals in the two discordant cells are sufficiently different to suggest that the stimuli trigger different reactions. In terms of the null hypothesis testing paradigm, this would translate as a P value: the probability of seeing the observed difference in these two values, or a more extreme difference, if the two stimuli produced an identical reaction. The statistical test designed to provide this P value is the McNemar test. Here, we seek to promote greater and better use of the McNemar test. To achieve this, we fully describe a range of circumstances within biological research where it can be effectively applied, describe the different variants of the test that exist, explain how these variants can be accessed in R, and offer guidance on which of these variants to adopt. To support our arguments, we highlight key recent methodological advances and compare these with a novel survey of current usage of the test. Significance statement When analysing paired binomial data, researchers appear to reflexively apply a chi-squared test, with the McNemar test being largely overlooked, despite it often being more appropriate. As these tests evaluate different null hypotheses, selecting the appropriate test is essential for effective analysis. When using the McNemar test, there are four methods that can be applied.
Recent advice has outlined clear guidelines on which method should be used. Our survey supports these guidelines but finds that publications rarely specify which method was used, and that the method chosen is rarely the most appropriate. Our study provides clear guidance on which method researchers should select, highlights examples of when this test should be used, and shows how it can be implemented easily to improve future research.
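The exact (binomial) variant of the McNemar test described above depends only on the two discordant counts: under the null hypothesis of identical reactions, each discordant individual is equally likely to fall in either cell. A minimal sketch in pure Python (the counts are hypothetical; the abstract's own guidance on variant choice should be consulted for real analyses):

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact McNemar test: two-sided binomial test on the discordant
    counts b and c, each equally likely (p = 0.5) under the null."""
    n = b + c
    k = min(b, c)
    # probability of a split at least this lopsided, doubled for two sides
    p = 2 * sum(comb(n, i) * 0.5 ** n for i in range(k + 1))
    return min(p, 1.0)

# Hypothetical discordant counts: 12 individuals responded positively
# only to stimulus A, and 3 only to stimulus B
print(f"exact McNemar p-value: {mcnemar_exact(12, 3):.4f}")  # → 0.0352
```

Note that the concordant cells never enter the calculation, which is one way this test differs from a naive chi-squared test on the full 2 × 2 table.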


AI ◽  
2020 ◽  
Vol 1 (3) ◽  
pp. 376-388
Author(s):  
Joel R. Bock ◽  
Akhilesh Maewal

Product recommendation can be considered a problem in data fusion: estimation of the joint distribution between individuals, their behaviors, and the goods or services of interest. This work proposes a conditional, coupled generative adversarial network (RecommenderGAN) that learns to produce samples from a joint distribution between (view, buy) behaviors found in extremely sparse implicit feedback training data. User interaction is represented by two matrices with binary-valued elements, whose nonzero values indicate whether a user viewed or bought a specific item in a given product category, respectively. By encoding actions in this manner, the model is able to represent entire, large-scale product catalogs. Conversion rate statistics computed on trained GAN output samples ranged from 1.323% to 1.763%. Null hypothesis testing indicated that these statistics are significant, and they are comparable to published conversion rates aggregated across many industries and product types. Our results are preliminary; however, they suggest that the recommendations produced by the model may provide utility for consumers and digital retailers.
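As a sketch of how a conversion-rate statistic can be tested against a null rate, one can use a normal approximation to the binomial test. All numbers below are hypothetical, chosen only to fall in the reported 1.3-1.8% range; they are not the paper's data or its actual testing procedure:

```python
import math

def binomial_ztest(conversions, trials, p_null):
    """One-sided z-test: is the observed conversion rate above p_null?"""
    p_hat = conversions / trials
    se = math.sqrt(p_null * (1 - p_null) / trials)  # null standard error
    z = (p_hat - p_null) / se
    # one-sided p-value from the standard normal survival function
    p_value = 0.5 * math.erfc(z / math.sqrt(2))
    return z, p_value

# Hypothetical: 1,500 conversions in 100,000 recommendations,
# tested against a null conversion rate of 1.2%
z, p = binomial_ztest(1500, 100_000, 0.012)
print(f"z = {z:.2f}, one-sided p = {p:.2e}")
```

With large sample sizes like these, even a small absolute difference in rates can be highly significant, which is why significance alone should be read alongside the comparison to industry-aggregated conversion rates that the abstract mentions.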

