Type II Errors
Recently Published Documents


TOTAL DOCUMENTS

229
(FIVE YEARS 52)

H-INDEX

26
(FIVE YEARS 3)

2021 ◽  
Author(s):  
Caspar J. Van Lissa ◽  
Sara van Erp

When analyzing a heterogeneous body of literature, there may be many potentially relevant between-studies differences. These differences can be coded as moderators and accounted for using meta-regression. However, many applied meta-analyses lack the power to adequately account for multiple moderators, as the number of studies on any given topic is often low. The present study introduces Bayesian Regularized Meta-Analysis (BRMA), an exploratory algorithm that can select relevant moderators from a larger number of candidates. This approach is suitable when heterogeneity is suspected but it is not known which moderators most strongly influence the observed effect size. We present a simulation study validating the performance of BRMA relative to state-of-the-art meta-regression (RMA). Results indicated that BRMA compared favorably to RMA on three metrics: predictive performance (a measure of the generalizability of results), the ability to reject irrelevant moderators, and the ability to recover population parameters with low bias. BRMA had a slightly lower ability to detect true effects of relevant moderators, but its overall proportion of Type I and Type II errors was equivalent to that of RMA. Furthermore, BRMA regression coefficients were slightly biased towards zero (by design), but its estimates of residual heterogeneity were unbiased. BRMA performed well with as few as 20 studies in the training data, suggesting its suitability as a small-sample solution. We discuss how applied researchers can use BRMA to explore between-studies heterogeneity in meta-analysis.
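The core idea, shrinking the coefficients of many candidate moderators toward zero so that only the relevant ones survive, can be illustrated with a minimal sketch. This is not the authors' BRMA implementation; a lasso-penalized, inverse-variance-weighted regression stands in for the Bayesian regularization, and all data below are simulated.

```python
# Minimal sketch of regularized moderator selection in meta-regression.
# NOT the authors' BRMA implementation: a lasso-penalized, inverse-variance
# weighted regression stands in for the Bayesian regularization priors.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
k, n_mod = 40, 10                       # 40 studies, 10 candidate moderators
X = rng.normal(size=(k, n_mod))
beta = np.zeros(n_mod); beta[:2] = 0.3  # only two moderators are truly relevant
v = rng.uniform(0.01, 0.05, size=k)     # per-study sampling variances
tau2 = 0.02                             # residual heterogeneity
y = 0.5 + X @ beta + rng.normal(scale=np.sqrt(v + tau2))

# Weight studies by inverse variance, then shrink coefficients toward zero.
w = 1.0 / (v + tau2)
model = Lasso(alpha=0.05).fit(X, y, sample_weight=w)
print("selected moderators:", np.flatnonzero(model.coef_ != 0))
```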


Author(s):  
Andrey Evstifeev

The paper proposes a method and describes a mathematical model for the rapid assessment of how attractive the operation of natural gas vehicles is for a motor transport company. The proposed solution is based on a logistic regression scoring model of the kind banks use to assess the creditworthiness of a borrower. To improve the quality of the results, the model is extended with a set of expert restrictions formulated as rules. During the analysis, features were identified that require quantization, since individual value intervals turned out to be associated with risk in different ways. The mathematical model is implemented as software in a high-level programming language; the model's data are stored in a database management system, and the software is integrated with an information system supporting management decisions for natural gas vehicle operation. The developed mathematical model was evaluated on a test sample. The results showed satisfactory accuracy of the proposed model, at 77% without the expert restrictions and 79% with them. At the same time, the share of Type II errors was 2.7% and that of Type I errors was 7.2%, which indicates that the model is quite conservative: a relatively high proportion of vehicles that meet the requirements were rejected.
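As an illustration of the general scoring approach described above (quantized predictors fed into a logistic regression, with hard expert rules layered on top), here is a minimal sketch. The features, thresholds, and the single rule are hypothetical and not taken from the paper.

```python
# Illustrative scoring pipeline: bin continuous features (quantization),
# fit a logistic regression scorecard, then overlay a hard expert rule.
# Feature meanings and thresholds are hypothetical, not from the paper.
import numpy as np
from sklearn.preprocessing import KBinsDiscretizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))   # e.g. mileage, fuel cost, route length, fleet age
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)

scorer = make_pipeline(
    KBinsDiscretizer(n_bins=5, encode="onehot", strategy="quantile"),
    LogisticRegression(max_iter=1000),
)
scorer.fit(X, y)
proba = scorer.predict_proba(X)[:, 1]

# Expert restriction example: reject any vehicle whose (hypothetical)
# fleet-age feature exceeds a hard limit, regardless of the model score.
accepted = (proba > 0.5) & (X[:, 3] < 2.0)
print("acceptance rate:", accepted.mean())
```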


Risks ◽  
2021 ◽  
Vol 9 (11) ◽  
pp. 200
Author(s):  
Youssef Zizi ◽  
Amine Jamali-Alaoui ◽  
Badreddine El Goumi ◽  
Mohamed Oudgou ◽  
Abdeslam El Moudden

In the face of rising defaults and limited studies on the prediction of financial distress in Morocco, this article aims to determine the most relevant predictors of financial distress and to identify its optimal prediction models in a normal Moroccan economic context over a two-year horizon. To achieve these objectives, logistic regression and neural networks are used, based on financial ratios selected by lasso and stepwise techniques. Our empirical results highlight the significant role of two predictors, namely interest to sales and return on assets, in predicting financial distress. The results show that logistic regression models obtained by stepwise selection outperform the other models, with an overall accuracy of 93.33% two years before financial distress and 95.00% one year prior to financial distress. The results also show that our models classify distressed SMEs better than healthy SMEs, with Type I errors lower than Type II errors.
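For readers unfamiliar with how Type I and Type II error rates are read off such a classifier, a minimal sketch follows. It assumes the distressed class is coded as the positive label; the labels and predictions are toy values, not results from the paper.

```python
# Sketch: computing Type I and Type II error rates for a distress classifier.
# "Distressed" is treated as the positive class (an assumption): a Type I
# error flags a healthy SME as distressed, a Type II error misses a truly
# distressed SME. Labels and predictions below are toy values.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 1, 1, 0, 0, 0, 0, 1, 0, 1])   # toy ground truth
y_pred = np.array([1, 1, 0, 0, 0, 1, 0, 1, 0, 1])   # toy predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
type_I  = fp / (fp + tn)   # false positive rate
type_II = fn / (fn + tp)   # false negative rate
accuracy = (tp + tn) / len(y_true)
print(f"Type I: {type_I:.2%}, Type II: {type_II:.2%}, accuracy: {accuracy:.2%}")
```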


2021 ◽  
Author(s):  
Vitalis K. Lagat ◽  
Guillaume Latombe ◽  
Cang Hui

Community structure is determined by the interplay among different processes, including biotic interactions, abiotic filtering, and dispersal. Their effects can be detected by comparing observed patterns of co-occurrence between different species (e.g. the C-score and the natural metric) to patterns generated by null models based on permutations of species-by-site matrices under constraints on row or column sums. These comparisons enable us to detect significant signals of species association or dissociation, from which the type of biotic interaction between species (e.g. facilitative or antagonistic) can be inferred. Commonly used patterns are based on the level of co-occurrence between randomly chosen pairs of species. The level of co-occurrence among three or more species is rarely considered, ignoring the potential existence of functional guilds or motifs composed of multiple species within the community. Null model tests that do not consider multi-species co-occurrence could therefore produce false negatives (Type II errors) when the non-random forces at play are only apparent for such guilds. Here, we propose a multi-species co-occurrence index (hereafter, joint occupancy) that measures the number of sites jointly occupied by multiple species simultaneously, of which the pairwise metric of co-occurrence is a special case. Using this joint occupancy index along with standard permutation algorithms for null model testing, we illustrate nine archetypes of multi-species co-occurrence and explore how frequent they are in the seminal database of 289 species-by-site community matrices published by Atmar and Patterson in 1995. We show that null model testing using pairwise co-occurrence metrics can indeed lead to severe Type II errors in one specific archetype, accounting for 2.4% of the tested community matrices.
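A minimal sketch of the joint occupancy idea and a null model comparison is given below. The permutation scheme (fixing each species' occupancy and shuffling sites independently per row) is one standard algorithm among several and is not necessarily the one used by the authors; the species-by-site matrix is simulated.

```python
# Sketch: joint occupancy of a set of species and a simple null model test.
# The permutation below fixes each species' occupancy (row sums) and shuffles
# sites independently per row; it is one of several standard algorithms and
# not necessarily the authors' exact scheme. The matrix is simulated.
import numpy as np

rng = np.random.default_rng(42)
M = (rng.random((10, 30)) < 0.3).astype(int)   # toy species-by-site matrix

def joint_occupancy(mat, species):
    """Number of sites occupied simultaneously by all listed species."""
    return int(mat[list(species), :].prod(axis=0).sum())

obs = joint_occupancy(M, [0, 1, 2])            # pairwise co-occurrence is the 2-species case

null = []
for _ in range(999):
    perm = np.array([rng.permutation(row) for row in M])
    null.append(joint_occupancy(perm, [0, 1, 2]))

p = (1 + sum(n >= obs for n in null)) / (1 + len(null))
print(f"observed joint occupancy: {obs}, one-sided p = {p:.3f}")
```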


2021 ◽  
pp. 348-353
Author(s):  
Raquel V. Oliveira
Keyword(s):  
Type I ◽  
Type II ◽

2021 ◽  
Author(s):  
Antonia Vehlen ◽  
William Standard ◽  
Gregor Domes

Advances in eye tracking technology have enabled the development of interactive experimental setups to study social attention. Since these setups differ substantially from the eye tracker manufacturer's test conditions, validation is essential with regard to data quality and other factors potentially threatening data validity. In this study, we evaluated the impact of data accuracy and area of interest (AOI) size on the classification of simulated gaze data. We defined AOIs of different sizes using the Limited-Radius Voronoi-Tessellation (LRVT) method and simulated gaze data for facial target points with varying data accuracy. As hypothesized, we found that data accuracy and AOI size had strong effects on gaze classification. In addition, these effects were not independent and differed between gaze falsely classified inside AOIs (Type I errors) and gaze falsely classified outside the predefined AOIs (Type II errors). The results indicate that smaller AOIs generally minimize false classifications as long as data accuracy is sufficiently high. For studies with lower data accuracy, Type II errors can still be compensated to some extent by using larger AOIs, but at the cost of an increased probability of Type I errors. Proper estimation of data accuracy is therefore essential for making informed decisions regarding the size of AOIs.
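The limited-radius nearest-centre logic can be sketched as follows: a gaze sample is assigned to the closest AOI centre only if it falls within the chosen radius, and Gaussian noise stands in for limited data accuracy. This is a simplified illustration with arbitrary AOI coordinates, not the authors' LRVT implementation.

```python
# Sketch of limited-radius nearest-centre AOI classification: a gaze sample
# is assigned to the closest AOI centre only if it lies within the radius.
# Gaussian noise stands in for limited data accuracy. Simplified illustration,
# not the authors' LRVT implementation; AOI coordinates are arbitrary.
import numpy as np

rng = np.random.default_rng(7)
aoi_centers = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])   # e.g. eyes, mouth
radius = 1.0                                                    # AOI size (degrees)

true_target = np.repeat([0, 1, 2], 200)                         # simulated fixation targets
gaze = aoi_centers[true_target] + rng.normal(scale=0.5, size=(600, 2))  # accuracy ~ 0.5 deg

d = np.linalg.norm(gaze[:, None, :] - aoi_centers[None, :, :], axis=2)
nearest = d.argmin(axis=1)
inside = d.min(axis=1) <= radius
pred = np.where(inside, nearest, -1)                            # -1 = outside all AOIs

type_II = np.mean(pred == -1)                            # gaze falsely classified outside
type_I  = np.mean((pred != -1) & (pred != true_target))  # gaze classified inside the wrong AOI
print(f"Type I: {type_I:.2%}, Type II: {type_II:.2%}")
```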


Author(s):  
Amin Mohebbi ◽  
Simin Akbariyeh

Nitrogen and phosphorus support the ecosystem by supplying nutrients to algae and aquatic plants. In excess, however, they cause the eutrophication of waters, creating water quality problems. In the past, nitrogen in wells has been widely investigated in the context of groundwater flow, but a national-scale nitrogen assessment in rivers and streams has not received enough attention. In this research, the Wilcoxon rank sum test, a non-parametric hypothesis testing method, has been applied to nitrogen concentrations, in the form of nitrate-nitrogen and nitrite-nitrogen, in rivers and streams of the contiguous United States. This approach was selected because nitrogen levels in surface flow are non-normal and positively skewed. The method was able to identify impaired water bodies as well as quantify the confidence, significance, and errors involved. The Northern Appalachians (NAP), Northern Plains (NPL), and Xeric (XER) ecoregions were worsening in nitrate-nitrogen condition, with NAP and XER needing immediate action. The nitrite-nitrogen condition did not pose an immediate threat, so mitigation plans should focus more on nitrate-nitrogen remediation. The method was shown to be superior to the two-sample t-test, yielding lower Type II errors.
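A minimal sketch of the statistical core, the Wilcoxon rank sum (Mann-Whitney U) test applied to positively skewed concentration data, is shown below with synthetic values; the real analysis uses monitored nitrate and nitrite concentrations.

```python
# Sketch: Wilcoxon rank sum (Mann-Whitney U) test on positively skewed,
# synthetic nitrate concentrations, comparing two periods at one site.
# The values are illustrative only; the study uses monitored concentrations.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(3)
earlier = rng.lognormal(mean=0.0, sigma=0.8, size=60)   # mg/L, skewed
later   = rng.lognormal(mean=0.3, sigma=0.8, size=60)   # shifted upward

stat, p = mannwhitneyu(later, earlier, alternative="greater")
print(f"U = {stat:.1f}, one-sided p = {p:.4f}")
# A small p-value indicates nitrate levels increased, without assuming normality.
```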


Author(s):  
Dieter Lukas ◽  
Mary Towner ◽  
Monique Borgerhoff Mulder

Phylogenetic analyses increasingly take centre stage in our understanding of the processes shaping patterns of cultural diversity and cultural evolution over time. Just as biologists explain the origins and maintenance of trait differences among organisms using phylogenetic methods, so anthropologists studying cultural macroevolutionary processes use phylogenetic methods to uncover the history of human populations and the dynamics of culturally transmitted traits. In this paper, we revisit concerns with the validity of these methods. Specifically, we use simulations to reveal how properties of the sample (size, missing data), properties of the tree (shape), and properties of the traits (rate of change, number of variants, transmission mode) might influence the inferences that can be drawn about trait distributions across a given phylogeny and the power to discern alternative histories. Our approach shows that, in two example datasets, specific combinations of properties of the sample, the tree, and the trait can lead to potentially high rates of Type I and Type II errors. We offer this simulation tool to help assess the potential impact of this list of persistent perils in future cultural macroevolutionary work. This article is part of the theme issue 'Foundations of cultural evolution'.
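The kind of simulation the authors describe can be sketched in miniature: a discrete trait evolving along a bifurcating tree, with the rate of change and the number of variants as free parameters. The model and parameters below are illustrative only and do not reproduce the authors' simulation design.

```python
# Sketch: simulating a discrete cultural trait along a bifurcating tree to
# probe how rate of change and number of variants shape tip distributions.
# Illustrative toy model only; not the authors' simulation design.
import random

random.seed(11)
N_STATES, RATE = 3, 0.4          # number of trait variants, change probability per branch

def simulate(depth, state):
    """Return tip states under a simple switch-with-probability-RATE model."""
    if depth == 0:
        return [state]
    tips = []
    for _ in range(2):                      # bifurcate into two descendant lineages
        child = state
        if random.random() < RATE:          # trait changes along this branch
            child = random.choice([s for s in range(N_STATES) if s != state])
        tips += simulate(depth - 1, child)
    return tips

tips = simulate(depth=6, state=0)           # 64 tip societies
print("tip trait frequencies:", {s: tips.count(s) for s in range(N_STATES)})
```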


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Shaheen Syed ◽  
Bente Morseth ◽  
Laila A. Hopstock ◽  
Alexander Horsch

To date, non-wear detection algorithms commonly employ a 30, 60, or even 90 min interval or window in which acceleration values need to remain below a threshold. A major drawback of such intervals is that they need to be long enough to prevent false positives (Type I errors), yet short enough to prevent false negatives (Type II errors), which limits the detection of both short and long episodes of non-wear time. In this paper, we propose a novel non-wear detection algorithm that eliminates the need for an interval. Rather than inspecting acceleration within intervals, we explore the acceleration right before and right after an episode of non-wear time. We trained a deep convolutional neural network that was able to infer non-wear time by detecting when the accelerometer was removed and when it was placed back on. We evaluate our algorithm against several baseline and existing non-wear algorithms; our algorithm achieves perfect precision, a recall of 0.9962, and an F1 score of 0.9981, outperforming all evaluated algorithms. Although our algorithm was developed using patterns learned from a hip-worn accelerometer, we propose algorithmic steps that can easily be applied to a wrist-worn accelerometer with a retrained classification model.
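For contrast with the proposed approach, a minimal sketch of the conventional interval-based baseline is shown below: an episode is flagged as non-wear when the acceleration magnitude stays below a threshold for an entire window. The threshold, window length, and sampling rate are illustrative, and the paper's CNN-based detector is not reproduced here.

```python
# Sketch of the conventional interval-based non-wear baseline: a window is
# flagged as non-wear when the deviation of the acceleration magnitude from
# 1 g stays below a threshold for the whole window. Threshold, window length,
# and sampling rate are illustrative; the paper's CNN is not reproduced.
import numpy as np

FS = 100                              # sampling rate (Hz)
WINDOW = 60 * 60 * FS                 # 60-minute window in samples
THRESHOLD = 0.05                      # g, "no movement" criterion

def interval_non_wear(acc):
    """acc: (n_samples, 3) raw accelerometer signal in g. Returns a boolean mask."""
    magnitude = np.abs(np.linalg.norm(acc, axis=1) - 1.0)   # deviation from 1 g (gravity)
    non_wear = np.zeros(len(acc), dtype=bool)
    for start in range(0, len(acc) - WINDOW + 1, WINDOW):
        if np.all(magnitude[start:start + WINDOW] < THRESHOLD):
            non_wear[start:start + WINDOW] = True
    return non_wear

# Example: 3 hours of signal, with the middle hour perfectly still (device removed).
acc = np.random.default_rng(0).normal(0, 0.1, size=(3 * 60 * 60 * FS, 3)) + [0, 0, 1]
acc[WINDOW:2 * WINDOW] = [0, 0, 1]
print("non-wear fraction:", interval_non_wear(acc).mean())
```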

