general error
Recently Published Documents


TOTAL DOCUMENTS: 103 (FIVE YEARS: 11)
H-INDEX: 18 (FIVE YEARS: 1)

Author(s):  
Patrick W. Kraft ◽  
Ellen M. Key ◽  
Matthew J. Lebo

Grant and Lebo (2016) and Keele et al. (2016) clarify the conditions under which the popular general error correction model (GECM) can be used and interpreted easily: in a bivariate GECM the data must be integrated in order to rely on the error correction coefficient, $\alpha_1^*$, to test cointegration and measure the rate of error correction between a single exogenous x and a dependent variable, y. Here we demonstrate that even if the data are all integrated, the test on $\alpha_1^*$ is misunderstood when there is more than a single independent variable. The null hypothesis is that there is no cointegration between y and any x, but the correct alternative hypothesis is that y is cointegrated with at least one (but not necessarily more than one) of the x's. A significant $\alpha_1^*$ can occur when some I(1) regressors are not cointegrated and the equation is not balanced. Thus, the correct limiting distributions of the right-hand-side long-run coefficients may be unknown. We use simulations to demonstrate the problem and then discuss implications for applied examples.
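For reference, the bivariate GECM discussed above is commonly written as

$$\Delta y_t = \alpha_0 + \alpha_1^{*} y_{t-1} + \beta_0 \Delta x_t + \beta_1^{*} x_{t-1} + \varepsilon_t,$$

where $\alpha_1^*$ is the error correction coefficient and the cointegration test asks whether $\alpha_1^*$ is significantly negative; with several regressors the $\Delta x_t$ and $x_{t-1}$ terms are summed over the individual x's, which is where the multivariate interpretation problem described above arises.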


2021 ◽  
Author(s):  
Christopher Dennis

Error graphs are a useful mathematical tool for representing failing interactions in a system. This representation is used as the basis for constructing an error locating array (ELA). However, if too many errors are present in a given error graph, it may not be possible to locate all interactions. We say that a graph is locatable if an ELA can be built. Bounds on the total size of an error graph are known, but bounds on the maximum degree of an error graph have not been considered. In this thesis we explore the maximum degree an error graph may have while still guaranteeing its locatability. We consider special cases for 3- and 4-partite error graphs and develop bounds on the degree of a general error graph. We also describe a linear-time algorithm that can be used to generate tests with at most one failing interaction.
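As a toy illustration of the objects involved (not the thesis's algorithm), a pairwise error graph can be stored as a set of failing (factor, value) pairs, and a candidate test can be checked against the "at most one failing interaction" requirement; the factors, values, and failing pairs below are hypothetical.

```python
# Toy sketch, not the thesis's algorithm: a k-partite error graph stored as a
# set of failing pairwise interactions between (factor, value) pairs, plus a
# helper that reports which failing interactions a candidate test triggers.
from itertools import combinations

# Hypothetical example: factors A, B, C each take values 0 or 1.
error_graph = {
    frozenset({("A", 0), ("B", 1)}),  # interaction A=0, B=1 fails
    frozenset({("B", 0), ("C", 0)}),  # interaction B=0, C=0 fails
}

def failing_interactions(test, error_graph):
    """Return the failing pairwise interactions covered by `test`.

    `test` maps each factor to its chosen value, e.g. {"A": 0, "B": 0, "C": 1}.
    """
    chosen = list(test.items())
    return [frozenset(pair) for pair in combinations(chosen, 2)
            if frozenset(pair) in error_graph]

test = {"A": 0, "B": 0, "C": 1}
covered = failing_interactions(test, error_graph)
print(len(covered) <= 1)  # True: this test triggers at most one failing interaction
```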


2021 ◽  
Author(s):  
Peter D Maskell

It has long been known that forensic and clinical toxicologists should not estimate the dose of a drug administered from post-mortem blood drug concentrations, but to date there has been limited information on how unreliable such dose calculations can be. Using amitriptyline as a model drug, this study combined empirically determined pharmacokinetic variables for amitriptyline from clinical studies with clinical, overdose (where the individual survived), and death (ascribed to amitriptyline toxicity) case studies in which the dose of drug administered or taken was known. Using these data, standard pharmacokinetic equations, and general error propagation, it was possible to estimate the accuracy of the calculated dose of amitriptyline compared with the actual dose consumed. As expected for post-mortem cases, depending on the pharmacokinetic equation used, the accuracy (mean +128% to +2347%) and precision (SD ±383% to ±3698%) were too poor to allow reliable estimation of the dose taken or administered prior to death from post-mortem blood drug concentrations. This work again reinforces that dose calculations from post-mortem blood drug concentrations should not be carried out.
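The abstract does not state the equations used; as a generic illustration of the approach (back-calculating a dose from a blood concentration with a standard single-point pharmacokinetic relationship and propagating the spread of the pharmacokinetic variables, here by Monte Carlo), the sketch below uses assumed values for the concentration, volume of distribution, and bioavailability rather than the study's values.

```python
# Illustrative sketch only: Monte Carlo propagation of pharmacokinetic
# variability through the standard single-point back-calculation
#   dose = concentration * Vd * body_weight / bioavailability
# All numbers below are assumed for illustration; they are not the study's values.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

conc = 0.5      # measured blood concentration, mg/L (assumed)
weight = 70.0   # body weight, kg (assumed)

# Population pharmacokinetic variables with assumed means and spreads.
vd = rng.normal(loc=15.0, scale=5.0, size=n)      # volume of distribution, L/kg
f_oral = rng.normal(loc=0.45, scale=0.1, size=n)  # oral bioavailability (fraction)

# Keep only physically plausible draws.
ok = (vd > 0) & (f_oral > 0) & (f_oral <= 1)
dose_mg = conc * vd[ok] * weight / f_oral[ok]

print(f"median estimated dose: {np.median(dose_mg):.0f} mg")
print(f"2.5th-97.5th percentile: {np.percentile(dose_mg, 2.5):.0f}-"
      f"{np.percentile(dose_mg, 97.5):.0f} mg")
```

The wide percentile interval this kind of calculation produces is exactly the unreliability the study quantifies.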


2021 ◽  
Vol 14 (2) ◽  
pp. 179-208
Author(s):  
Antonio P. Gutierrez de Blume ◽  
Gregory Schraw ◽  
Fred Kuch ◽  
Aaron S Richmond

Gutierrez et al. (2016) conducted an experiment that provided evidence for the existence of two distinct factors in metacognitive monitoring: general accuracy and general error. They found level-1 domain-specific accuracy and error factors that loaded on second-order domain-general accuracy and error factors, which in turn loaded on a third-order general monitoring factor. In the present study, that experiment was repeated with 170 different participants from the same population, and the original findings were confirmed. Both studies suggest that metacognitive monitoring consists of two different types of cognitive processes: one associated with accurate monitoring judgments and one associated with error in monitoring judgments. In addition, both studies suggest domain-specific accuracy and error factors that load onto second-order domain-general accuracy and error factors. Furthermore, in this study we devised an experiment in which general accuracy and general error are treated as separate latent dimensions and found that subjects employ the same resources they use to develop accurate judgments as a “baseline” for calibrating the resources involved in erroneous judgments, but not vice versa. This finding supports and extends previous findings suggesting that the processes involved in managing metacognitive accuracy are different from those involved in contending with metacognitive error. Future instructional interventions in metacognitive monitoring will be better focused by concentrating on improving accuracy or reducing error, but not both concurrently.
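To make the hypothesized hierarchy concrete, the sketch below simulates data with that structure (a third-order general monitoring factor, second-order accuracy and error factors, and domain-specific level-1 factors generating the observed judgments); the domain names, loadings, and noise levels are assumptions rather than the study's estimates.

```python
# Illustrative simulation (not the study's data or model code) of the
# hierarchical structure described above. Domain names, loadings, and noise
# levels are assumed for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 170  # sample size of the replication

general = rng.normal(size=n)                               # third-order factor
accuracy = 0.7 * general + rng.normal(scale=0.7, size=n)   # second-order accuracy
error = 0.7 * general + rng.normal(scale=0.7, size=n)      # second-order error

observed = {}
for domain in ["domain1", "domain2"]:                      # hypothetical domains
    acc_d = 0.8 * accuracy + rng.normal(scale=0.6, size=n) # level-1 accuracy
    err_d = 0.8 * error + rng.normal(scale=0.6, size=n)    # level-1 error
    for j in range(3):                                     # three judgments per factor
        observed[f"{domain}_acc{j}"] = 0.9 * acc_d + rng.normal(scale=0.5, size=n)
        observed[f"{domain}_err{j}"] = 0.9 * err_d + rng.normal(scale=0.5, size=n)

# Data generated this way could be fitted with a confirmatory factor model to
# recover the accuracy/error separation that both studies report.
```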


2020 ◽  
Vol 33 (02) ◽  
pp. 608-615
Author(s):  
Ilyas Idrisovich Ismagilov ◽  
Ajgul Ilshatovna Sabirova ◽  
Dina Vladimirovna Kataseva ◽  
Alexey Sergeevich Katasev

This article addresses the problem of constructing and studying collection scoring models. The relevance of solving this problem with intelligent modeling technologies (decision trees, logistic regression, and neural networks) is noted. The initial data for the models was a set of 14 columns and 5779 rows. The models were built in the Deductor platform, and each model was tested on a set of 462 records. For all models, the corresponding classification matrices were constructed, and the Type I and Type II errors were calculated, as well as the general error of each model. In terms of minimizing these errors, logistic regression showed the worst results and the neural network the best. In addition, the effectiveness of the constructed models was evaluated against the «income» and «time» criteria. In terms of time costs, the logistic regression model outperforms the other models; however, in terms of income, the neural network model was the best. Thus, the results showed that to minimize the time spent working with debtors it is advisable to use the logistic model, whereas to maximize profit and minimize classification errors it is appropriate to use the neural network model. This indicates the neural network model's effectiveness and the possibility of its practical use in intelligent scoring systems.
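The article's models were built in the Deductor platform; as a generic sketch of the same comparison (logistic regression versus a small neural network, evaluated with a confusion matrix and Type I/II error rates on a 462-record hold-out), the example below uses scikit-learn on synthetic data that stands in for the article's 14-column, 5779-row set.

```python
# Generic sketch (not the Deductor workflow from the article): compare a
# logistic regression and a small neural network on a binary scoring task,
# reporting Type I / Type II error rates and the overall (general) error.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the 14-column (13 features + label), 5779-row set.
X, y = make_classification(n_samples=5779, n_features=13, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=462, random_state=0, stratify=y)

models = {
    "logistic regression": make_pipeline(StandardScaler(), LogisticRegression()),
    "neural network": make_pipeline(StandardScaler(),
                                    MLPClassifier(hidden_layer_sizes=(16,),
                                                  max_iter=1000, random_state=0)),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    tn, fp, fn, tp = confusion_matrix(y_test, model.predict(X_test)).ravel()
    type1 = fp / (fp + tn)              # Type I error (false-positive rate)
    type2 = fn / (fn + tp)              # Type II error (false-negative rate)
    overall = (fp + fn) / (tn + fp + fn + tp)
    print(f"{name}: Type I {type1:.2%}, Type II {type2:.2%}, general error {overall:.2%}")
```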


2020 ◽  
Vol 19 (11) ◽  
Author(s):  
Manpreet Singh Jattana ◽  
Fengping Jin ◽  
Hans De Raedt ◽  
Kristel Michielsen

A general method to mitigate the effect of errors in quantum circuits is outlined. The method is developed in light of the characteristics that an ideal method should possess and to improve on an existing method that only mitigates state preparation and measurement errors. The method is tested on different IBM Q quantum devices, using randomly generated circuits with up to four qubits. A large majority of results show significant error mitigation.
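The abstract gives no implementation details; for orientation, the state-preparation-and-measurement-only style of mitigation it refers to is often realized as calibration-matrix inversion, sketched below for a single qubit with assumed readout error rates and counts (an illustration of that general idea, not the paper's method).

```python
# Minimal sketch of calibration-matrix readout-error mitigation for one qubit
# (the SPAM-only style of mitigation the abstract contrasts with; error rates
# and counts below are assumed for illustration).
import numpy as np

# Calibration matrix: entry [i, j] is the probability of reading outcome i
# when basis state j was prepared. Columns: prepared |0>, prepared |1>.
p0_given_0, p1_given_1 = 0.97, 0.94  # assumed readout fidelities
M = np.array([[p0_given_0, 1 - p1_given_1],
              [1 - p0_given_0, p1_given_1]])

# Measured outcome distribution from the circuit of interest (assumed counts).
counts = np.array([620, 404], dtype=float)
p_measured = counts / counts.sum()

# Invert the calibration matrix to estimate the noise-free distribution,
# then clip and renormalize to keep it a valid probability vector.
p_mitigated = np.linalg.solve(M, p_measured)
p_mitigated = np.clip(p_mitigated, 0, None)
p_mitigated /= p_mitigated.sum()

print("measured :", p_measured)
print("mitigated:", p_mitigated)
```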


2020 ◽  
Vol 3 (2) ◽  
pp. 173-183
Author(s):  
I Komang Sesara Ariyana

This study aimed to identify the errors made by PGSD students on simple algebra assignments in the Basic Concept of Elementary Mathematics Subject at STAHN Mpu Kuturan Singaraja. The subjects were 11 second-semester PGSD students of STAHN Mpu Kuturan Singaraja in the 2019/2020 academic year. Errors in mathematics were divided in this study into factual errors, procedural errors, and conceptual errors. This was a quantitative descriptive study. The data collection method was a test: a 20-item concept understanding test designed to reveal students' errors, validated by two experts using Lawshe's CVR technique. The results showed that (1) the general error rate of PGSD study program students on the Simple Algebra assignment in the Basic Concept of Elementary Mathematics Subject was 30.26%, in the low error category; (2) the highest individual error rate was 43% (moderate error category) and the lowest 16% (very low error category); (3) factual errors ranged from 10% to 20% (very low category); (4) procedural errors ranged from 7% to 53% (very low to moderate category); and (5) conceptual errors ranged from 35% to 65% (low to high category).

