error type
Recently Published Documents


TOTAL DOCUMENTS: 242 (FIVE YEARS: 87)

H-INDEX: 22 (FIVE YEARS: 3)

PLoS ONE ◽  
2022 ◽  
Vol 17 (1) ◽  
pp. e0261636
Author(s):  
Yasuhiro Otaki ◽  
Naofumi Fujishiro ◽  
Yasuaki Oyama ◽  
Naoko Hata ◽  
Daisuke Kato ◽  
...  

Background: To prevent recurrence of medical accidents, the Medical Accident Investigating System was implemented in October 2015 by the Japan Medical Safety Research Organization (Medsafe Japan) to target deaths from medical care that were unforeseen by the administrator. Medsafe Japan analyzed the 10 cases of central venous catheterization-related deaths reported in the system and published recommendations in March 2017. However, it remains unclear which points should receive particular emphasis to prevent central venous catheterization-related deaths. Methods: This study aimed to identify the recommendation points that should be emphasized to prevent recurrence of central venous catheterization-related deaths. We assessed central venous catheterization in 8530 closed-claim cases between January 2002 and December 2016 covered by the medical insurer Sompo-Japan. Moreover, we compared central venous catheterization-related deaths in closed-claim cases with deaths in reported cases. Results: The background, error type, anatomic insertion site, and fatal complication data were evaluated for 37 closed-claim cases, of which 12 (32.4%) were death cases. Of the 12 closed-claim cases and 10 reported cases, 9 (75.0%) closed-claim cases and 9 (90.0%) reported cases were related to vascular access. Among these, 5 closed-claim cases (41.7%) and 7 reported cases (77.8%) were related to internal jugular vein catheterization (p = 0.28). Coagulopathy was observed in 3 (60.0%) of 5 closed-claim cases and 6 (85.7%) of 7 reported cases. Conclusions: The risk of internal jugular catheterization in patients with coagulopathy must be carefully considered.
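The abstract does not state which statistical test produced the p value above. As a hedged illustration only, the sketch below applies SciPy's Fisher's exact test, a common choice for small 2x2 tables, to the counts implied by the reported percentages (5 of 12 closed-claim cases vs. 7 of 9 reported cases). Because the authors' actual test and denominators are not given, the computed p value need not match the reported 0.28.

```python
# Hypothetical sketch: comparing the proportion of internal jugular vein (IJV)
# catheterizations between closed-claim cases (5 of 12) and reported cases (7 of 9).
# The abstract does not specify the test used; Fisher's exact test is shown here
# as one common option, so the p value may differ from the reported 0.28.
from scipy.stats import fisher_exact

#                 IJV  non-IJV
table = [[5, 7],   # closed-claim cases (n = 12)
         [7, 2]]   # reported cases (n = 9)

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2f}")
```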


2022 ◽  
Author(s):  
Akshay Markanday ◽  
Sungho Hong ◽  
Junya Inoue ◽  
Erik De Schutter ◽  
Peter Thier

Both the environment and our body keep changing dynamically. Hence, ensuring movement precision requires adaptation to multiple demands occurring simultaneously. Here we show that the cerebellum performs the necessary multi-dimensional computations for the flexible control of different movement parameters depending on the prevailing context. This conclusion is based on the identification of a manifold-like activity in both mossy fibers (MF, network input) and Purkinje cells (PC, output), recorded from monkeys performing a saccade task. Unlike MFs, the properties of PC manifolds developed selective representations of individual movement parameters. Error feedback-driven climbing fiber input modulated the PC manifolds to predict specific, error type-dependent changes in subsequent actions. Furthermore, a feed-forward network model that simulated MF-to-PC transformations revealed that amplification and restructuring of the lesser variability in the MF activity is a pivotal circuit mechanism. Therefore, flexible control of movement by the cerebellum crucially depends on its capacity for multi-dimensional computations.


2022 ◽  
Vol 6 (1) ◽  
Author(s):  
Agus Munandar

This study aims to determine the degree of accuracy of the Altman Z-score method in predicting the bankruptcy of a company. It uses quantitative research with a descriptive approach based on accuracy and error type tests. The sample consists of companies in the coal mining sub-sector during the 2015-2019 period, selected by purposive sampling, giving a total of 19 companies. The Altman Z-score method achieved an accuracy rate of 11% and a type error rate of 42%, which indicates that the method is not well suited to companies in the coal mining sub-sector.
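The abstract does not restate the model itself; for reference, the classic Altman (1968) Z-score combines five financial ratios with fixed coefficients. The sketch below is purely illustrative: the study may have used a different variant (e.g. Z' or Z'' for non-manufacturers or emerging markets), and all input figures are invented.

```python
# Illustrative sketch of the classic Altman (1968) Z-score; the study may have used
# a different variant, which is not stated in the abstract. Figures are invented.

def altman_z(working_capital, retained_earnings, ebit,
             market_value_equity, sales, total_assets, total_liabilities):
    x1 = working_capital / total_assets
    x2 = retained_earnings / total_assets
    x3 = ebit / total_assets
    x4 = market_value_equity / total_liabilities
    x5 = sales / total_assets
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

z = altman_z(working_capital=50, retained_earnings=120, ebit=40,
             market_value_equity=300, sales=500,
             total_assets=800, total_liabilities=400)

# Conventional cut-offs: Z > 2.99 "safe", 1.81-2.99 "grey", Z < 1.81 "distress".
zone = "safe" if z > 2.99 else "grey" if z >= 1.81 else "distress"
print(f"Z = {z:.2f} -> {zone} zone")
```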


Information ◽  
2021 ◽  
Vol 12 (12) ◽  
pp. 505
Author(s):  
Jarmila Horváthová ◽  
Martina Mokrišová ◽  
Igor Petruška

This paper focuses on the financial health prediction of businesses. The issue of predicting the financial health of companies is very important in terms of their sustainability. The aim of this paper is to determine the financial health of the analyzed sample of companies and to distinguish financially healthy companies from companies which are not financially healthy. The analyzed sample, from the field of heat supply in Slovakia, consisted of 444 companies. To fulfil this aim, appropriate financial indicators were used. These indicators were selected using related empirical studies, a univariate logit model and a correlation matrix. In the paper, two main models were applied: multivariate discriminant analysis (MDA) and a feed-forward neural network (NN). The classification accuracy of the constructed models was compared using the confusion matrix, type 1 error and type 2 error. The performance of the models was compared using the Brier score and Somers' D. The main conclusion of the paper is that the NN is a suitable alternative for assessing financial health. We confirmed that high indebtedness is a predictor of financial distress. The benefit and originality of the paper lie in the construction of an early warning model for the Slovak heating industry. In our view, the heating industry works in a similar way in other countries, especially in transition economies; therefore, the model is applicable in these countries as well.
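As a minimal, hedged sketch (not the authors' code), the snippet below shows how the classification accuracy, type 1 and type 2 error rates, and the Brier score mentioned above can be computed from a binary classifier's outputs. The labels and probabilities are invented, and the convention that a type 1 error flags a healthy firm as distressed is an illustrative assumption.

```python
# Minimal sketch: deriving accuracy, type 1 / type 2 error rates and the Brier score
# for a binary "financially healthy (0) vs. distressed (1)" classifier. Data invented.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 1])                    # 1 = distressed
y_prob = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.3, 0.8, 0.6, 0.2, 0.55])
y_pred = (y_prob >= 0.5).astype(int)

tp = np.sum((y_pred == 1) & (y_true == 1))
tn = np.sum((y_pred == 0) & (y_true == 0))
fp = np.sum((y_pred == 1) & (y_true == 0))  # type 1 error: healthy firm flagged as distressed
fn = np.sum((y_pred == 0) & (y_true == 1))  # type 2 error: distressed firm labelled healthy

accuracy = (tp + tn) / len(y_true)
type1_rate = fp / (fp + tn)
type2_rate = fn / (fn + tp)
brier = np.mean((y_prob - y_true) ** 2)     # mean squared error of predicted probabilities

print(f"accuracy={accuracy:.2f}, type1={type1_rate:.2f}, "
      f"type2={type2_rate:.2f}, Brier={brier:.3f}")
```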


Author(s):  
Anny Castilla-Earls ◽  
David J. Francis ◽  
Aquiles Iglesias

Purpose: This study examined the relationship between utterance length, syntactic complexity, and the probability of making an error at the utterance level. Method: The participants in this study included 830 Spanish-speaking first graders who were learning English at school. Story retells in both Spanish and English were collected from all children. Generalized mixed linear models were used to examine within-child and between-children effects of utterance length and subordination on the probability of making an error at the utterance level. Results: The relationship between utterance length and grammaticality was found to differ by error type (omission vs. commission), language (Spanish vs. English), and level of analysis (within-child vs. between-children). For errors of commission, the probability of making an error increased as a child produced utterances that were longer relative to their average utterance length (within-child effect). In contrast, for errors of omission, the probability of making an error decreased when a child produced utterances that were longer relative to their average utterance length (within-child effect). In English, a child who produced utterances that were, on average, longer than the average utterance length for all children produced more errors of commission and fewer errors of omission (between-children effect). This between-children effect was similar in Spanish for errors of commission but nonsignificant for errors of omission. For both error types, the within-child effects of utterance length were moderated by the use of subordination. Conclusion: The relationship between utterance length and grammaticality is complex and varies by error type, language, and whether the frame of reference is the child's own language (within-child effect) or the language of other children (between-children effect). Supplemental Material https://doi.org/10.23641/asha.17035916
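The within-child versus between-children distinction above is commonly operationalized by centering each child's utterance lengths on that child's own mean. The sketch below illustrates that centering step, using a plain logistic regression as a simplified stand-in for the generalized mixed models used in the study (which also include random effects per child); all column names and data are invented.

```python
# Simplified sketch (not the authors' model): separating within-child and
# between-children effects of utterance length via child-mean centering.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "child_id": [1, 1, 1, 2, 2, 2, 3, 3, 3, 3],
    "utt_len":  [4, 6, 9, 3, 5, 7, 5, 8, 10, 12],   # words per utterance (invented)
    "error":    [0, 1, 1, 0, 0, 1, 1, 0, 1, 1],      # 1 = ungrammatical utterance (invented)
})

# Between-children predictor: each child's mean utterance length.
df["mean_len"] = df.groupby("child_id")["utt_len"].transform("mean")
# Within-child predictor: deviation of each utterance from that child's own mean.
df["dev_len"] = df["utt_len"] - df["mean_len"]

# A real analysis would add a random intercept per child (a mixed model);
# a plain logit is used here only to show how the two predictors enter the model.
model = smf.logit("error ~ dev_len + mean_len", data=df).fit(disp=False)
print(model.params)
```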


SinkrOn ◽  
2021 ◽  
Vol 6 (1) ◽  
pp. 183-190
Author(s):  
Emmy Erwina ◽  
Tommy Tommy ◽  
Mayasari Mayasari

Spelling errors have become common in the current era, as can be seen from the use of words that tend to follow trends or popular culture, especially among the younger generation. This study aims to develop and test a detection and identification model using a combination of Bigram Vectors and Minimum Edit Distance Based Probabilities. Correct words are obtained from error words using candidate search and probability calculations that adopt the concept of minimum edit distance. The detected errors are then identified as one of three error types, namely vowel, consonant and diphthong errors, based on the tendency of the characters used as a result of phonemic rendering at the time of writing. The results of error detection and error type identification are quite good: most of the error test data can be detected and identified according to the type of error, although there are some detection errors in which more than one correct word is returned because those words share the same probability value.
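As a hedged illustration of the minimum-edit-distance-based candidate scoring described above (this is not the authors' implementation; the bigram-vector detection step and the vowel/consonant/diphthong typing are omitted), the sketch below ranks hypothetical dictionary words against a misspelled word and converts the distances into simple normalized scores.

```python
# Minimal sketch: rank dictionary candidates for a misspelled word by Levenshtein
# (minimum edit) distance and turn the distances into pseudo-probabilities.
# The word list and error word below are hypothetical examples.

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance (single-row version)."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, start=1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,           # delete a character from a
                        dp[j - 1] + 1,       # insert a character into a
                        prev + (ca != cb))   # substitute (free if characters match)
            prev = cur
    return dp[-1]

dictionary = ["makan", "minum", "main", "mandi"]   # hypothetical correct words
error_word = "maen"

distances = {w: edit_distance(error_word, w) for w in dictionary}
scores = {w: 1 / (1 + d) for w, d in distances.items()}   # closer -> higher score
total = sum(scores.values())
probs = {w: s / total for w, s in scores.items()}
print(distances, probs, max(probs, key=probs.get))
```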


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Daye Diana Choi ◽  
Dae Hee Kim ◽  
Ungsoo Samuel Kim ◽  
Seung-Hee Baek

To investigate the factors for treatment success in anisometropic amblyopia according to the spherical equivalent (SE) type of amblyopic eyes. Medical records of 397 children with anisometropic amblyopia aged 3 to 12 years who presented at a secondary referral eye hospital during 2010–2016 were retrospectively reviewed. Anisometropia was defined as ≥ 1 diopter (D) difference in SE, or ≥ 1.5 D difference in cylindrical error between the eyes. According to the SE of the amblyopic eyes, patients were categorized into hyperopia (SE ≥ 1 D), emmetropia (−1 D < SE < +1 D) and myopia (SE ≤ −1 D) groups. Treatment success was defined as achieving an interocular logMAR visual acuity difference < 0.2. Multivariate logistic regression was used to analyze the factors for treatment success. Significant factors for amblyopia treatment success in the hyperopia group (n = 270) were younger age [adjusted odds ratio (aOR) (95% confidence interval, CI) = 0.529 (0.353, 0.792)], better best-corrected visual acuity (BCVA) in amblyopic eyes at presentation [aOR (95% CI) 0.004 (0, 0.096)], a longer follow-up period [aOR (95% CI) = 1.098 (1.036, 1.162)], and no previous amblyopia treatment history [aOR (95% CI) 0.059 (0.010, 0.364)]. In the myopia group (n = 68), younger age [aOR (95% CI) 0.440 (0.208, 0.928)] and better BCVA in amblyopic eyes [aOR (95% CI) 0.034 (0.003, 0.469)] were associated with higher odds of treatment success. There was no significant factor for treatment success in the emmetropia group (n = 59) in this population. The refractive error type of amblyopic eyes at presentation affects the factors for treatment success in anisometropic amblyopia.
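For readers less familiar with the bracketed values above, the hedged sketch below shows the standard way an adjusted odds ratio and its 95% confidence interval are derived from a logistic regression coefficient, i.e. aOR = exp(beta) and CI = exp(beta ± 1.96·SE). The coefficient and standard error used here are invented, not taken from the study.

```python
# Illustrative only: converting a hypothetical logistic regression coefficient
# into an adjusted odds ratio (aOR) with a 95% confidence interval.
import math

beta, se = -0.64, 0.21                     # invented coefficient and standard error
aor = math.exp(beta)
ci_low = math.exp(beta - 1.96 * se)
ci_high = math.exp(beta + 1.96 * se)
print(f"aOR = {aor:.3f} (95% CI {ci_low:.3f}, {ci_high:.3f})")
```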


PLoS ONE ◽  
2021 ◽  
Vol 16 (11) ◽  
pp. e0259667
Author(s):  
U. S. H. Gamage ◽  
Tim Adair ◽  
Lene Mikkelsen ◽  
Pasyodun Koralage Buddhika Mahesh ◽  
John Hart ◽  
...  

Background: Correct certification of cause of death by physicians (i.e. completing the medical certificate of cause of death or MCCOD) and correct coding according to International Classification of Diseases (ICD) rules are essential to produce quality mortality statistics to inform health policy. Despite clear guidelines, errors in medical certification are common. This study objectively measures the impact of different medical certification errors upon the selection of the underlying cause of death. Methods: A sample of 1592 error-free MCCODs was selected from the 2017 United States multiple cause of death data. The ten most common types of errors in completing the MCCOD (according to published studies) were individually simulated on the error-free MCCODs. After each simulation, the MCCODs were coded using the Iris automated mortality coding software. Chance-corrected concordance (CCC) was used to measure the impact of certification errors on the underlying cause of death. Weights for each error type and Socio-demographic Index (SDI) group (representing different mortality conditions) were calculated from the CCC and categorised (very high, high, medium and low) to describe their effect on cause of death accuracy. Findings: The only very high impact error type was reporting an ill-defined condition as the underlying cause of death. High impact errors were reporting competing causes in Part 1 [of the death certificate] and illegibility; medium impact errors were reporting the underlying cause in Part 2 [of the death certificate], incorrect or absent time intervals and reporting contributory causes in Part 1; low impact errors comprised multiple causes per line and incorrect sequence. There was only a small difference in error importance between SDI groups. Conclusions: Reporting an ill-defined condition as the underlying cause of death can seriously affect the coding outcome, while other certification errors were mitigated through the correct application of mortality coding rules. Training physicians not to report ill-defined conditions on the MCCOD, and training mortality coders in correct coding practices and in using Iris, should be important components of national strategies to improve cause of death data quality.
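The abstract does not spell out how chance-corrected concordance is computed. One common formulation adjusts the raw fraction of correctly assigned underlying causes for the agreement expected by chance among N candidate causes, CCC = (C − 1/N)/(1 − 1/N). The sketch below uses that formulation with invented numbers; the authors' exact weighting scheme is not given in the abstract.

```python
# Hedged sketch: one common formulation of chance-corrected concordance (CCC),
#     CCC = (C - 1/N) / (1 - 1/N)
# where C is raw concordance and N the number of candidate cause categories.
# The figures below are invented and do not reproduce the study's results.

def chance_corrected_concordance(correct: int, total: int, n_causes: int) -> float:
    c = correct / total                      # raw concordance
    return (c - 1 / n_causes) / (1 - 1 / n_causes)

# e.g. 1200 of 1592 simulated certificates still coded to the correct underlying
# cause, with a hypothetical 50 candidate cause categories:
print(round(chance_corrected_concordance(1200, 1592, 50), 3))
```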


2021 ◽  
Vol 12 (5) ◽  
pp. 1-51
Author(s):  
Yu Wang ◽  
Yuelin Wang ◽  
Kai Dang ◽  
Jie Liu ◽  
Zhuo Liu

Grammatical error correction (GEC) is an important application of natural language processing techniques, and GEC systems are important intelligent systems that have long been explored in both academic and industrial communities. The past decade has witnessed significant progress in GEC owing to the increasing popularity of machine learning and deep learning. However, no survey has yet untangled the large body of research and progress in this field. We present the first survey in GEC for a comprehensive retrospective of the literature in this area. We first give the definition of the GEC task and introduce the public datasets and data annotation schema. After that, we discuss six kinds of basic approaches, six commonly applied performance-boosting techniques for GEC systems, and three data augmentation methods. Since GEC is typically viewed as a sister task of machine translation (MT), we put more emphasis on the statistical machine translation (SMT)-based and neural machine translation (NMT)-based approaches because of their importance. Similarly, some performance-boosting techniques are adapted from MT and have been successfully combined with GEC systems to enhance final performance. More importantly, after introducing evaluation in GEC, we make an in-depth analysis based on empirical results, covering both GEC approaches and GEC systems, to reveal a clearer pattern of progress in GEC, with error type analysis and system recapitulation clearly presented. Finally, we discuss five prospective directions for future GEC research.
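As a hedged illustration of the evaluation and error type analysis mentioned above (not taken from the survey), GEC systems are commonly scored with F0.5 over corrective edits, weighting precision twice as heavily as recall, and the same computation can be broken down per error type. The error type names and counts below are invented.

```python
# Minimal sketch: per-error-type F0.5 from edit-level true/false positives and
# false negatives, the metric family commonly used to score GEC systems.

def f_beta(tp: int, fp: int, fn: int, beta: float = 0.5) -> float:
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Hypothetical per-error-type edit counts: (true positives, false positives, false negatives)
counts = {"ART/DET": (120, 40, 60), "PREP": (80, 50, 70), "SVA": (95, 20, 35)}
for err_type, (tp, fp, fn) in counts.items():
    print(err_type, round(f_beta(tp, fp, fn), 3))
```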


2021 ◽  
Author(s):  
Robert Logan ◽  
Zoe Fleischmann ◽  
Sofia Annis ◽  
Amy Wehe ◽  
Jonathan L. Tilly ◽  
...  

Background: Third-generation sequencing offers some advantages over its next-generation sequencing predecessors, but with the caveat of harboring a much higher error rate. Clustering related sequences is an essential task in modern biology. To accurately cluster sequences rich in errors, error type and frequency need to be accounted for. Levenshtein distance is a well-established mathematical algorithm for measuring the edit distance between words and can specifically weight insertions, deletions and substitutions. However, there are drawbacks to using Levenshtein distance in a biological context, and hence it has rarely been used for this purpose. We present novel modifications to the Levenshtein distance algorithm to optimize it for clustering error-rich biological sequencing data. Results: We successfully introduced a bidirectional frameshift allowance with end-user-determined accommodation caps combined with weighted error discrimination. Furthermore, our modifications dramatically improved the computational speed of Levenshtein distance. For simulated ONT MinION and PacBio Sequel datasets, the average clustering sensitivity for 3GOLD was 41.45% (S.D. 10.39) higher than Sequence-Levenshtein distance, 52.14% (S.D. 9.43) higher than Levenshtein distance, 55.93% (S.D. 8.67) higher than Starcode, 42.68% (S.D. 8.09) higher than CD-HIT-EST and 61.49% (S.D. 7.81) higher than DNACLUST. For biological ONT MinION data, 3GOLD clustering sensitivity was 27.99% higher than Sequence-Levenshtein distance, 52.76% higher than Levenshtein distance, 56.39% higher than Starcode, 48% higher than CD-HIT-EST and 70.4% higher than DNACLUST. Conclusion: Our modifications to Levenshtein distance have improved its speed and accuracy compared to the classic Levenshtein distance, Sequence-Levenshtein distance and other commonly used clustering approaches on simulated and biological third-generation sequencing datasets. Our clustering approach is appropriate for datasets of unknown cluster centroids, such as those generated with unique molecular identifiers, as well as known centroids such as barcoded datasets. A strength of our approach is high accuracy in resolving small clusters and mitigating the number of singletons.
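For context, the sketch below shows a classic weighted Levenshtein distance, the starting point the authors modify; it is not the 3GOLD algorithm (no bidirectional frameshift allowance or accommodation caps), and the costs and sequences are arbitrary examples.

```python
# Minimal sketch of a classic weighted Levenshtein distance with separately
# weighted insertions, deletions and substitutions. NOT the 3GOLD algorithm.

def weighted_levenshtein(a: str, b: str,
                         ins_cost: float = 1.0,
                         del_cost: float = 1.0,
                         sub_cost: float = 1.0) -> float:
    m, n = len(a), len(b)
    dp = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = i * del_cost
    for j in range(1, n + 1):
        dp[0][j] = j * ins_cost
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            dp[i][j] = min(dp[i - 1][j] + del_cost,                       # delete a[i-1]
                           dp[i][j - 1] + ins_cost,                       # insert b[j-1]
                           dp[i - 1][j - 1]
                           + (0.0 if a[i - 1] == b[j - 1] else sub_cost)) # substitute
    return dp[m][n]

# Example: penalise substitutions more than indels; the weights are purely
# illustrative, not an actual error model for any sequencing platform.
print(weighted_levenshtein("ACGTACGT", "ACGACGGT", ins_cost=1.0, del_cost=1.0, sub_cost=1.5))
```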

