error classification
Recently Published Documents

TOTAL DOCUMENTS: 115 (FIVE YEARS: 35)
H-INDEX: 12 (FIVE YEARS: 3)

2022 ◽  
Vol 3 (1) ◽  
pp. 55-65
Author(s):  
Dea Rahmanita Ayuningtyas ◽  
Lailatul Karimah ◽  
Silvi Intan Cahyaningsih ◽  
Chafit Ulya

This study aims to analyze language errors and their interpretation at the levels of syntax, morphology, and the Indonesian Spelling System, and to broaden knowledge of and insight into how to write properly and correctly according to the language rules laid down in the KBBI and PUEBI. The research uses a descriptive qualitative method. The data are words (not numbers) drawn from an article in Larise magazine entitled "Philosophy of Kidungan Jawa 'Ana Kidung Rumeksa ing Wengi'", published on Sunday, October 11, 2020. Data were collected with a note-taking technique, namely by reading Larise magazine as the data source. The analysis is an interactive analysis comprising the steps of a) data collection, b) error identification, c) error explanation, d) error classification, and e) error evaluation. The analysis of errors in writing rules was carried out at the levels of syntax, morphology, and accuracy in the use of Indonesian Spelling (EBI). Errors at the syntactic level concern the use of effective sentences; errors at the morphological level are affixation errors; and spelling errors include the use of punctuation marks, capital letters, standard words, prepositions, and particles.
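The five analysis steps above can be sketched as a small pipeline. This is an illustrative sketch only: the rule names, the category mapping, and the sample sentences are invented, not taken from the study.

```python
# Illustrative sketch of the interactive analysis steps a)-e).
# All rules and example data below are hypothetical.

def collect_data(source):
    """Step a: split the source text into candidate sentences."""
    return [s.strip() for s in source.split(".") if s.strip()]

def identify_errors(sentences, rules):
    """Step b: flag sentences that violate any rule."""
    return [(s, name) for s in sentences
            for name, check in rules.items() if check(s)]

def classify_errors(found):
    """Steps c/d: explain and group flagged items by linguistic level."""
    levels = {"affixation": "morphology", "punctuation": "spelling (EBI)"}
    grouped = {}
    for sentence, rule in found:
        grouped.setdefault(levels.get(rule, "syntax"), []).append(sentence)
    return grouped

# Step e (evaluation) would review each group; here we only collect them.
rules = {"punctuation": lambda s: s != s.rstrip(",")}  # stray trailing comma
sample = "Kalimat pertama. Kalimat kedua,. Kalimat ketiga"
report = classify_errors(identify_errors(collect_data(sample), rules))
```

Each real rule in such a study is a human judgment against KBBI/PUEBI, not a string check; the sketch only shows the data flow between the five steps.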


2021 ◽  
Author(s):  
Fadi Mohammad Alsuhimat ◽  
Fatma Susilawati Mohamad

The signature process is one of the most significant processes organizations use to preserve the security of information and protect it from unwanted penetration or access. As organizations and individuals move into the digital environment, there is an essential need for a computerized system able to distinguish between genuine and forged signatures in order to protect people's authorization and decide what permissions they have. In this paper, we used a pre-trained CNN to extract features from genuine and forged signatures, and compared three widely used classification algorithms: SVM (Support Vector Machine), NB (Naive Bayes), and KNN (k-nearest neighbors). For each classifier we calculated the run time, classification error, classification loss, and accuracy on a test set consisting of genuine and forged signature images. The three classifiers were applied to the UTSig dataset, and the same measures were computed in the verification phase. The results showed that SVM and KNN achieved the best accuracy (76.21), while SVM achieved the best run time (0.13); SVM therefore obtained the best overall result among the classifiers on our measures.
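The per-classifier measures named above (accuracy, classification error, run time) can be sketched in a few lines. The decision functions and test data here are hypothetical stand-ins, not the paper's CNN features or the UTSig images.

```python
import time

def evaluate(predict, X_test, y_test):
    """Return (accuracy, classification error, run time) for one classifier."""
    start = time.perf_counter()
    preds = [predict(x) for x in X_test]
    elapsed = time.perf_counter() - start
    correct = sum(p == y for p, y in zip(preds, y_test))
    acc = correct / len(y_test)
    return acc, 1.0 - acc, elapsed

# Hypothetical stand-ins for SVM/NB/KNN decision functions over CNN
# features (reduced here to a single score per signature image).
classifiers = {
    "SVM": lambda x: "genuine" if x > 0.5 else "forged",
    "NB":  lambda x: "genuine" if x > 0.6 else "forged",
    "KNN": lambda x: "genuine" if x > 0.5 else "forged",
}
X_test = [0.9, 0.55, 0.3, 0.7]
y_test = ["genuine", "forged", "forged", "genuine"]
results = {name: evaluate(fn, X_test, y_test) for name, fn in classifiers.items()}
```

In practice the three classifiers would be fitted models (e.g. from a machine-learning library) rather than thresholds; the point is only how the three reported measures are derived from a labelled test set.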


K@ta Kita ◽  
2021 ◽  
Vol 9 (2) ◽  
pp. 180-186
Author(s):  
Ellara Yusea Ananda ◽  
Henny Putri Saking Wijaya

This study was conducted to find out the types of lexical errors that high- and low-proficiency learners produced in their writing, and to find the differences and similarities between the two groups' production. To answer the research questions, the writers used James's (2013) theory of lexical error classification. This was a qualitative study. The data sources were the lexical errors in 32 writing drafts. The writers divided the students into two groups: eight students of high proficiency and eight of low proficiency. The findings showed that the high-proficiency learners produced five types of formal errors and four types of semantic errors, while the low-proficiency learners produced two types of formal errors and six types of semantic errors. In conclusion, both groups' lexical errors stem from the learners' lack of knowledge of sense relations and collocation, and from their production of wrong near-synonyms.
Keywords: lexical errors, high proficiency learners, low proficiency learners, error analysis
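The group comparison above boils down to tallying error types per proficiency group under James's formal/semantic split. A minimal sketch, with invented error labels standing in for the actual annotated drafts:

```python
from collections import Counter

# Hypothetical per-draft error annotations; the formal/semantic split
# follows James's taxonomy, but the subtypes and counts are invented.
errors = {
    "high": [("formal", "misselection"), ("formal", "misformation"),
             ("semantic", "collocation"), ("semantic", "near-synonym")],
    "low":  [("formal", "misselection"), ("semantic", "collocation"),
             ("semantic", "near-synonym"), ("semantic", "sense relation")],
}

def category_counts(group):
    """Tally errors under each major category for one group."""
    tally = Counter()
    for major, _subtype in errors[group]:
        tally[major] += 1
    return tally

summary = {g: dict(category_counts(g)) for g in errors}
```

Comparing the two resulting tallies side by side is what surfaces the pattern reported in the abstract (more formal errors in the high group, more semantic errors in the low group).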


2021 ◽  
Author(s):  
Zheng Li ◽  
Fuxiang Sun ◽  
Haifeng Wang ◽  
Yifan Ding ◽  
Yong Liu ◽  
...  

2021 ◽  
Author(s):  
Mark O'Rahelly ◽  
Michael McDermott ◽  
Martina Healy

Abstract Objective: 1) To review ante- and post-mortem diagnoses and assign a Goldman error classification. 2) To establish autopsy rates. Design: A retrospective analysis of autopsies performed on patients who died in the paediatric intensive care unit (PICU) between November 13th 2012 and October 31st 2018. We reviewed the medical and autopsy data of all patients and assigned a Goldman classification of discrepancy between ante- and post-mortem diagnoses. Setting: Tertiary PICU. Patients: All patients who died in the PICU within the designated timeframe. Interventions: Goldman error classification assignment. Measurements and main results: 396 deaths occurred in the PICU out of 8,329 admissions (4.75%). Of these, 99 (25%) had an autopsy, 75 required by the coroner; all were included in the study. Fifty-three were male and 46 female. Fifty-three patients were transfers from external hospitals and 46 from our centre. Forty-one were neonates, 32 were <1 year of age, and 26 were >1 year of age. Median length of stay was 3 days. Eighteen were post cardiac surgery and three post cardiac catheter procedure. Major diagnostic errors (Class I/II) were identified in 14 (14.1%) cases: 2 (2%) Class I and 12 (12.1%) Class II. Class III and IV errors occurred in 28 (28.2%) patients. Complete concordance (Class V) occurred in 57 (57.5%) cases. Conclusion: The autopsy rate and the diagnostic discrepancy rate within our PICU are comparable to those previously reported. Our findings show the continuing value of autopsy in determining cause of death and providing greater diagnostic clarity. Given their value, post-mortem examinations, where indicated, should be considered part of a physician's duty of care to families and future patients.
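The percentages quoted in the abstract follow directly from its counts, and the arithmetic is easy to check (all numbers below are taken from the abstract itself):

```python
# Counts from the abstract: admissions, deaths, autopsied cases, and
# the Goldman class distribution among the 99 autopsied cases.
admissions, deaths, autopsies = 8329, 396, 99
goldman = {"I": 2, "II": 12, "III+IV": 28, "V": 57}

death_rate = deaths / admissions    # deaths as a share of admissions
autopsy_rate = autopsies / deaths   # autopsies as a share of deaths
shares = {k: v / autopsies for k, v in goldman.items()}
major_rate = (goldman["I"] + goldman["II"]) / autopsies  # Class I/II
```

Note that 2 + 12 + 28 + 57 = 99, so the Goldman classes partition the autopsied cases exactly.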


Author(s):  
Irene Rivera-Trigueros

Abstract: Nowadays, in the globalised context in which we find ourselves, language barriers can still be an obstacle to accessing information. On occasion, it is impossible to satisfy the demand for translation by relying only on human translators; therefore, tools such as Machine Translation (MT) are gaining popularity for their potential to overcome this problem. Consequently, research in this field is constantly growing and new MT paradigms are emerging. In this paper, a systematic literature review was carried out to identify which MT systems are currently most employed, their architecture, the quality assessment procedures applied to determine how well they work, and which of these systems offer the best results. The study focuses on the specialised literature produced by translation experts, linguists, and specialists in related fields involving the English–Spanish language combination. The findings show that neural MT is the predominant paradigm in the current MT scenario, with Google Translator being the most used system. Moreover, most of the analysed works used only one type of evaluation, either automatic or human, to assess machine translation, and only 22% of the works combined the two. However, more than half of the works included error classification and analysis, an essential aspect for identifying flaws and improving the performance of MT systems.
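To make the "automatic evaluation" side of the review concrete, here is a toy word-overlap metric. It is a drastically simplified stand-in for the metrics actually used in such studies (BLEU, TER, and similar), and the candidate/reference sentences are invented:

```python
from collections import Counter

def unigram_precision(candidate, reference):
    """Clipped unigram precision: the fraction of candidate words that
    also appear in the reference (a bare-bones sketch of BLEU-1)."""
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    overlap = sum(min(n, ref[w]) for w, n in cand.items())
    return overlap / sum(cand.values())

# Hypothetical MT output scored against a human reference translation.
score = unigram_precision("a cat sat on mat", "the cat sat on the mat")
```

Human evaluation, by contrast, asks raters to judge adequacy and fluency directly, which is why the review treats combining both types as the more complete methodology.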


2021 ◽  
pp. 175319342110029
Author(s):  
Peter Cay ◽  
Brook Leung ◽  
Keegan Curlewis ◽  
Andrew Stone ◽  
Tom Roper ◽  
...  

Quotation error is an inaccuracy in the assertions made by authors when referencing another’s work. This study aimed to assess the quotation errors in articles referencing the Distal Radius Acute Fracture Fixation Trial (DRAFFT). A literature search was performed to identify all citations of DRAFFT from 2014 to 2020. The relevant publications were assessed by two reviewers using a validated framework of error classification. There were 83 articles containing references to DRAFFT. There was substantial agreement between the two reviewers (Kappa coefficient 0.66). We found 22/83 (28%) of articles contained an error, with one article containing two errors. There were 12 major errors, which were not substantiated by, were unrelated to or contradicted the findings of DRAFFT, and 11 minor errors, including numerical inaccuracies, oversimplification or generalization. This study highlights that a significant number of articles inaccurately quote DRAFFT. Authors and journals should consider checking the accuracy of key referenced statements.
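Inter-reviewer agreement figures like the Kappa coefficient of 0.66 above come from comparing the two reviewers' error/no-error decisions over all articles. A minimal sketch of Cohen's kappa from a 2x2 agreement table, with hypothetical counts (not the study's data):

```python
def cohens_kappa(table):
    """Cohen's kappa for two raters from a 2x2 agreement table, where
    table[i][j] counts articles rated i by reviewer A and j by reviewer B."""
    total = sum(sum(row) for row in table)
    po = sum(table[i][i] for i in range(2)) / total  # observed agreement
    pe = sum((sum(table[i]) / total) * (sum(row[i] for row in table) / total)
             for i in range(2))                      # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical decisions over 83 articles: [ [both "error", A-only],
#                                            [B-only, both "no error"] ]
kappa = cohens_kappa([[20, 4], [4, 55]])
```

Values between 0.61 and 0.80 are conventionally read as "substantial" agreement, which is how the abstract characterises its 0.66.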


Work ◽  
2021 ◽  
pp. 1-10
Author(s):  
Hai Tao ◽  
Md Arafatur Rahman ◽  
Wang Jing ◽  
Yafeng Li ◽  
Jing Li ◽  
...  

BACKGROUND: Human-Robot Interaction (HRI) is becoming a current research field for providing granular real-time applications and services through physical observation. Robotic systems are designed to handle the roles of humans and assist them through intrinsic sensing and commutative interactions. These systems handle inputs from multiple sources, process them, and deliver reliable responses to the users without delay. Input analysis and processing is the prime concern for the robotic systems to understand and resolve the queries of the users. OBJECTIVES: In this manuscript, the Interaction Modeling and Classification Scheme (IMCS) is introduced to improve the accuracy of HRI. This scheme consists of two phases, namely error classification and input mapping. In the error classification process, the input is analyzed for its events and conditional discrepancies to assign appropriate responses in the input mapping phase. The joint process is aided by a linear learning model to analyze the different conditions in the event and input detection. RESULTS: The performance of the proposed scheme shows that it is capable of improving the interaction accuracy by reducing the ratio of errors and interaction response by leveraging the information extraction from the discrete and successive human inputs. CONCLUSION: The fetched data are analyzed by classifying the errors at the initial stage to achieve reliable responses.
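The two IMCS phases described above (error classification, then input mapping) can be pictured as a simple dispatch pipeline. Everything below is a hypothetical illustration of that structure only: the discrepancy categories, the input format, and the responses are invented, not taken from the paper.

```python
# Hypothetical two-phase pipeline: classify a discrepancy in the user's
# input (phase 1), then map the classification to a response (phase 2).

def classify_error(event):
    """Phase 1: label the discrepancy in the input event, if any."""
    if event.get("value") is None:
        return "missing_value"
    if event["value"] < 0:
        return "conditional_discrepancy"
    return "ok"

RESPONSES = {
    "ok": "execute command",
    "missing_value": "ask user to repeat input",
    "conditional_discrepancy": "request clarification",
}

def map_input(event):
    """Phase 2: choose a response based on the classified error."""
    return RESPONSES[classify_error(event)]

reply = map_input({"value": -3})
```

In the scheme itself a linear learning model, rather than fixed rules, analyses the event and input conditions; the sketch only shows how classifying errors first lets the mapping phase return a reliable response.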

