Automatic Short Answer Grading System in Indonesian Language Using BERT Machine Learning

2021 ◽  
Vol 35 (6) ◽  
pp. 503-509
Author(s):  
Marvin Chandra Wijaya

A system capable of automatically grading short answers is a very useful tool. Such a system can be created using machine learning algorithms. In this study, a machine learning system using BERT is proposed. BERT is an open-source model that is pretrained on English by default, so applying it to languages other than English is a challenge. This study proposes a novel system that adapts BERT to the Indonesian language for the automatic grading of short answers. The experimental results were measured using two instruments: Cohen's Kappa coefficient and the confusion matrix. The output of the implemented system achieves a Cohen's Kappa coefficient of 0.75, a precision of 0.94, a recall of 0.96, a specificity of 0.76, and an F1 score of 0.95. Based on these measurements, the implementation of the automatic short answer grading system in the Indonesian language using BERT machine learning can be considered successful.
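As a rough illustration of the grading pipeline described above, the sketch below scores Indonesian reference/student answer pairs with a multilingual BERT checkpoint and reports Cohen's Kappa and a confusion matrix. The checkpoint name, the pairwise input format, and the binary correct/incorrect labels are assumptions for illustration; the paper does not specify them, and in practice the classification head would first be fine-tuned on graded answers.

```python
# Minimal sketch, assuming a multilingual BERT checkpoint and binary labels;
# not the authors' exact setup. In practice the classification head would be
# fine-tuned on graded Indonesian answers before evaluation.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from sklearn.metrics import cohen_kappa_score, confusion_matrix

MODEL_NAME = "bert-base-multilingual-cased"  # assumed checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Toy (reference answer, student answer, label) triples; 1 = correct, 0 = incorrect.
pairs = [
    ("Fotosintesis menghasilkan oksigen", "Oksigen dihasilkan oleh fotosintesis", 1),
    ("Fotosintesis menghasilkan oksigen", "Fotosintesis menghasilkan karbon dioksida", 0),
]

enc = tokenizer([p[0] for p in pairs], [p[1] for p in pairs],
                truncation=True, padding=True, return_tensors="pt")
labels = torch.tensor([p[2] for p in pairs])

with torch.no_grad():
    preds = model(**enc).logits.argmax(dim=-1)

print("Cohen's Kappa:", cohen_kappa_score(labels.numpy(), preds.numpy()))
print("Confusion matrix:\n", confusion_matrix(labels.numpy(), preds.numpy()))
```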

2021 ◽  
Vol 10 (2) ◽  
pp. 58
Author(s):  
Muhammad Fawad Akbar Khan ◽  
Khan Muhammad ◽  
Shahid Bashir ◽  
Shahab Ud Din ◽  
Muhammad Hanif

Low-resolution Geological Survey of Pakistan (GSP) maps surrounding the region of interest show oolitic limestone occurrences in the Samanasuk formation and fossiliferous limestone occurrences in the Lockhart and Margalla hill formations in the Hazara division, Pakistan. Machine-learning algorithms (MLAs) have rarely been applied to multispectral remote sensing data for differentiating between limestone formations formed in different depositional environments, such as oolitic or fossiliferous. Unlike previous studies that mostly report lithological classification of rock types having different chemical compositions by MLAs, this paper aimed to investigate the potential of MLAs for mapping subclasses within the same lithology, i.e., limestone. Additionally, the selection of appropriate data labels, training algorithms, hyperparameters, and remote sensing data sources was also investigated while applying these MLAs. In this paper, first, the oolitic (Samanasuk) and fossiliferous (Lockhart and Margalla) limestone-bearing formations, along with the adjoining Hazara formation, were mapped using random forest (RF), support vector machine (SVM), classification and regression tree (CART), and naïve Bayes (NB) MLAs. The RF algorithm reported the best accuracy of 83.28% and a Kappa coefficient of 0.78. To further improve the targeted allochemical limestone formation map, annotation labels were generated by fusing maps obtained from principal component analysis (PCA), decorrelation stretching (DS), and X-means clustering applied to ASTER-L1T, Landsat-8, and Sentinel-2 datasets. These labels were used to train and validate SVM, CART, NB, and RF MLAs to obtain a binary classification map of limestone occurrences in the Hazara division, Pakistan, using the Google Earth Engine (GEE) platform. The classification of Landsat-8 data by CART reported 99.63% accuracy, with a Kappa coefficient of 0.99, and was in good agreement with the field validation. This binary limestone map was further classified into oolitic (Samanasuk) and fossiliferous (Lockhart and Margalla) formations by all four MLAs; in this case, RF surpassed all the other algorithms with an improved accuracy of 96.36%. This improvement can be attributed to the better annotation, which produced a binary limestone classification map that served as a mask for improved classification of oolitic and fossiliferous limestone in the area.
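A minimal Google Earth Engine sketch of the binary limestone mapping step is given below, assuming a hypothetical label asset, a placeholder study extent, and a generic Landsat-8 band subset; none of these values are from the study. It trains a CART classifier and reports overall accuracy and the Kappa coefficient from an error matrix, mirroring the metrics used in the abstract.

```python
# Hedged sketch of a GEE classification workflow: CART on Landsat-8 bands,
# evaluated with overall accuracy and Kappa. Region, label asset, band subset,
# and date range are placeholders, not values from the study.
import ee

ee.Initialize()

region = ee.Geometry.Rectangle([72.8, 33.7, 73.5, 34.3])           # placeholder extent
labels = ee.FeatureCollection("users/example/limestone_labels")    # hypothetical asset with a 'class' property

composite = (ee.ImageCollection("LANDSAT/LC08/C02/T1_TOA")
             .filterBounds(region)
             .filterDate("2019-01-01", "2019-12-31")
             .median()
             .select(["B2", "B3", "B4", "B5", "B6", "B7"]))

samples = composite.sampleRegions(collection=labels, properties=["class"], scale=30)
samples = samples.randomColumn("rand")
train = samples.filter(ee.Filter.lt("rand", 0.8))
test = samples.filter(ee.Filter.gte("rand", 0.8))

cart = ee.Classifier.smileCart().train(train, "class", composite.bandNames())
classified = composite.classify(cart)

matrix = test.classify(cart).errorMatrix("class", "classification")
print("Accuracy:", matrix.accuracy().getInfo())
print("Kappa:", matrix.kappa().getInfo())
```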


2020 ◽  
Author(s):  
Eunjeong Park ◽  
Kijeong Lee ◽  
Taehwa Han ◽  
Hyo Suk Nam

BACKGROUND Subtle abnormal motor signs are indications of serious neurological diseases. Although neurological deficits require fast initiation of treatment within a restricted time, it is difficult for nonspecialists to detect and objectively assess the symptoms. In the clinical environment, diagnoses and decisions are based on clinical grading methods, including the National Institutes of Health Stroke Scale (NIHSS) score or the Medical Research Council (MRC) score, which have been used to measure motor weakness. Objective grading in various environments is needed for consistent agreement among patients, caregivers, paramedics, and medical staff to facilitate rapid diagnoses and dispatches to appropriate medical centers. OBJECTIVE In this study, we aimed to develop an autonomous grading system for stroke patients. We investigated the feasibility of our new system to assess motor weakness and grade NIHSS and MRC scores of the 4 limbs, similar to the clinical examinations performed by medical staff. METHODS We implemented an automatic grading system composed of a measuring unit with wearable sensors and a grading unit with optimized machine learning. Inertial sensors were attached to measure subtle weaknesses caused by paralysis of the upper and lower limbs. We collected 60 instances of data with kinematic features of motor disorders from neurological examinations and demographic information of stroke patients with NIHSS 0 or 1 and MRC 7, 8, or 9 grades in a stroke unit. Training data with 240 instances were generated using the synthetic minority oversampling technique (SMOTE) to compensate for the class imbalance and the low number of training instances. We trained 2 representative machine learning algorithms, an ensemble and a support vector machine (SVM), to implement auto-NIHSS and auto-MRC grading. The algorithms were optimized with 5-fold cross-validation, and hyperparameters were searched by Bayesian optimization over 30 trials. The trained models were tested with the 60 original hold-out instances for performance evaluation in terms of accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). RESULTS The proposed system can grade NIHSS scores with an accuracy of 83.3% and an AUC of 0.912 using an optimized ensemble algorithm, and with an accuracy of 80.0% and an AUC of 0.860 using an optimized SVM algorithm. The auto-MRC grading achieved an accuracy of 76.7% and a mean AUC of 0.870 in SVM classification and an accuracy of 78.3% and a mean AUC of 0.877 in ensemble classification. CONCLUSIONS The automatic grading system quantifies proximal weakness in real time and assesses symptoms through automatic grading. The pilot outcomes demonstrated the feasibility of remote monitoring of motor weakness caused by stroke. The system can facilitate consistent grading with instant assessment and expedite dispatches to appropriate hospitals and treatment initiation by sharing auto-MRC and auto-NIHSS scores between prehospital and hospital responses as an objective observation.
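A simplified sketch of the grading unit's training loop is shown below, assuming synthetic kinematic features in place of the wearable-sensor data: SMOTE oversampling, an SVM classifier, 5-fold cross-validation, and a 30-trial Bayesian hyperparameter search via scikit-optimize. Feature dimensions, label proportions, and search ranges are illustrative, not the study's.

```python
# Hedged sketch: SMOTE oversampling, SVM, Bayesian hyperparameter search over
# 30 trials with 5-fold cross-validation. Synthetic toy data stand in for the
# 60 patient instances; nothing here reproduces the study's actual features.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.svm import SVC
from skopt import BayesSearchCV  # scikit-optimize

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 10))                 # 60 instances, 10 kinematic features (toy)
y = np.array([0] * 45 + [1] * 15)             # imbalanced NIHSS-like labels (toy)

X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)   # oversample the minority class

search = BayesSearchCV(
    SVC(probability=True),
    {"C": (1e-2, 1e2, "log-uniform"), "gamma": (1e-4, 1e0, "log-uniform")},
    n_iter=30, cv=5, scoring="roc_auc", random_state=0)
search.fit(X_res, y_res)

print("Best CV AUC:", search.best_score_)
print("Best hyperparameters:", search.best_params_)
```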


Author(s):  
Saugata Bose ◽  
Ritambhra Korpal

In this chapter, an initiative is proposed in which natural language processing (NLP) techniques and supervised machine learning algorithms are combined to detect external plagiarism. The major emphasis is on constructing a framework to detect plagiarism in monolingual texts by implementing an n-gram frequency comparison approach. The framework is based on 120 characteristics extracted during pre-processing using simple NLP approaches. Afterward, filter metrics are applied to select the most relevant features, and a supervised classification algorithm is then used to classify the documents into four levels of plagiarism. A confusion matrix is built to estimate the false positives and false negatives. Finally, the authors show the suitability of a C4.5 decision tree-based classifier over naïve Bayes in terms of accuracy. The framework achieved 89% accuracy with low false positive and false negative rates, and it shows higher precision and recall values compared to the passage similarity, sentence similarity, and search space reduction methods.
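The sketch below illustrates the general n-gram frequency approach with a filter-based feature selection step and a decision tree classifier. sklearn's DecisionTreeClassifier (a CART implementation) stands in for C4.5 here, and the tiny corpus, n-gram range, and four plagiarism levels are purely illustrative.

```python
# Hedged sketch of the n-gram frequency approach: character n-gram features,
# a chi-squared filter metric for feature selection, and a decision tree.
# CART stands in for C4.5; the corpus and labels are toy data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline
from sklearn.metrics import confusion_matrix

docs = ["the quick brown fox jumps over the lazy dog",
        "the quick brown fox leaps over the lazy dog",
        "a completely unrelated sentence about weather",
        "the quick brown fox jumps over the lazy dog today"]
levels = [3, 2, 0, 3]   # 0 = no plagiarism ... 3 = heavy plagiarism (illustrative)

pipeline = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(3, 5)),  # n-gram frequency features
    SelectKBest(chi2, k=20),                               # filter metric for feature selection
    DecisionTreeClassifier(random_state=0))

pipeline.fit(docs, levels)
preds = pipeline.predict(docs)
print(confusion_matrix(levels, preds))
```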


2020 ◽  
Vol 12 (15) ◽  
pp. 5972
Author(s):  
Nicholas Fiorentini ◽  
Massimo Losa

Screening procedures in road blackspot detection are essential tools that allow road authorities to quickly gather insights on the safety level of each road site they manage. This paper suggests a road blackspot screening procedure for two-lane rural roads, relying on five different machine learning algorithms (MLAs) and real long-term traffic data. The network analyzed is the one managed by the Tuscany Region Road Administration, mainly composed of two-lane rural roads. A total of 995 road sites where at least one accident occurred in 2012–2016 have been labeled as “Accident Case”. Accordingly, an equal number of sites where no accident occurred in the same period have been randomly selected and labeled as “Non-Accident Case”. Five different MLAs, namely Logistic Regression, Classification and Regression Tree, Random Forest, K-Nearest Neighbor, and Naïve Bayes, have been trained and validated. The output response of the MLAs, i.e., crash occurrence susceptibility, is a binary categorical variable. Therefore, such algorithms aim to classify a road site as potentially susceptible to an accident occurrence (“Accident Case”) or likely safe (“Non-Accident Case”) over five years. Finally, the algorithms have been compared using a set of performance metrics, including precision, recall, F1-score, overall accuracy, the confusion matrix, and the area under the receiver operating characteristic curve. Outcomes show that the Random Forest outperforms the other MLAs with an overall accuracy of 73.53%. Furthermore, none of the MLAs show overfitting issues. Road authorities could consider MLAs to draw up a priority list of on-site inspections and maintenance interventions.
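A minimal comparison of the five MLAs on a balanced binary dataset could look like the sketch below; the synthetic features are placeholders for the long-term traffic data, and the reported metrics are restricted to accuracy and AUC.

```python
# Hedged sketch comparing the five MLAs named above on a balanced binary
# blackspot dataset. Features are synthetic placeholders, not traffic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(42)
X = rng.normal(size=(1990, 6))                     # 995 + 995 sites, 6 toy features
y = np.array([1] * 995 + [0] * 995)                # 1 = "Accident Case", 0 = "Non-Accident Case"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=42)

models = {"Logistic Regression": LogisticRegression(max_iter=1000),
          "CART": DecisionTreeClassifier(random_state=42),
          "Random Forest": RandomForestClassifier(n_estimators=200, random_state=42),
          "K-Nearest Neighbor": KNeighborsClassifier(),
          "Naive Bayes": GaussianNB()}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]
    print(f"{name}: accuracy={accuracy_score(y_te, model.predict(X_te)):.3f}, "
          f"AUC={roc_auc_score(y_te, proba):.3f}")
```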


Author(s):  
RUCHIKA MALHOTRA ◽  
ANKITA JAIN BANSAL

Due to various reasons, such as ever-increasing customer demands, changes in the environment, or the detection of a bug, changes are incorporated into software. This results in multiple versions and the evolving nature of software. Identifying the parts of a software system that are more prone to change than others is an important activity. Identifying change prone classes helps developers take focused and timely preventive actions on classes of the software with similar characteristics in future releases. In this paper, we have studied the relationship between various object oriented (OO) metrics and change proneness. We collected a set of OO metrics and change data for each class that appeared in two versions of an open source system, 'Java TreeView', i.e., version 1.1.6 and version 1.0.3. Besides this, we have also built various models that can be used to identify change prone classes, using machine learning and statistical techniques, and then compared their performance. The results are analyzed using the Area Under the Curve (AUC) obtained from Receiver Operating Characteristic (ROC) analysis. The results show that the models built using both machine learning and statistical methods demonstrate good performance in terms of predicting change prone classes. Based on the results, it is reasonable to claim that quality models have a significant relevance with OO metrics and hence can be used by researchers for early prediction of change prone classes.
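As a rough sketch of such a change-proneness model, the example below fits a logistic regression to a handful of common OO metrics (WMC, CBO, RFC, LCOM, LOC are assumed here, not taken from the paper) and evaluates it with cross-validated AUC from ROC analysis; the data are synthetic.

```python
# Hedged sketch: predict change-prone classes from OO metrics and evaluate
# with ROC/AUC. Metric columns and values are synthetic placeholders, not
# the Java TreeView data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_classes = 200                                                  # classes in the toy system
X = rng.poisson(lam=[10, 5, 20, 8, 150], size=(n_classes, 5)).astype(float)  # WMC, CBO, RFC, LCOM, LOC
score = X @ np.array([0.02, 0.05, 0.01, 0.01, 0.001]) + rng.normal(size=n_classes)
y = (score > 1.0).astype(int)                                    # 1 = change-prone (toy labels)

proba = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                          cv=10, method="predict_proba")[:, 1]
print("AUC:", roc_auc_score(y, proba))
```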


Software maintainability is a vital quality aspect as per ISO standards. It has been a concern for decades and remains a top priority even today. At present, the majority of software applications, particularly open source software, are developed using object-oriented methodologies. In the past, researchers have used statistical techniques on metric data extracted from software to evaluate maintainability. More recently, machine learning models and algorithms have also been used in a majority of research works to predict maintainability. In this research, we performed an empirical case study on the open source software jfreechart by applying machine learning algorithms. The objective was to study the relationships between certain metrics and maintainability.


2017 ◽  
Author(s):  
Udit Arora ◽  
Sohit Verma ◽  
Sarthak Sahni ◽  
Tushar Sharma

Several ball tracking algorithms have been reported in the literature. However, most of them use high-quality video and multiple cameras, and the emphasis has been on coordinating the cameras or visualizing the tracking results. This paper aims to develop a system for assisting the umpire in the sport of cricket in making decisions such as the detection of no-balls, wide-balls, leg before wicket, and bouncers, with the help of a single smartphone camera. It involves the implementation of computer vision algorithms for object detection and motion tracking, as well as the integration of machine learning algorithms to optimize the results. Techniques such as the Histogram of Oriented Gradients (HOG) and the Support Vector Machine (SVM) are used for object classification and recognition. Frame subtraction, minimum enclosing circle, and contour detection algorithms are optimized and used for the detection of the cricket ball. These algorithms are applied using the open-source Python library OpenCV. Machine learning techniques, namely linear and quadratic regression, are used to track and predict the motion of the ball. The system also uses the open-source Python library VPython for the visual representation of the results. The paper describes the design and structure of the approach undertaken in the system for analyzing and visualizing off-air, low-quality cricket videos.
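The sketch below illustrates the detection-and-tracking core described above: frame subtraction, contour detection, a minimum enclosing circle for the ball candidate, and a quadratic regression over the detected centres as the trajectory model. The video path, thresholds, and size filter are placeholders, and the HOG+SVM classification stage is omitted.

```python
# Hedged sketch: frame subtraction, contour detection, minimum enclosing
# circle, and quadratic trajectory regression with OpenCV and NumPy.
# File name and thresholds are placeholders.
import cv2
import numpy as np

cap = cv2.VideoCapture("delivery.mp4")     # hypothetical input clip
prev_gray, centres = None, []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (5, 5), 0)
    if prev_gray is not None:
        diff = cv2.absdiff(gray, prev_gray)                       # frame subtraction
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            largest = max(contours, key=cv2.contourArea)
            (x, y), radius = cv2.minEnclosingCircle(largest)      # ball candidate
            if 2 < radius < 30:                                   # crude size filter
                centres.append((x, y))
    prev_gray = gray
cap.release()

# Quadratic regression y = a*x^2 + b*x + c over detected centres (trajectory model).
if len(centres) >= 3:
    xs, ys = np.array(centres).T
    a, b, c = np.polyfit(xs, ys, deg=2)
    print(f"Fitted trajectory: y = {a:.4f}x^2 + {b:.4f}x + {c:.2f}")
```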


BMJ Open ◽  
2022 ◽  
Vol 12 (1) ◽  
pp. e055525
Author(s):  
Yik-Ki Jacob Wan ◽  
Guilherme Del Fiol ◽  
Mary M McFarland ◽  
Melanie C Wright

Introduction: Early identification of patients who may suffer from unexpected adverse events (eg, sepsis, sudden cardiac arrest) gives bedside staff valuable lead time to care for these patients appropriately. Consequently, many machine learning algorithms have been developed to predict adverse events. However, little research focuses on how these systems are implemented and how system design impacts clinicians' decisions or patient outcomes. This protocol outlines the steps to review the designs of these tools.
Methods and analysis: We will use scoping review methods to explore how tools that leverage machine learning algorithms in predicting adverse events are designed to integrate into clinical practice. We will explore the types of user interfaces deployed, what information is displayed, and how clinical workflows are supported. Electronic sources include Medline, Embase, CINAHL Complete, Cochrane Library (including CENTRAL), and IEEE Xplore from 1 January 2009 to present. We will only review primary research articles that report findings from the implementation of patient deterioration surveillance tools for hospital clinicians. The articles must also include a description of the tool's user interface. Since our primary focus is on how the user interacts with automated tools driven by machine learning algorithms, electronic tools that do not extract data from clinical data documentation or recording systems, such as an EHR or patient monitor, or that otherwise require manual entry, will be excluded. Similarly, tools that do not synthesise information from more than one data variable will also be excluded. This review will be limited to English-language articles. Two reviewers will review the articles and extract the data. Findings from both researchers will be compared to minimise bias. The results will be quantified, synthesised and presented using appropriate formats.
Ethics and dissemination: Ethics review is not required for this scoping review. Findings will be disseminated through peer-reviewed publications.


Ethiopia is the leading producer of chickpea in Africa and among the top ten most important producers of chickpea in the world. The Debre Zeit Agriculture Research Center is a research center in Ethiopia that is mandated with the improvement of chickpea and other crops. Genome-enabled prediction technologies are trying to transform the classification of chickpea types and upgrade the existing identification paradigm. The current state of chickpea type identification in Ethiopia still relies on manual methods. Domain experts try to recognize every chickpea type, and the manner and efficiency of identifying each chickpea type mainly depend on the skills and experience of the experts in the domain area, which frequently causes errors and inaccuracies. Most crop classification and identification research has been done outside Ethiopia; for local and emerging varieties, there is a need to design a classification model that assists chickpea selection mechanisms, and even the accuracy of existing algorithms should be verified and optimized. The main aim of this study is to design a chickpea type classification model using machine learning algorithms. This research work used a total of 8303 records with 8 features, with 80% for training and 20% for testing. Data preprocessing was done to prepare the dataset for the experiments. ANN, SVM, and DT were used to build the models. To evaluate the performance of the models, a confusion matrix with accuracy, recall, and precision was used. The experimental results show that the best-performing algorithm was the decision tree, achieving 97.5% accuracy. Agriculture research centers and companies can benefit from the results of this research work. The chickpea type classification model will be applied at the Debre Zeit Agriculture Research Center in Ethiopia as a base to support experts during the chickpea type identification process. In addition, it enables experts to save time, effort, and cost with the support of the identification model. Moreover, this research can serve as a cornerstone in the area and a reference for future researchers in the domain.
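A minimal sketch of the decision tree experiment, assuming synthetic stand-ins for the 8303-record dataset and its 8 features, is given below; it reproduces the 80/20 split and the confusion-matrix based evaluation named in the abstract.

```python
# Hedged sketch: 80/20 split, decision tree classifier, and confusion-matrix
# metrics. Feature values and class labels are synthetic placeholders for the
# 8303-record chickpea dataset.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, confusion_matrix

rng = np.random.default_rng(7)
n = 8303
X = rng.normal(size=(n, 8))                        # 8 morphological/agronomic features (toy)
y = rng.integers(0, 3, size=n)                     # three illustrative chickpea types

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y, random_state=7)

clf = DecisionTreeClassifier(random_state=7).fit(X_train, y_train)
pred = clf.predict(X_test)

print("Accuracy:", accuracy_score(y_test, pred))
print("Precision (macro):", precision_score(y_test, pred, average="macro"))
print("Recall (macro):", recall_score(y_test, pred, average="macro"))
print(confusion_matrix(y_test, pred))
```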

