Motivation Classification and Grade Prediction for MOOCs Learners

2016 ◽  
Vol 2016 ◽  
pp. 1-7 ◽  
Author(s):  
Bin Xu ◽  
Dan Yang

While MOOCs offer educational data on a new scale, many educators see great potential in this big data, which includes detailed activity records for every learner. Learner behavior, such as whether a learner will drop out of a course, can be predicted from these records. Providing an effective, economical, and scalable method for detecting cheating on tests, such as the use of a surrogate exam-taker, remains a challenging problem. In this paper, we present a grade prediction method that uses student activity features to predict whether a learner would earn a certification if he/she took a test. The method consists of a two-step classification: motivation classification (MC) followed by grade classification (GC). The MC divides all learners into three groups: certification earning, video watching, and course sampling. The GC then predicts whether a certification-earning learner will obtain a certification. Our experiment shows that the proposed method fits the classification model at a fine scale and makes it possible to detect a surrogate exam-taker.
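
A minimal sketch of the two-step MC/GC pipeline described in this abstract, assuming scikit-learn; the feature set, labels, and choice of random forests are placeholders, not the paper's actual features or models.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Step 1: motivation classification (MC) into three groups:
# 0 = course sampling, 1 = video watching, 2 = certification earning.
# Hypothetical activity features: watch time, quiz attempts, forum posts, logins.
X_activity = np.random.rand(1000, 4)          # placeholder activity records
y_motivation = np.random.randint(0, 3, 1000)  # placeholder motivation labels
mc = RandomForestClassifier().fit(X_activity, y_motivation)

# Step 2: grade classification (GC), applied only to learners the MC
# assigns to the certification-earning group.
cert_mask = mc.predict(X_activity) == 2
y_certified = np.random.randint(0, 2, cert_mask.sum())  # placeholder outcomes
gc = RandomForestClassifier().fit(X_activity[cert_mask], y_certified)

# Predicted certification outcome for the certification-earning learners.
print(gc.predict(X_activity[cert_mask])[:10])
```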

2019 ◽  
Vol 11 (1) ◽  
pp. 61
Author(s):  
Klaus Müller

Perhaps the most challenging problem for a panentheistic paradigm in Christian god-talk consists in integrating the trait of personhood into the monistic horizon of this approach. A very helpful way toward this goal seems to be the concept of imagination. Its logic of an "as if" represents a modified variation of Kant's idea of the postulates of reason. Reflections of Jürgen Werbick, Douglas Headley, and Volker Gerhardt substantiate the philosophical and theological capabilities of this solution, which also includes a sensibility for the ontological commitments involved in the panentheistic approach.


Molecules ◽  
2020 ◽  
Vol 26 (1) ◽  
pp. 20
Author(s):  
Reynaldo Villarreal-González ◽  
Antonio J. Acosta-Hoyos ◽  
Jaime A. Garzon-Ochoa ◽  
Nataly J. Galán-Freyle ◽  
Paola Amar-Sepúlveda ◽  
...  

Real-time reverse transcription (RT) PCR is the gold standard for detecting Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), owing to its sensitivity and specificity, and it must meet the demand posed by the rising number of cases. The scarcity of trained molecular biologists for analyzing PCR results makes data verification a challenge. An artificial intelligence (AI) system was designed to ease verification by detecting atypical profiles in PCR curves caused by contamination or artifacts. Four classes of simulated real-time RT-PCR curves were generated, namely positive, early, no, and abnormal amplification. Machine learning (ML) models were generated and tested using small amounts of data from each class. The best model was used to classify the big data obtained by the Virology Laboratory of Simon Bolivar University from real-time RT-PCR curves for SARS-CoV-2, and the model was retrained and implemented in software that correlated patient data with test and AI diagnoses. The best AI strategy was a cascade of binary classification models generated from simulated data: data analyzed by the first model were classified as either positive or negative/abnormal, and the negative/abnormal data were then reevaluated by a second model to differentiate negative from abnormal. For the first model, the data required preanalysis through a combination of preprocessing steps. The early amplification class was eliminated from the models because the number of such cases in the big data was negligible. ML models can thus be created from simulated data using the minimum available information. During analysis, changes or variations can be incorporated by generating further simulated data, avoiding the need to collect large amounts of experimental data covering all possible changes. For diagnosing SARS-CoV-2, this type of AI is critical for optimizing PCR tests because it enables rapid diagnosis and reduces false positives. Our method can also be used for other types of molecular analyses.
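
A hedged sketch of the two-model cascade described above: model 1 separates positive from negative-or-abnormal amplification curves, and model 2 then separates negative from abnormal. The simulated sigmoid/baseline/drift curves and the logistic-regression classifiers are stand-ins; the paper's actual simulation, preprocessing, and model choices are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
cycles = np.arange(1, 41)  # a typical 40-cycle real-time RT-PCR run

def simulate_curve(kind):
    """Return one simulated fluorescence curve (assumed shapes, not the paper's)."""
    if kind == "positive":   # sigmoidal amplification around a random Ct
        ct = rng.uniform(18, 32)
        return 1.0 / (1.0 + np.exp(-(cycles - ct) / 2.0)) + rng.normal(0, 0.02, 40)
    if kind == "negative":   # flat baseline
        return rng.normal(0, 0.02, 40)
    # "abnormal": slow drift mimicking contamination or an artifact
    return rng.normal(0, 0.02, 40) + 0.01 * cycles * rng.uniform(0, 1)

kinds = rng.choice(["positive", "negative", "abnormal"], size=600)
X = np.array([simulate_curve(k) for k in kinds])

# Model 1: positive vs. everything else.
m1 = LogisticRegression(max_iter=1000).fit(X, kinds == "positive")

# Model 2: negative vs. abnormal, trained on the non-positive curves only.
rest = kinds != "positive"
m2 = LogisticRegression(max_iter=1000).fit(X[rest], kinds[rest] == "negative")
```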


2019 ◽  
Vol 2 (2) ◽  
pp. 43
Author(s):  
Lalu Mutawalli ◽  
Mohammad Taufan Asri Zaen ◽  
Wire Bagye

In the era of technological disruption of mass communication, social media has become a reference point for gauging public opinion. Data are produced very rapidly by social media users, as the statuses and comments they post represent the feelings of the audience. At this scale, such data production yields a very large collection of data sets, or big data: data that are very large, complex, and generated so quickly that they are difficult to handle. Big data can be analyzed with data mining methods to extract the knowledge patterns within it. This study analyzes the sentiments of netizens on the social medium Twitter regarding the stabbing of Mr. Wiranto. The sentiment analysis showed that 41% of comments on the event were positive, 29% neutral, and 29% negative. In addition, the data were modeled with a support vector machine algorithm to create a system capable of classifying positive, neutral, and negative connotations. The resulting classification model was then tested using the confusion matrix technique, yielding a precision of 83%, a recall of 80%, and an accuracy of 80%.
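
A minimal sketch of the SVM sentiment classifier and confusion-matrix evaluation described above, assuming TF-IDF features; the toy tweets and labels are placeholders, not the study's Twitter data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.metrics import confusion_matrix, precision_score, recall_score, accuracy_score

# Placeholder corpus with three sentiment classes.
tweets = ["quick recovery sir", "no comment on this", "this is outrageous"] * 20
labels = ["positive", "neutral", "negative"] * 20

# TF-IDF features feeding a linear-kernel support vector machine.
model = make_pipeline(TfidfVectorizer(), SVC(kernel="linear"))
model.fit(tweets, labels)

pred = model.predict(tweets)
print(confusion_matrix(labels, pred))
print("precision:", precision_score(labels, pred, average="macro"))
print("recall:   ", recall_score(labels, pred, average="macro"))
print("accuracy: ", accuracy_score(labels, pred))
```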


2021 ◽  
Vol 2136 (1) ◽  
pp. 012057
Author(s):  
Han Zhou

Abstract In the context of the comprehensive popularization of network technical services and database construction systems, more and more data are used by enterprises and individuals. Existing technology struggles to meet the analysis requirements of the big data era, so in practice new technologies and methods for making reasonable use of big data must continue to be explored. Accordingly, on the basis of a review of current big data technology and the operating status of its systems, this paper designs relevant algorithms according to a big data classification model and verifies the effectiveness of the analysis model's algorithms in practice.


Author(s):  
Se-Hoon Jung ◽  
Jong-Chan Kim

In the generation and analysis of Big Data following the development of various information devices, older data processing and management techniques reveal their hardware and software limitations. The hardware limitations can be overcome by CPU and GPU advancements, but the software limitations depend on the advancement of hardware. This study therefore sets out to address the increasing analysis costs of dense Big Data from a software perspective instead of depending on hardware. An altered k-means algorithm with ideal points is proposed to address the analysis costs of dense Big Data. The proposed algorithm finds an optimal cluster by applying Principal Component Analysis (PCA) to the multi-dimensional structure of dense Big Data and categorizes data using the predicted ideal points as the central points of the initial clusters. Its clustering validity index and F-measure results were compared with those of existing algorithms to assess its quality, and it produced results similar to theirs. It was also compared and assessed against data classification techniques investigated in previous studies, and we found that it improved analysis costs by about 3–6%.
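
A sketch of the altered k-means idea described above, assuming scikit-learn: PCA reduces the dense data, and precomputed "ideal points" seed the initial cluster centers. The paper does not detail here how the ideal points are predicted, so a simple quantile-based guess stands in for that step as an explicit assumption.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

X = np.random.rand(10000, 50)                  # stand-in for dense Big Data
X_reduced = PCA(n_components=5).fit_transform(X)  # compress the multi-dimensional structure

k = 4
# Hypothetical ideal points: evenly spaced quantiles along each principal component.
ideal_points = np.quantile(X_reduced, np.linspace(0.1, 0.9, k), axis=0)

# Seed k-means with the ideal points instead of random initial centers.
km = KMeans(n_clusters=k, init=ideal_points, n_init=1).fit(X_reduced)
print(km.inertia_)
```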


Nowadays, in Bangladesh, the dropout rate at the post-graduation level, i.e., non-completion of the post-graduation degree, is considered a serious problem in the education sector. This work can be used to help identify the individual as well as the institutional factors that may lead to enrollment in or dropout from a post-graduation degree. A real dataset is used to accomplish this work. Here, seven classification algorithms are applied: Naïve Bayes, Multilayer Perceptron, Logistic, Locally Weighted Learning (LWL), Random Forest, Random Tree, and Part. A confusion matrix is calculated for each classification model, and from it seven performance evaluation metrics are computed (accuracy, sensitivity, precision, specificity, F1 score, FPR, and FNR). Each classifier's performance is analyzed and measured using these metrics. The Naïve Bayes, LWL, and Part classifiers perform better than all other classifiers, attaining 86.36% accuracy, while the Random Tree classifier performs worst, achieving 74.24% accuracy. Further analysis of the results based on the performance evaluation metrics shows that the LWL classifier performed best in this context among all the classifiers.
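
A short sketch of the seven evaluation metrics named above, computed from a binary confusion matrix; the cell counts are placeholders, not the study's results.

```python
def evaluate(tp, fp, fn, tn):
    """Seven metrics derived from a binary confusion matrix."""
    accuracy    = (tp + tn) / (tp + fp + fn + tn)
    sensitivity = tp / (tp + fn)          # recall / true positive rate
    precision   = tp / (tp + fp)
    specificity = tn / (tn + fp)
    f1          = 2 * precision * sensitivity / (precision + sensitivity)
    fpr         = fp / (fp + tn)          # false positive rate
    fnr         = fn / (fn + tp)          # false negative rate
    return dict(accuracy=accuracy, sensitivity=sensitivity, precision=precision,
                specificity=specificity, f1=f1, fpr=fpr, fnr=fnr)

# Placeholder counts for illustration only.
print(evaluate(tp=50, fp=8, fn=10, tn=64))
```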


2019 ◽  
Author(s):  
Mark Rademaker ◽  
Laurens Hogeweg ◽  
Rutger Vos

Abstract Knowledge of global biodiversity remains limited by geographic and taxonomic sampling biases. The scarcity of species data restricts our understanding of the underlying environmental factors shaping distributions and the ability to draw comparisons among species. Species distribution models (SDMs) were developed in the early 2000s to address this issue. Although SDMs based on single-layered neural networks were experimented with in the past, these performed poorly. However, the past two decades have seen a strong increase in the use of Deep Learning (DL) approaches, such as Deep Neural Networks (DNNs). Despite the large improvement in predictive capacity DNNs provide over shallow networks, to our knowledge they have not yet been applied to SDM. The aim of this research was to provide a proof of concept of a DL-SDM. We used a pre-existing dataset of the world's ungulates and abiotic environmental predictors that had recently been used in MaxEnt SDM, to allow for a direct comparison of performance between the two methods. Our DL-SDM consisted of a binary classification DNN containing 4 hidden layers and drop-out regularization between each layer. Performance of the DL-SDM was similar to MaxEnt for species with relatively large sample sizes and worse for species with relatively low sample sizes. Increasing the number of occurrences further improved DL-SDM performance for species that already had relatively high sample sizes. We then tried to further improve performance by altering the sampling procedure for negative instances and increasing the number of environmental predictors, including species interactions. This led to a large increase in model performance across the range of sample sizes in the species datasets. We conclude that DL-SDMs provide a suitable alternative to traditional SDMs such as MaxEnt, with the advantage of being able both to directly include species interactions and to handle correlated input features. Further improvements to the model would include increasing its scalability by turning it into a multi-class model, as well as developing a more user-friendly DL-SDM Python package.
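
A minimal sketch of the DL-SDM architecture described above, assuming Keras: a binary-classification DNN with 4 hidden layers and dropout regularization between layers. The layer widths, dropout rate, and number of predictors are assumptions, not the paper's reported values.

```python
from tensorflow import keras

n_predictors = 20  # number of abiotic environmental predictors (assumed)

model = keras.Sequential([
    keras.layers.Input(shape=(n_predictors,)),
    keras.layers.Dense(128, activation="relu"),   # hidden layer 1
    keras.layers.Dropout(0.3),
    keras.layers.Dense(64, activation="relu"),    # hidden layer 2
    keras.layers.Dropout(0.3),
    keras.layers.Dense(32, activation="relu"),    # hidden layer 3
    keras.layers.Dropout(0.3),
    keras.layers.Dense(16, activation="relu"),    # hidden layer 4
    keras.layers.Dropout(0.3),
    keras.layers.Dense(1, activation="sigmoid"),  # presence/absence probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```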

