A Hybrid Teaching Mode Based on Machine Learning Algorithm

2020 ◽  
Vol 6 (1) ◽  
pp. 22-28
Author(s):  
Jinjin Liang ◽  
Yong Nie

Background: The hybrid teaching mode is a new trend in the education-informatization environment, combining the advantages of educators' supervision offline and learners' self-regulated learning online. Capturing learners' behavior data has become easy in both the traditional classroom and the online platform. Methods: If machine learning algorithms are applied to mine the valuable information underlying those behavior data, they can provide scientific evidence that supports educators in making wise decisions and designing effective teaching processes. Results: This paper proposes a hybrid teaching mode utilizing machine learning algorithms: clustering analysis is used to analyze learners' characteristics, and a support vector machine is introduced to predict future learning performance. The hybrid mode then tailors the offline teaching process to the predicted results. Conclusion: Simulation results on about 356 students' data for one specific course in a certain semester demonstrate that the proposed hybrid teaching mode performs very well, analyzing and predicting learners' performance with high accuracy.
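The clustering step described above can be illustrated with a minimal k-means sketch that groups learners by simple behavior features. The features (hours online, quiz average), cluster count, and initial centroids below are illustrative assumptions, not taken from the paper.

```python
# Minimal k-means sketch for grouping learners by behavior features.
# Feature values and initial centroids are illustrative assumptions.

def kmeans(points, centroids, iters=10):
    """Plain k-means with fixed initial centroids for reproducibility."""
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in centroids]
        for p in points:
            idx = min(range(len(centroids)),
                      key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
            clusters[idx].append(p)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [
            tuple(sum(col) / len(col) for col in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# (hours online per week, average quiz score) for six learners
learners = [(2, 55), (3, 60), (2.5, 58), (10, 85), (12, 90), (11, 88)]
centroids, clusters = kmeans(learners, centroids=[(2, 55), (12, 90)])
print(len(clusters[0]), len(clusters[1]))  # low- vs. high-engagement group sizes
```

A second model (such as the paper's SVM) would then be trained per cluster or on cluster labels to predict future performance.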

2020 ◽  
Vol 12 (11) ◽  
pp. 1838 ◽  
Author(s):  
Zhao Zhang ◽  
Paulo Flores ◽  
C. Igathinathane ◽  
Dayakar L. Naik ◽  
Ravi Kiran ◽  
...  

The current mainstream approach of using manual measurements and visual inspections for crop lodging detection is inefficient, time-consuming, and subjective. An innovative method for wheat lodging detection that can overcome or alleviate these shortcomings would be welcome. This study proposed a systematic approach for wheat lodging detection in research plots (372 experimental plots), consisting of unmanned aerial systems (UAS) for aerial imagery acquisition, manual field evaluation, and machine learning algorithms to detect whether lodging occurred. UAS imagery was collected on three different dates (23 and 30 July 2019, and 8 August 2019) after lodging occurred. Traditional machine learning and deep learning were evaluated and compared in terms of classification accuracy and standard deviation. For traditional machine learning, five types of features (i.e., gray level co-occurrence matrix, local binary pattern, Gabor, intensity, and Hu moments) were extracted and fed into three traditional machine learning algorithms (i.e., random forest (RF), neural network, and support vector machine) for detecting lodged plots. For the dataset on each imagery collection date, the accuracies of the three algorithms were not significantly different from each other. For each of the three algorithms, accuracies on the first and last date datasets had the lowest and highest values, respectively. Incorporating standard deviation as a measure of performance robustness, RF was determined to be the most satisfactory. Regarding deep learning, three different convolutional neural networks (a simple convolutional neural network, VGG-16, and GoogLeNet) were tested. For each single-date dataset, GoogLeNet consistently outperformed the other two methods.
Further comparisons between RF and GoogLeNet demonstrated that the detection accuracies of the two methods were not significantly different from each other (p > 0.05); hence, choosing either of the two would not affect the final detection accuracy. However, considering that the average accuracy of GoogLeNet (93%) was higher than that of RF (91%), GoogLeNet is recommended for wheat lodging detection. This research demonstrated that UAS RGB imagery, coupled with the GoogLeNet machine learning algorithm, can be a novel, reliable, objective, simple, low-cost, and effective (accuracy > 90%) tool for wheat lodging detection.
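The model-selection rule described above can be sketched as a simple ranking: prefer the classifier with the highest mean accuracy across the three collection dates, breaking near-ties with the standard deviation as a robustness measure. The accuracy values below are illustrative, not the study's actual numbers.

```python
# Rank classifiers by mean accuracy, then by standard deviation across
# collection dates (lower std = more robust). Values are illustrative.
from statistics import mean, stdev

accuracies = {                      # accuracy per imagery collection date
    "random_forest":  [0.88, 0.90, 0.92],
    "neural_network": [0.80, 0.91, 0.93],
    "svm":            [0.85, 0.89, 0.91],
}

ranked = sorted(accuracies,
                key=lambda m: (-mean(accuracies[m]), stdev(accuracies[m])))
print(ranked[0])  # most satisfactory model under this rule
```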


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 844
Author(s):  
Ting-Zhao Chen ◽  
Yan-Yan Chen ◽  
Jian-Hui Lai

As cities expand, issues with public transport systems become prominent. For single-swipe buses, the traditional method of obtaining section passenger flow relies on surveillance-video identification or manual investigation. This paper adopts a new method: collecting wireless signals from mobile terminals inside and outside the bus by installing six Wi-Fi probes in the bus, and using machine learning algorithms to estimate the bus's passenger flow. Five signal features were selected, and three machine learning algorithms (Random Forest, K-Nearest Neighbor, and Support Vector Machine) were used to learn the patterns in those features. Because signal strength is affected by the complexity of the environment, a strain function was proposed that varies with the degree of congestion in the bus. Finally, the error between the average estimation result and the manual survey was 0.1338. The proposed method is therefore suitable for passenger-flow identification on single-swipe buses in small and medium-sized cities, which can improve the operational efficiency of buses and reduce passengers' waiting pressure during the morning and evening rush hours.
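The estimation idea can be sketched as follows: count distinct devices seen by the on-board probes, filter out off-bus devices by signal strength, and apply a congestion-dependent correction in the spirit of the strain function. The threshold and correction values here are hypothetical assumptions, not the paper's fitted function.

```python
# Hypothetical passenger-count sketch from Wi-Fi probe scans.
# The RSSI threshold and strain factor are illustrative assumptions.

def estimate_passengers(detections, rssi_threshold=-70):
    """detections: list of (mac_address, rssi_dbm) tuples from the probes."""
    onboard = {mac for mac, rssi in detections if rssi >= rssi_threshold}
    raw = len(onboard)
    # Strain idea: in a crowded bus, bodies attenuate signals, so more
    # devices fall below the threshold and the raw count under-estimates.
    strain = 1.0 + 0.05 * (raw // 10)   # grows with degree of congestion
    return round(raw * strain)

scans = [("aa:01", -50), ("aa:02", -65), ("aa:02", -60),  # duplicate device
         ("bb:03", -85),                                   # outside the bus
         ("cc:04", -68)]
print(estimate_passengers(scans))
```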


Author(s):  
Ravita Chahar ◽  
Deepinder Kaur

In this paper, machine learning algorithms are discussed and analyzed from a computational perspective across different domains. These algorithms can build mathematical and analytical models that are helpful in decision-making processes. The paper elaborates the computational analysis in three phases. In the first phase, the background and analytical aspects are presented along with learning applications. In the second phase, the literature is explored in detail, along with the pros and cons of the applied techniques in different domains. In the third phase, gaps and limitations identified from the literature are discussed and highlighted. Finally, a computational analysis is presented along with machine learning results in terms of accuracy. The results focus mainly on exploratory data analysis, domain applicability, and predictive problems. Our systematic analysis shows that machine learning is widely applicable and that results may be improved with these algorithms; the literature analysis likewise suggests that applying machine learning algorithms can improve performance. The main methods discussed are classification and regression trees (CART), logistic regression, naïve Bayes (NB), k-nearest neighbors (KNN), support vector machine (SVM), and decision tree (DT). The domains covered are mainly disease detection, business intelligence, industry automation, and sentiment analysis.


2021 ◽  
Author(s):  
Aayushi Rathore ◽  
Anu Saini ◽  
Navjot Kaur ◽  
Aparna Singh ◽  
Ojasvi Dutta ◽  
...  

ABSTRACT
Sepsis is a severe infectious disease with high mortality. It occurs when chemicals released into the bloodstream to fight an infection trigger inflammation throughout the body, which can cause a cascade of changes that damage multiple organ systems, leading them to fail and even resulting in death. To reduce the possibility of sepsis or infection, antiseptics are used; this process is known as antisepsis. Antiseptic peptides (ASPs) show properties similar to anti-gram-negative peptides, anti-gram-positive peptides, and many more. Machine learning algorithms are useful for screening and identifying therapeutic peptides, providing initial filters and building confidence before time-consuming and laborious experimental approaches are applied. In this study, various machine learning algorithms, including Support Vector Machine (SVM), Random Forest (RF), K-Nearest Neighbour (KNN), and Logistic Regression (LR), were evaluated for prediction of ASPs. Moreover, the characteristic physicochemical features of ASPs were explored for use in machine learning. Both manual and automatic feature selection methodologies were employed to achieve the best performance from the machine learning algorithms. Five-fold cross-validation and independent dataset validation identified RF as the best model for prediction of ASPs. Our RF model showed an accuracy of 97% and a Matthew's Correlation Coefficient (MCC) of 0.93, indicating a robust and good model. To our knowledge, this is the first attempt to build a machine learning classifier for prediction of ASPs.
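The reported Matthew's Correlation Coefficient is computed from the binary confusion matrix. The sketch below shows the standard formula with illustrative counts, not the study's actual confusion matrix.

```python
# MCC from a binary confusion matrix; example counts are illustrative.
from math import sqrt

def mcc(tp, tn, fp, fn):
    """Matthew's Correlation Coefficient; 0.0 when the denominator vanishes."""
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Example: 97 correct out of 100 test peptides (accuracy 97%).
score = mcc(tp=48, tn=49, fp=1, fn=2)
print(round(score, 3))
```

Note that a high accuracy does not pin down a single MCC value; the balance of false positives and negatives matters too, which is why both metrics are reported.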


Author(s):  
Stuti Pandey ◽  
Abhay Kumar Agarwal

Cardiovascular disease prediction is a research field of healthcare that depends on a large volume of data for making effective and accurate predictions. These predictions can be more effective and accurate when made with machine learning algorithms, because such algorithms can uncover hidden patterns that are helpful in making decisions. Machine learning algorithms also process data at speeds that are almost infeasible for human beings. Therefore, the work presented in this research focuses on identifying the best machine learning algorithm by comparing their performance in predicting cardiovascular diseases in a reasonable time. The machine learning algorithms used in the presented work are naïve Bayes, support vector machine, k-nearest neighbors, and random forest. The dataset used for this comparison is the "Heart Disease Data Set" from the University of California, Irvine (UCI) machine learning repository.
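One of the compared algorithms, k-nearest neighbors, is simple enough to sketch from scratch. The two toy features (age, cholesterol) and their values below are illustrative, not drawn from the UCI Heart Disease Data Set.

```python
# Minimal k-nearest-neighbors classifier; toy features are illustrative.
from collections import Counter
from math import dist

def knn_predict(train, query, k=3):
    """train: list of (features, label); returns majority label of k nearest."""
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# (age, cholesterol) -> 0 = no disease, 1 = disease (toy labels)
train = [((45, 180), 0), ((50, 190), 0), ((52, 200), 0),
         ((60, 260), 1), ((65, 280), 1), ((62, 270), 1)]
print(knn_predict(train, query=(63, 275)))  # neighbors are all class 1
```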


2013 ◽  
Vol 10 (2) ◽  
pp. 1376-1383
Author(s):  
Dr. Vijay Pal Dhaka ◽  
Swati Agrawal

Maintainability is an important quality attribute and a difficult concept, as it involves a number of measurements. Quality estimation here means estimating the maintainability of software, where maintainability is a set of attributes that bear on the effort needed to make specified modifications. The main goal of this paper is to propose and evaluate a few machine learning algorithms for predicting software maintainability. The proposed models are Gaussian process regression networks (GPRN), probably approximately correct (PAC) learning, and the genetic algorithm (GA). This paper predicts maintenance effort. The QUES (Quality Evaluation System) dataset, which contains 71 classes, is used in this study. To measure maintainability, the number of "CHANGE" operations is observed over a period of a few years; CHANGE is defined as the number of lines of code added, deleted, or modified during the maintenance period. These machine learning algorithms were then compared with several existing models: general regression neural network (GRNN), regression tree (RT), multiple adaptive regression splines (MARS), support vector machine (SVM), and multiple linear regression (MLR). Based on the experiments, it was found that GPRN predicts maintainability more accurately and precisely than the prevailing models. We also include object-oriented software metrics to measure software maintainability. Using machine learning algorithms to establish the relationship between metrics and maintainability is a much better approach, as these are based on quantity as well as quality.
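The CHANGE metric defined above can be sketched as a count over a unified-diff text: every added or removed code line contributes to CHANGE (a modified line shows up as one removal plus one addition). The diff content below is illustrative.

```python
# Count the CHANGE metric (lines added/deleted/modified) from a unified diff.
# A modified line appears as a "-" plus a "+" line, so it contributes two.

def count_change(diff_text):
    change = 0
    for line in diff_text.splitlines():
        # Skip file headers ("+++", "---"); count added/removed code lines.
        if line.startswith(("+++", "---")):
            continue
        if line.startswith(("+", "-")):
            change += 1
    return change

diff = """\
--- a/Account.java
+++ b/Account.java
-    int balance;
+    long balance;
+    long overdraftLimit;
"""
print(count_change(diff))
```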


Author(s):  
Shahadat Uddin ◽  
Arif Khan ◽  
Md Ekramul Hossain ◽  
Mohammad Ali Moni

Abstract Background: Supervised machine learning algorithms have been a dominant method in the data mining field. Disease prediction using health data has recently emerged as a potential application area for these methods. This study aims to identify the key trends among different types of supervised machine learning algorithms, and their performance and usage, for disease risk prediction. Methods: In this study, extensive research efforts were made to identify studies that applied more than one supervised machine learning algorithm to a single disease prediction task. Two databases (Scopus and PubMed) were searched with different types of search items. In total, 48 articles were selected for the comparison among variants of supervised machine learning algorithms for disease prediction. Results: We found that the Support Vector Machine (SVM) algorithm was applied most frequently (in 29 studies), followed by the Naïve Bayes algorithm (in 23 studies). However, the Random Forest (RF) algorithm showed comparatively superior accuracy: of the 17 studies where it was applied, RF had the highest accuracy in 9 (53%). It was followed by SVM, which topped 41% of the studies in which it was considered. Conclusion: This study provides a wide overview of the relative performance of different variants of supervised machine learning algorithms for disease prediction. This information on relative performance can help researchers select an appropriate supervised machine learning algorithm for their studies.
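The "topped X% of studies" figures reported above come down to a simple tally: for each study, find the algorithm with the highest reported accuracy, then compute each algorithm's share of wins among the studies that used it. The per-study accuracies below are illustrative, not the review's data.

```python
# Tally each algorithm's share of "top accuracy" wins among the studies
# that applied it. Per-study accuracy values are illustrative.
from collections import Counter

studies = [  # each dict: algorithm -> accuracy reported in one study
    {"RF": 0.92, "SVM": 0.90},
    {"RF": 0.88, "SVM": 0.91},
    {"RF": 0.95, "NB": 0.85},
    {"SVM": 0.89, "NB": 0.87},
]

wins = Counter(max(s, key=s.get) for s in studies)       # per-study winner
used = Counter(alg for s in studies for alg in s)         # usage counts
share = {alg: wins[alg] / used[alg] for alg in used}
print(share["RF"])  # fraction of its studies where RF had the top accuracy
```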


2017 ◽  
Author(s):  
Woo-Young Ahn ◽  
Paul Hendricks ◽  
Nathaniel Haines

Abstract
The easyml (easy machine learning) package lowers the barrier to entry to machine learning and is ideal for undergraduate and graduate students, as well as practitioners who want to quickly apply machine learning algorithms to their research without having to worry about the best practices for implementing each algorithm. The package provides standardized recipes for regression and classification algorithms in R and Python and implements them in a functional, modular, and extensible framework. It currently implements recipes for several common machine learning algorithms (e.g., penalized linear models, random forests, and support vector machines) and provides a unified interface to each one. Importantly, users can run and evaluate each machine learning algorithm with a single line of code. Each recipe is robust, implements best practices specific to its algorithm, and generates a report with details about the model and its performance, as well as journal-quality visualizations. The package's functional, modular, and extensible framework also allows researchers and more advanced users to easily implement new recipes for other algorithms.
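The "one line per algorithm" design can be illustrated with a unified wrapper in plain Python. This is a hypothetical sketch of the idea, not easyml's actual API: each "recipe" bundles an algorithm-specific fit function behind a single uniform call that trains and evaluates.

```python
# Hypothetical sketch of a unified "recipe" interface (NOT easyml's API).

def make_recipe(fit):
    """Wrap an algorithm-specific fit function behind a uniform interface."""
    def run(X, y):
        model = fit(X, y)
        predictions = [model(x) for x in X]
        accuracy = sum(p == t for p, t in zip(predictions, y)) / len(y)
        return {"model": model, "train_accuracy": accuracy}
    return run

def fit_majority(X, y):
    """Trivial 'algorithm': always predict the most common training label."""
    majority = max(set(y), key=y.count)
    return lambda x: majority

easy_majority = make_recipe(fit_majority)
report = easy_majority([[0], [1], [2]], [1, 1, 0])  # one line to run + evaluate
print(report["train_accuracy"])
```

Swapping in a different `fit` function yields a new recipe with the same call shape, which is the extensibility point the abstract describes.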


Entropy ◽  
2020 ◽  
Vol 22 (9) ◽  
pp. 1057 ◽  
Author(s):  
Edna Dias Canedo ◽  
Bruno Cordeiro Mendes

The correct classification of requirements has become an essential task within software engineering. This study compares text feature extraction techniques and machine learning algorithms on the problem of software requirements classification, answering two major questions: "Which works best (Bag of Words (BoW) vs. Term Frequency-Inverse Document Frequency (TF-IDF) vs. Chi-Squared (CHI2)) for classifying software requirements into Functional Requirements (FR) and Non-Functional Requirements (NF), and the sub-classes of Non-Functional Requirements?" and "Which machine learning algorithm provides the best performance for the requirements classification task?". The data used in this research was PROMISE_exp, a recently created dataset that expands the well-known PROMISE repository of labeled software requirements. All documents in the database were cleaned with a set of normalization steps; BoW and TF-IDF were used for feature extraction, and CHI2 for feature selection. The algorithms used for classification were Logistic Regression (LR), Support Vector Machine (SVM), Multinomial Naive Bayes (MNB), and k-Nearest Neighbors (kNN). The novelty of our work lies in the data used for the experiment, the detailed steps provided to reproduce the classification, and the comparison between BoW, TF-IDF, and CHI2 on this repository, which has not been covered by other studies. This work will serve as a reference for the software engineering community and will help other researchers understand the requirements classification process. We observed that TF-IDF followed by LR gave the best classification results for differentiating requirements, with an F-measure of 0.91 in binary classification (tying with SVM in that case), 0.74 in NF classification, and 0.78 in general classification.
As future work, we intend to compare more algorithms and explore new ways to improve the precision of our models.
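The TF-IDF weighting that performed best here can be sketched from scratch. The requirement sentences below are illustrative, and this uses the plain log-IDF form rather than scikit-learn's smoothed variant.

```python
# From-scratch TF-IDF over a toy requirements corpus (illustrative text).
from math import log

docs = [
    "the system shall encrypt stored passwords",    # non-functional (security)
    "the user shall export reports as pdf",         # functional
    "the system shall respond within two seconds",  # non-functional (perf.)
]
tokenized = [d.split() for d in docs]

def tfidf(doc):
    """Term frequency times inverse document frequency, plain log form."""
    n = len(tokenized)
    weights = {}
    for w in set(doc):
        tf = doc.count(w) / len(doc)
        df = sum(w in d for d in tokenized)
        weights[w] = tf * log(n / df)
    return weights

w = tfidf(tokenized[0])
print(w["encrypt"] > w["the"])  # rare terms outweigh ubiquitous ones
```

Words that appear in every requirement (like "the" and "shall") get zero weight, which is exactly why TF-IDF separates requirement classes better than raw counts.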


2018 ◽  
Vol 7 (4.15) ◽  
pp. 400 ◽  
Author(s):  
Thuy Nguyen Thi Thu ◽  
Vuong Dang Xuan

The exchange rate of each currency pair can be predicted using machine learning classification. With the help of a supervised machine learning model, the predicted uptrend or downtrend of the FoRex rate can help traders make the right decisions on FoRex transactions. Machine learning algorithms installed in the online FoRex trading market can execute buy/sell transactions automatically. All transactions in the experiment are performed by scripts added on to the trading application. The capital and profit results when using support vector machine (SVM) models are higher than in the normal case (without SVM).
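The uptrend/downtrend classification target behind such a model can be sketched as a labeling step: each bar of a currency-pair series is labeled by the sign of the next close-to-close move. The rates below are illustrative; the paper's actual features and SVM setup are not reproduced here.

```python
# Derive uptrend/downtrend labels from a close-price series (illustrative).

def trend_labels(closes):
    """1 = uptrend (next close higher), 0 = downtrend or flat."""
    return [1 if nxt > cur else 0 for cur, nxt in zip(closes, closes[1:])]

eurusd = [1.1012, 1.1025, 1.1018, 1.1030, 1.1044, 1.1039]
labels = trend_labels(eurusd)
print(labels)  # one label per bar except the last
```

A classifier such as an SVM would then be trained on per-bar features against these labels, and its predictions would drive the buy/sell scripts.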

