Recommendation of sustainable economic learning course based on text vector model and support vector machine

2020 ◽  
pp. 1-11
Author(s):  
Xiangfei Ma

Sustainable economic learning course recommendation can quickly locate the knowledge a user really needs within a massive information space and deliver personalized recommendations. However, trust attacks seriously disrupt the normal function of a recommendation system, preventing it from providing users with reliable recommendation results. To address the system's vulnerability to such attacks, this paper comprehensively analyzes the current state of robust recommendation technology based on a text vector model and a support vector machine. Moreover, building on the idea of suspicious-user metrics, it conducts in-depth research on how to design highly robust recommendation algorithms and constructs a highly reliable sustainable economic learning course recommendation model. In addition, this research tests the system's performance from two perspectives: course recommendation satisfaction and retrieval accuracy. The experiments show that the proposed model performs well in recommending sustainable economic learning courses.
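The suspicious-user idea described above can be sketched as a small binary classifier over per-user profile statistics. The two features, the cluster locations, and the data below are invented for illustration only and are not from the paper:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical per-user features: [mean rating deviation, fraction of items
# rated at an extreme value]. Genuine users cluster near low values; injected
# attack profiles rate many items at the extremes to push a target item.
genuine = rng.normal(loc=[0.3, 0.1], scale=0.08, size=(40, 2))
attack = rng.normal(loc=[0.9, 0.8], scale=0.08, size=(40, 2))

X = np.vstack([genuine, attack])
y = np.array([0] * 40 + [1] * 40)  # 1 = suspicious profile

# An RBF-kernel SVM separates the two behaviour patterns.
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)

# A new profile with extreme-rating behaviour should be flagged.
print(clf.predict([[0.85, 0.75]]))
```

Flagged users could then be down-weighted or excluded before computing recommendations, which is the robustness mechanism the abstract alludes to.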

Author(s):  
Ammar Alnahhas ◽  
Bassel Alkhatib

As the data on online social networks grows, it is important to build personalized recommendation systems that suggest suitable content to users. Much research in this field uses conceptual representations of text to match user models with the best content. This article presents a novel method for building a user model based on a conceptual representation of text, using ConceptNet concepts that go beyond named entities to capture the common-sense meaning of words and phrases. The model also includes the contextual information of concepts. The authors further present a novel method that exploits the semantic relations of the knowledge base to extend user models. The experiments show that the proposed model and associated recommendation algorithms outperform all previous methods, as a detailed comparison in this article demonstrates.
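The model-extension step can be illustrated with a toy graph. This is not the ConceptNet API: the concept names, relation table, and decay factor below are hypothetical stand-ins for the knowledge-base edges the paper uses:

```python
from collections import defaultdict

# A user model maps concepts to interest weights.
user_model = {"machine_learning": 1.0, "python": 0.8}

# Hand-made "RelatedTo"-style edges with relation weights, standing in
# for the knowledge base's semantic relations.
related = {
    "machine_learning": [("statistics", 0.6), ("neural_network", 0.9)],
    "python": [("programming", 0.7)],
}

def extend_model(model, relations, decay=0.5):
    """Spread each concept's weight to its neighbours, scaled by decay."""
    extended = defaultdict(float, model)
    for concept, weight in model.items():
        for neighbour, rel_weight in relations.get(concept, []):
            extended[neighbour] += weight * rel_weight * decay
    return dict(extended)

print(extend_model(user_model, related))
```

The extended model now covers concepts the user never mentioned explicitly (e.g. `neural_network`), which is what lets the recommender match content beyond exact named entities.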


2020 ◽  
pp. 1-11
Author(s):  
Liu Lin

The difficulty of knowledge point recommendation based on a learning diagnosis model lies in how to perform feature recognition and selection of the recommended knowledge points. Current recommendation systems have accuracy problems when recommending knowledge points. On this basis, this study focuses on personalized exercise recommendation for middle school students. Taking students' exercise answer records as data and combining the characteristics of the education domain, it proposes an exercise recommendation algorithm based on hidden knowledge points and an exercise recommendation method based on decomposing the student-exercise weight matrix. To verify the algorithm's effectiveness, this paper selects accuracy and recall as evaluation indicators, compares the recommendation results of this algorithm with those of a current, more advanced collaborative filtering (CF) algorithm, and presents the statistical results in charts. The results show that the proposed method has clear advantages and can serve as one of the subsystems of a learning system.
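The weight-matrix decomposition idea can be sketched with non-negative matrix factorization, where the low rank plays the role of hidden knowledge points. The matrix, rank, and recommendation rule below are a minimal illustration, not the paper's exact method:

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy student x exercise matrix (1 = solved correctly, 0 = not attempted).
R = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 1],
    [1, 1, 1, 0, 1],
], dtype=float)

# Factorise into student-skill and skill-exercise matrices; the rank-2
# latent space stands in for hidden knowledge points.
model = NMF(n_components=2, init="random", random_state=0, max_iter=500)
W = model.fit_transform(R)   # students x hidden knowledge points
H = model.components_        # hidden knowledge points x exercises

scores = W @ H               # predicted affinity for every exercise

# Recommend, for one student, the unattempted exercise with the top score.
student = 1
candidates = np.where(R[student] == 0)[0]
print(candidates[int(np.argmax(scores[student, candidates]))])
```

The reconstructed scores fill in the zeros of `R`, so exercises aligned with a student's inferred knowledge points rank highest among the unattempted ones.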


2017 ◽  
Vol 34 (8) ◽  
pp. 1749-1761 ◽  
Author(s):  
Nan Li ◽  
Ming Wei ◽  
Yongjiang Yu ◽  
Wengang Zhang

Wind retrieval algorithms are required for Doppler weather radars. In this article, a new single-Doppler radar wind retrieval algorithm using a support vector machine (SVM) is analyzed and compared with the original algorithm based on the least squares technique. An analysis of the coefficient matrices of the equations corresponding to the two algorithms' optimization problems shows that the new algorithm, given a proper penalization parameter, effectively reduces the condition numbers of the matrices and can therefore produce accurate results; moreover, the smaller the analysis volume, the smaller the condition number of the matrix. This characteristic makes the new algorithm suitable for retrieving mesoscale, small-scale, and high-resolution wind fields. The two algorithms are then applied in retrieval experiments for comparison and discussion. The results show that the penalization parameter cannot be too small, or the condition number becomes large; nor can it be too large, or it changes the properties of the equations and forces the retrieved wind direction along the radial direction. With appropriate penalization parameters, the new algorithm is clearly superior to the original for small analysis volumes. When the suggested small analysis volume dimensions and penalization parameter values are adopted, the retrieval accuracy can be improved by a factor of ten relative to the traditional method. As a result, the new algorithm has the capability to analyze the dynamical structures of severe weather, which require high-resolution retrieval, and the potential for quantitative applications such as assimilation in numerical models, although the retrieval accuracy still needs further improvement.
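The effect of the penalization parameter on conditioning can be seen in a two-line numerical experiment. The matrix below is an arbitrary ill-conditioned stand-in for the normal-equation matrix, not radar data:

```python
import numpy as np

# A nearly singular coefficient matrix, as arises when the radial
# directions inside an analysis volume are almost collinear.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
M = A.T @ A

# Adding a penalization term lam * I shrinks the condition number.
for lam in (0.0, 1e-4, 1e-1):
    cond = np.linalg.cond(M + lam * np.eye(2))
    print(f"penalty {lam:g}: condition number {cond:.3g}")
```

A tiny penalty leaves the system nearly singular (huge condition number, unstable retrieval), while an overly large penalty dominates `M` and distorts the solution, which mirrors the trade-off the abstract reports.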


2012 ◽  
Vol 268-270 ◽  
pp. 1844-1848
Author(s):  
Mu Hee Song

Due to the spread of personal computers and the internet, e-mail has become one of the most widely used means of communication. However, a massive amount of spam pollutes mailboxes every day, exploiting the ability to send mail to any number of people through the internet. This paper introduces an efficient method of classifying e-mails using the support vector machine (SVM) learning algorithm, which has recently shown high performance in document classification. Word features are extracted from the e-mail documents, and classification performance is compared and examined as the document frequency (DF) threshold used to reduce the feature space is varied during learning. To assess its performance, the SVM is compared to a Naïve Bayes classifier (which uses probabilistic methods) and a vector model classifier, verifying that the SVM learning algorithm performs better.
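The DF-threshold idea maps directly onto scikit-learn's `min_df` parameter. The six toy messages below are invented stand-ins for the paper's e-mail corpus:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

# Toy corpus standing in for the e-mail collection.
mails = [
    "win free money now", "free prize claim now", "cheap meds win cash",
    "meeting agenda attached", "project status report", "lunch at noon today",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = spam, 0 = legitimate

# min_df is the DF threshold: raising it drops terms that appear in few
# documents, shrinking the feature space as in the paper's reduction step.
for df in (1, 2):
    X = CountVectorizer(min_df=df).fit_transform(mails)
    print(f"min_df={df}: {X.shape[1]} features")

# Compare the SVM against Naive Bayes on the full feature set.
X = CountVectorizer(min_df=1).fit_transform(mails)
svm = LinearSVC().fit(X, labels)
nb = MultinomialNB().fit(X, labels)
print("SVM:", svm.score(X, labels), "NB:", nb.score(X, labels))
```

On a corpus this small and cleanly separated both models fit perfectly; the DF threshold only starts to matter on realistic corpora, where it trades rare discriminative terms for a smaller, less noisy feature space.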


2020 ◽  
Vol 4 (1) ◽  
pp. 18
Author(s):  
Sozan Abdulla Mahmood ◽  
Qani Qabil Qasim

With the rapid evolution of the internet, social media networks such as Twitter, Facebook, and Tumblr have become so common that they affect every aspect of human life. Twitter is one of the most popular micro-blogging platforms, allowing people to share their feelings in short texts about a variety of topics such as companies' products, people, politics, and services. Because emotions and reviews on different topics are shared every second, sentiment analysis is possible, and social media has become a useful source of information in fields such as business, politics, applications, and services. The Twitter Application Programming Interface (Twitter API), an interface between developers and Twitter, lets developers search for tweets matching a desired keyword using secret keys and tokens. In this work, the Twitter API was used to download the most recent tweets about four keywords (Trump, Bitcoin, IoT, and Toyota), with a different number of tweets for each. VADER, a lexicon- and rule-based method, was used to categorize the downloaded tweets as positive or negative based on their polarity, and the tweets were then stored in a MongoDB database for subsequent processing. After pre-processing, the hold-out technique was used to split each dataset into 80% for training and the remaining 20% for testing. A deep-learning-based document-to-vector (Doc2Vec) model was then used for feature extraction, and a radial basis function (RBF) kernel support vector machine (SVM) performed the classification. The accuracy of the RBF-SVM depends mainly on the values of the soft-margin penalty C and the kernel parameter γ (gamma). The main goal of this work is to select the best values for those parameters in order to improve the accuracy of the RBF-SVM classifier.
The objective of this study is to show the impact of four meta-heuristic optimization algorithms, namely particle swarm optimization (PSO), modified PSO (MPSO), the grey wolf optimizer (GWO), and a hybrid PSO-GWO, on improving SVM classification accuracy by selecting the best values for those parameters. To the best of our knowledge, hybrid PSO-GWO has never been used for SVM optimization. The results show that these optimizers have a significant impact on SVM accuracy. The best accuracy with a traditional SVM was 87.885%. After optimization, the highest accuracy, 91.053%, was obtained with GWO, while the best accuracies of PSO, hybrid PSO-GWO, and MPSO were 90.736%, 90.657%, and 90.557%, respectively.
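The paper tunes C and γ with PSO/GWO, which have no standard scikit-learn implementation; a plain grid search over the same two hyper-parameters illustrates why the choice matters. The synthetic features below stand in for the Doc2Vec tweet vectors:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Synthetic vectors standing in for Doc2Vec features of labelled tweets.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=0)  # the paper's 80/20 hold-out

# Exhaustive search over C (soft-margin penalty) and gamma (kernel width),
# the same two parameters the meta-heuristics optimize.
grid = GridSearchCV(
    SVC(kernel="rbf"),
    {"C": [0.1, 1, 10, 100], "gamma": [0.001, 0.01, 0.1, 1]},
    cv=3,
)
grid.fit(X_tr, y_tr)
print(grid.best_params_, grid.score(X_te, y_te))
```

A meta-heuristic such as PSO or GWO searches the same (C, γ) space continuously instead of on a fixed grid, which is how the paper squeezes out the extra few points of accuracy.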


Recommendation systems are a subdivision of information filtering that seeks to predict the rating or preference a user would give to an item. They produce user-customized suggestions for products or services and are used in services such as the Google search engine, YouTube, Gmail, and product recommendation on e-commerce websites. These systems usually depend on a content-based approach. In this paper, we develop such recommendation systems using several algorithms: k-nearest neighbors (KNN), support vector machine (SVM), logistic regression (LR), multinomial Naïve Bayes (MNB), and multi-layer perceptron (MLP). These predict the nearest categories from the News Category dataset; among these categories we recommend the most common sentence to a user and analyze the performance metrics. The approach is tested on the News Category dataset, which contains roughly 200k news headlines in 41 classes, collected from HuffPost over the years 2012-2018.
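The classifier comparison can be sketched end to end with scikit-learn. The six headlines and three categories below are invented miniatures of the 200k-headline, 41-class dataset (MLP is omitted here to keep the toy example fast):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC

# Toy headlines standing in for the News Category dataset.
headlines = [
    "stock markets rally on earnings", "central bank raises interest rates",
    "team wins championship final", "star player signs record contract",
    "new phone model released today", "chip maker unveils faster processor",
]
labels = ["business", "business", "sports", "sports", "tech", "tech"]

# TF-IDF features, then the same model line-up as the paper (minus MLP).
X = TfidfVectorizer().fit_transform(headlines)
models = {
    "KNN": KNeighborsClassifier(n_neighbors=3),
    "SVM": LinearSVC(),
    "LR": LogisticRegression(max_iter=1000),
    "MNB": MultinomialNB(),
}
for name, model in models.items():
    model.fit(X, labels)
    print(name, model.score(X, labels))  # training accuracy on the toy set
```

On the real dataset the models would be scored on a held-out split with precision/recall per class rather than on training accuracy, but the fit/predict pipeline is the same.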


With the explosion of internet information, people feel helpless and find it difficult to choose in the face of massive information. The traditional way of organizing a huge set of documents is not only time-consuming and laborious but also far from ideal. Automatic text classification can free users from tedious document processing, recognize and distinguish different document contents more conveniently, organize and systematize large numbers of complicated documents, and greatly improve the utilization rate of information. This paper adopts a term-based model to extract web semantic features to represent documents. The extracted web semantic features are used to train a reduced support vector machine. The experimental results show that the proposed method can correctly identify most writing styles.
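A reduced SVM restricts the kernel expansion to a small subset of the training points rather than all of them. Scikit-learn has no class of that exact name, but a Nystroem kernel approximation (a random subset as kernel basis) followed by a linear SVM is a close analogue; the synthetic vectors below stand in for term-based document features:

```python
from sklearn.datasets import make_classification
from sklearn.kernel_approximation import Nystroem
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Synthetic vectors standing in for term-based web-semantic features.
X, y = make_classification(n_samples=400, n_features=50, random_state=0)

# Nystroem builds the RBF kernel expansion from only 30 sampled training
# points, mirroring the "reduced" idea; a linear SVM then separates the
# classes in that reduced kernel space.
clf = make_pipeline(Nystroem(n_components=30, random_state=0), LinearSVC())
clf.fit(X, y)
print(clf.score(X, y))
```

The payoff is the same as in the paper's setting: near-kernel-SVM accuracy at a fraction of the training and prediction cost, since the kernel matrix is 400x30 instead of 400x400.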

