Predicting Sentiment Polarity of Microblogs using an LSTM-CNN Deep Learning Model

In this paper we propose a novel supervised machine learning model to predict the polarity of sentiment expressed in microblogs. The proposed model has a stacked neural network structure consisting of Long Short-Term Memory (LSTM) and Convolutional Neural Network (CNN) layers. To capture the long-term dependencies of sentiment across the word order of a microblog, the proposed model employs an LSTM layer. The encodings produced by the LSTM layer are then fed to a CNN layer, which extracts localized patterns from them; together, these layers capture both local and global long-term dependencies in the text of the microblogs. The proposed model was observed to perform better and give improved prediction accuracy compared with semantic, machine learning, and deep neural network approaches such as SVM, CNN, LSTM, and CNN-LSTM. This paper uses the benchmark Stanford Large Movie Review dataset to demonstrate the significance of the new approach, on which its prediction accuracy is comparable to other state-of-the-art approaches.
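A minimal Keras sketch of the stacked LSTM-to-CNN idea described above; the vocabulary size, sequence length, and layer widths are illustrative assumptions, not the authors' reported configuration.

```python
# Stacked LSTM -> CNN sketch; all sizes below are assumptions.
from tensorflow.keras import layers, models

VOCAB_SIZE = 20000   # assumed vocabulary size
MAX_LEN = 200        # assumed maximum microblog length in tokens

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB_SIZE, 128),
    # The LSTM returns the full sequence of hidden states so the CNN
    # can extract localized patterns from the long-term encodings.
    layers.LSTM(128, return_sequences=True),
    layers.Conv1D(filters=64, kernel_size=5, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # binary polarity
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```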

Data & Policy ◽  
2021 ◽  
Vol 3 ◽  
Author(s):  
Munisamy Gopinath ◽  
Feras A. Batarseh ◽  
Jayson Beckman ◽  
Ajay Kulkarni ◽  
Sei Jeong

Abstract Focusing on seven major agricultural commodities with a long history of trade, this study employs data-driven analytics, namely supervised machine learning (ML) and neural networks, to decipher patterns of trade. The supervised ML and neural network techniques are trained on data up to 2010 and 2014, respectively. Results show the high relevance of ML models for forecasting trade patterns in the near and long term relative to traditional approaches, which are often subjective assessments or time-series projections. While the supervised ML techniques quantified key economic factors underlying agricultural trade flows, the neural network approaches provided better fits over the long term.
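A hedged sketch of the temporal evaluation setup the abstract describes, with a model fit only on observations up to a cutoff year and scored on later years; the file name, column names, and gradient-boosting learner are hypothetical placeholders.

```python
# Temporal split mirroring the paper's setup: fit on data up to the
# cutoff year, evaluate on later years. File and column names are
# hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("trade_flows.csv")  # assumed long-format commodity panel
features = ["exporter_gdp", "importer_gdp", "distance", "tariff"]

train = df[df["year"] <= 2010]       # the paper's supervised-ML cutoff
test = df[df["year"] > 2010]         # out-of-sample forecast years

model = GradientBoostingRegressor(random_state=0)
model.fit(train[features], train["trade_value"])
preds = model.predict(test[features])
print("out-of-sample MAE:", mean_absolute_error(test["trade_value"], preds))
```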


2019 ◽  
Vol 8 (3) ◽  
pp. 7809-7817

Creating a fast, domain-independent ontology through knowledge acquisition is a key problem in knowledge engineering. Updating and validating an ontology are impossible without the intervention of domain experts, which is an expensive and tedious process; an automatic system to model the ontology has therefore become essential. This manuscript presents a machine learning model based on heterogeneous data from multiple domains, including agriculture, health care, food, and banking. The proposed model creates a complete domain-independent process that helps populate the ontology automatically by extracting text from multiple sources and applying natural language processing and various data extraction techniques. The ontology instances are classified based on the domain. A Jaccard relationship extraction process and the Neural Network Approval for Automated Theory are used for data retrieval, automated indexing, mapping, knowledge discovery, and rule generation. The results show that the proposed model can automatically and efficiently construct the ontology.
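The relationship extraction step above rests on Jaccard similarity; a minimal sketch of using set overlap to decide whether two extracted text fragments should be linked, with the threshold and whitespace tokenization as illustrative assumptions.

```python
# Jaccard similarity between token sets: two extracted fragments are
# linked in the ontology when their overlap crosses a threshold. The
# threshold and tokenization are illustrative assumptions.
def jaccard(a: set, b: set) -> float:
    """|A & B| / |A | B|, defined as 0.0 when both sets are empty."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

doc1 = set("crop yield depends on soil moisture and rainfall".split())
doc2 = set("rainfall and soil moisture drive crop yield".split())

if jaccard(doc1, doc2) >= 0.5:  # assumed relatedness threshold
    print("instances linked in the ontology")
```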


Author(s):  
Dhilsath Fathima.M ◽  
S. Justin Samuel ◽  
R. Hari Haran

Aim: This work develops an improved and robust machine learning model for predicting Myocardial Infarction (MI), which could have substantial clinical impact. Objectives: This paper explains how to build a machine learning based computer-aided analysis system for early and accurate prediction of Myocardial Infarction (MI), using the Framingham heart study dataset for validation and evaluation. The proposed computer-aided analysis model will support medical professionals in predicting myocardial infarction proficiently. Methods: The proposed model uses mean imputation to remove missing values from the dataset, then applies principal component analysis (PCA) to extract the optimal features and enhance the performance of the classifiers. After PCA, the reduced features are partitioned into a training set (70%), given as input to four well-known classifiers (support vector machine, k-nearest neighbor, logistic regression, and decision tree), and a test set (30%), used to evaluate the output of the machine learning model using performance metrics: confusion matrix, classifier accuracy, precision, sensitivity, F1-score, and the AUC-ROC curve. Results: The outputs of the classifiers were evaluated using these performance measures; logistic regression provides higher accuracy than the k-NN, SVM, and decision tree classifiers, and PCA performs well as a feature extraction method for enhancing the performance of the proposed model. From these analyses, we conclude that logistic regression has a good mean accuracy and standard deviation of accuracy compared with the other three algorithms. The AUC-ROC curves of the proposed classifiers (Figures 4 and 5) show that logistic regression exhibits a good AUC-ROC score, around 70%, compared to the k-NN and decision tree algorithms. Conclusion: From the result analysis, we infer that the proposed machine learning model can act as an optimal decision-making system to predict acute myocardial infarction at an earlier stage than existing machine learning based prediction models, and that it is capable of predicting the presence of an acute myocardial infarction from heart disease risk factors, in order to decide when to start lifestyle modification and medical treatment to prevent heart disease.
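A sketch of the pipeline the Methods section describes: mean imputation, PCA, a 70/30 split, and the four classifiers compared on held-out data. The file name, target column, and principal-component count are assumptions, not the paper's exact settings.

```python
# Mean imputation -> PCA -> 70/30 split -> four classifiers, as in the
# Methods. File name, target column, and component count are assumptions.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("framingham.csv")                  # assumed file layout
X, y = df.drop(columns="TenYearCHD"), df["TenYearCHD"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30,
                                          random_state=42, stratify=y)

classifiers = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "SVM": SVC(probability=True),
    "k-NN": KNeighborsClassifier(),
    "decision tree": DecisionTreeClassifier(random_state=42),
}
for name, clf in classifiers.items():
    pipe = make_pipeline(SimpleImputer(strategy="mean"),  # mean imputation
                         StandardScaler(),
                         PCA(n_components=10),            # assumed count
                         clf)
    pipe.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, pipe.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC-ROC = {auc:.3f}")
```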


10.2196/18142 ◽  
2020 ◽  
Vol 8 (9) ◽  
pp. e18142
Author(s):  
Ramin Mohammadi ◽  
Mursal Atif ◽  
Amanda Jayne Centi ◽  
Stephen Agboola ◽  
Kamal Jethwani ◽  
...  

Background It is well established that lack of physical activity is detrimental to the overall health of an individual. Modern-day activity trackers enable individuals to monitor their daily activities to meet and maintain targets. This is expected to promote activity-encouraging behavior, but the benefits of activity trackers attenuate over time owing to waning adherence. One of the key approaches to improving adherence to goals is to motivate individuals to improve on their historic performance metrics. Objective The aim of this work was to build a machine learning model to predict an achievable weekly activity target by considering (1) patterns in the user's activity tracker data in the previous week and (2) behavior and environment characteristics. By setting realistic goals, ones that are neither too easy nor too difficult to achieve, activity tracker users can be encouraged to continue to meet these goals and, at the same time, to find utility in their activity tracker. Methods We built a neural network model that prescribes a weekly activity target that an individual can realistically achieve. The inputs to the model were user-specific personal, social, and environmental factors, the daily step count from the previous 7 days, and an entropy measure that characterized the pattern of daily step counts. Data for training and evaluating the machine learning model were collected over a duration of 9 weeks. Results Of the 30 individuals who were enrolled, data from 20 participants were used. The model predicted the target daily step count with a mean absolute error of 1545 steps (95% CI 1383-1706) over an 8-week period. Conclusions Artificial intelligence applied to physical activity data combined with behavioral data can be used to set personalized goals in accordance with the individual's level of activity, thereby improving adherence to a fitness tracker and increasing engagement with it. A follow-up prospective study is ongoing to determine the performance of the engagement algorithm.
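A minimal sketch of the entropy feature mentioned in the Methods: Shannon entropy of the previous week's step-count distribution, characterizing how evenly activity was spread across the seven days. Measuring in bits (log base 2) is an assumption.

```python
# Shannon entropy of one week's step counts: high when activity is
# spread evenly across days, low when concentrated in a few days.
import numpy as np

def step_entropy(daily_steps) -> float:
    """Entropy (in bits) of the 7-day step-count distribution."""
    counts = np.asarray(daily_steps, dtype=float)
    total = counts.sum()
    if total == 0:
        return 0.0            # a week with no recorded steps
    p = counts / total
    p = p[p > 0]              # skip zero-step days in the logarithm
    return float(-(p * np.log2(p)).sum())

steady = [8000, 8200, 7900, 8100, 8000, 8050, 7950]  # even pattern
bursty = [20000, 0, 0, 15000, 0, 0, 21000]           # concentrated pattern
print(step_entropy(steady))   # close to log2(7) ~ 2.81 bits
print(step_entropy(bursty))   # noticeably lower
```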


2020 ◽  
Author(s):  
Mingjian Wen ◽  
Samuel Blau ◽  
Evan Spotte-Smith ◽  
Shyam Dwaraknath ◽  
Kristin Persson

A broad collection of technologies, including drug metabolism, biofuel combustion, photochemical decontamination of water, and interfacial passivation in energy production/storage systems, relies on chemical processes that involve bond-breaking molecular reactions. In this context, a fundamental thermodynamic property of interest is the bond dissociation energy (BDE), which measures the strength of a chemical bond. Fast and accurate prediction of BDEs for arbitrary molecules would lay the groundwork for data-driven projections of complex reaction cascades and hence a deeper understanding of these critical chemical processes and, ultimately, how to reverse design them. In this paper, we propose a chemically inspired graph neural network machine learning model, BonDNet, for the rapid and accurate prediction of BDEs. BonDNet maps the difference between the molecular representations of the reactants and products to the reaction BDE. Because of the use of this difference representation and the introduction of global features, including molecular charge, it is the first machine learning model capable of predicting both homolytic and heterolytic BDEs for molecules of any charge. To test the model, we have constructed a dataset of both homolytic and heterolytic BDEs for neutral and charged (−1 and +1) molecules. BonDNet achieves a mean absolute error (MAE) of 0.022 eV for unseen test data, significantly below chemical accuracy (0.043 eV). Besides the ability to handle complex bond dissociation reactions that no previous model could consider, BonDNet distinguishes itself even when only predicting homolytic BDEs for neutral molecules; it achieves an MAE of 0.020 eV on the PubChem BDE dataset, a 20% improvement over the previous best-performing model. We gain additional insight into the model's predictions by analyzing patterns in the features representing the molecules and the bond dissociation reactions, which are qualitatively consistent with chemical rules and intuition. BonDNet is just one application of our general approach to representing and learning chemical reactivity, and it could easily be extended to the prediction of other reaction properties in the future.
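A conceptual sketch of the difference representation at the heart of BonDNet: the reaction is encoded as product features minus reactant features, concatenated with a global charge feature, and a small network regresses the BDE from that difference. The embedding dimension and readout network below are illustrative assumptions, not BonDNet's actual architecture.

```python
# Difference-representation readout: BDE regressed from (product
# embedding - reactant embedding) plus the molecular charge. The
# upstream GNN that produces the embeddings is out of scope here.
import torch
import torch.nn as nn

FEAT_DIM = 64  # assumed size of the learned molecular embedding

class DiffBDEHead(nn.Module):
    def __init__(self, feat_dim: int = FEAT_DIM):
        super().__init__()
        self.readout = nn.Sequential(
            nn.Linear(feat_dim + 1, 128),  # +1 for the global charge feature
            nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, reactant_emb, product_emb, charge):
        diff = product_emb - reactant_emb          # reaction as a difference
        x = torch.cat([diff, charge.unsqueeze(-1)], dim=-1)
        return self.readout(x).squeeze(-1)         # predicted BDE in eV

head = DiffBDEHead()
reactants = torch.randn(4, FEAT_DIM)   # stand-ins for GNN outputs
products = torch.randn(4, FEAT_DIM)
charges = torch.tensor([0.0, -1.0, 1.0, 0.0])
print(head(reactants, products, charges).shape)  # torch.Size([4])
```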


Author(s):  
Surenthiran Krishnan ◽  
Pritheega Magalingam ◽  
Roslina Ibrahim

This paper proposes a new hybrid deep learning model for heart disease prediction using a recurrent neural network (RNN) with a combination of multiple gated recurrent units (GRU), long short-term memory (LSTM), and the Adam optimizer. The proposed model achieves an outstanding accuracy of 98.6876%, the highest among existing RNN models. The model was developed in Python 3.7 by integrating an RNN with multiple GRUs operating on Keras and TensorFlow as the backend for the deep learning process, supported by various Python libraries. Recent existing models using an RNN have reached an accuracy of 98.23%, and a deep neural network (DNN) has reached 98.5%. The common drawbacks of the existing models are low accuracy due to the complex build-up of the neural network, a high number of neurons with redundancy in the neural network model, and the imbalanced Cleveland dataset. Experiments were conducted with various customized models, and the results showed that the proposed model using an RNN and multiple GRUs with the synthetic minority oversampling technique (SMOTE) reached the best performance level. This is the highest accuracy reported for an RNN on the Cleveland dataset and is promising for making early heart disease predictions for patients.
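A hedged Keras sketch of the hybrid architecture described above, with SMOTE to balance the Cleveland data and stacked GRU and LSTM layers trained with Adam; the file layout, input reshaping, and layer widths are assumptions, not the authors' exact 98.69%-accuracy configuration.

```python
# SMOTE-balanced Cleveland data fed to stacked GRU/LSTM layers trained
# with Adam. File layout, reshaping, and layer widths are assumptions.
import pandas as pd
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split
from tensorflow.keras import layers, models

df = pd.read_csv("cleveland.csv")                  # assumed 13-feature layout
X, y = df.drop(columns="target").values, df["target"].values
X, y = SMOTE(random_state=42).fit_resample(X, y)   # balance the classes

# Treat the 13 clinical features as a length-13 sequence for the RNN.
X = X.reshape((X.shape[0], X.shape[1], 1))
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

model = models.Sequential([
    layers.Input(shape=(X.shape[1], 1)),
    layers.GRU(64, return_sequences=True),   # multiple GRUs ...
    layers.GRU(32, return_sequences=True),
    layers.LSTM(16),                         # ... combined with an LSTM
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X_tr, y_tr, epochs=50, batch_size=32, validation_data=(X_te, y_te))
```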


Author(s):  
Joke Daems ◽  
Orphée De Clercq ◽  
Lieve Macken

Whereas post-edited texts have been shown to be of comparable quality to human translations, or even better, one study shows that people still seem to prefer human-translated texts. The idea of texts being inherently different despite being of high quality is not new. Translated texts, for example, are also different from original texts, a phenomenon referred to as 'Translationese'. Research into Translationese has shown that, whereas humans cannot distinguish between translated and original text, computers can be trained to detect Translationese successfully. It remains to be seen whether the same can be done for what we call Post-editese. We first establish whether humans are capable of distinguishing post-edited texts from human translations, and then establish whether it is possible to build a supervised machine-learning model that can distinguish between translated and post-edited text.
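A minimal sketch of the kind of supervised classifier such a study might build to separate post-edited text from human translations; the character n-gram features and linear SVM are common Translationese-detection choices and are assumptions here, not the authors' feature set.

```python
# Character n-gram TF-IDF features and a linear SVM, a common setup for
# Translationese-style detection. The toy data below only illustrates
# the interface; a real study would use hundreds of labeled segments.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = [
    "the system was evaluated on a held-out test set",    # human translation
    "we evaluated the system using a held out test set",  # post-edited
]
labels = [0, 1]  # 0 = human translation, 1 = post-edited

clf = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(1, 3)),
    LinearSVC(),
)
clf.fit(texts, labels)
print(clf.predict(["the model was tested on unseen data"]))
```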


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Max Schneckenburger ◽  
Sven Höfler ◽  
Luis Garcia ◽  
Rui Almeida ◽  
Rainer Börret

Abstract Robot polishing is increasingly being used in the production of high-end glass workpieces such as astronomy mirrors, lithography lenses, laser gyroscopes, or high-precision coordinate measuring machines. The quality of optical components such as lenses or mirrors can be described by shape errors and surface roughness. While the trend towards sub-nanometre surface finishes and features progresses, matching both form and finish coherently in complex parts remains a major challenge. With increasing optic sizes, the stability of the polishing process becomes more and more important. If not empirically known, the optical surface must be measured after each polishing step. One approach is to mount sensors on the polishing head in order to measure process-relevant quantities; machine learning algorithms can then be applied to these data to predict surface values. Because installing the sensors modified the polishing head and thereby influenced the process, a first machine learning model could only make removal predictions with insufficient accuracy. The aim of this work is to present a polishing head optimised for the sensors, coupled with a machine learning model that predicts material removal and failure of the polishing head during robot polishing. The artificial neural network is developed in the Python programming language using the Keras deep learning library. It starts with a simple network architecture and common training parameters, and the model is then optimised step by step using different methods. The data collected by a design of experiments with the sensor-integrated glass polishing head are used to train the machine learning model and to validate the results. The neural network achieves a prediction accuracy for material removal of 99.22%.

Article highlights:

- First machine learning model application for robot polishing of optical glass ceramics.
- The polishing process is influenced by a large number of different process parameters. Machine learning can be used to adjust any process parameter and predict the change in material removal with a certain probability. For a trained model, empirical experiments are no longer necessary.
- Equipping a polishing head with sensors provides the possibility of 100% control.
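A small Keras regression sketch of the kind described above, mapping sensor readings from the polishing head to predicted material removal; the input width, layer sizes, and random placeholder data are illustrative assumptions, not the paper's optimised architecture.

```python
# Dense regression network: polishing-head sensor readings in,
# predicted material removal out. All sizes and the random stand-in
# data are assumptions.
import numpy as np
from tensorflow.keras import layers, models

N_SENSORS = 12                       # assumed number of sensor channels

model = models.Sequential([
    layers.Input(shape=(N_SENSORS,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(1),                 # predicted material removal
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# Random placeholders standing in for the design-of-experiments runs.
rng = np.random.default_rng(0)
X = rng.random((500, N_SENSORS))
y = rng.random(500)
model.fit(X, y, epochs=10, batch_size=32, validation_split=0.2)
```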

