Machine Learning and Prediction-Based Resource Management in IoT Considering QoS

The Internet of Things (IoT) is one of the fastest-growing technology paradigms, deployed across virtually every sector, and Quality of Service (QoS) is a critical component of such systems from the perspective of both producers and consumers (ProSumers). Much of the recent research on QoS in IoT has applied Machine Learning (ML) techniques to improve performance, and the adoption of ML methodologies, from open-source frameworks to task-specific algorithms, has become a common need across technologies and domains. In this work we propose an ML-based prediction model for resource optimization and QoS provisioning in the IoT environment. The proposed methodology is implemented using a multi-layer neural network (MNN) with Long Short-Term Memory (LSTM) learning in a layered IoT environment. The model treats resources such as bandwidth and energy as QoS parameters and provides the required QoS through their efficient utilization. The performance of the proposed model is evaluated in a real field deployment on a civil construction project, where real data is collected using video sensors and mobile devices as edge nodes. The prediction model shows improved bandwidth and energy utilization, in turn providing the required QoS in the IoT environment.
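The abstract does not include implementation details, but the core idea, an LSTM that forecasts the next step of resource usage from a window of past readings, can be sketched briefly. The layer sizes, window length, and synthetic data below are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch (not the authors' implementation): a stacked LSTM that
# predicts next-step bandwidth and energy usage from a window of past
# readings. Layer sizes, window length, and the synthetic data are
# illustrative assumptions.
import numpy as np
import tensorflow as tf

WINDOW, FEATURES = 12, 2  # 12 past readings of (bandwidth, energy)

# Synthetic stand-in for sensor readings collected at the edge nodes.
series = np.random.rand(1000, FEATURES).astype("float32")
X = np.stack([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])
y = series[WINDOW:]  # next-step (bandwidth, energy) targets

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, return_sequences=True, input_shape=(WINDOW, FEATURES)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(FEATURES),  # predicted (bandwidth, energy)
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```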

2021, Vol 13 (2), pp. 1-12
Author(s): Sumit Das, Manas Kumar Sanyal, Sarbajyoti Mallik

A great deal of fake news circulates across various media, misleading people; it is a serious problem in this intelligent era, and solutions are needed. This article proposes an approach that analyzes fake and real news, focusing on sentiment, significance, and novelty, a few characteristics of such news. Expressing news reports as numbers and metadata makes it possible to manipulate daily information mathematically and statistically. The objective of this article is to analyze and filter out troublesome fake news. The proposed model is integrated into a web application through which users can distinguish real from fake content. The authors use artificial intelligence (AI) algorithms, specifically logistic regression and LSTM (long short-term memory), to make the application effective. The results of the proposed model are compared with existing models.
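As a rough illustration of the logistic-regression half of such a pipeline (the article's actual features and data are not shown here), the following sketch classifies toy headlines using TF-IDF features; every headline and label is invented.

```python
# Minimal sketch (not the authors' pipeline): logistic regression over
# TF-IDF features, standing in for the numeric representation of news
# reports the abstract describes. All headlines and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "government confirms new infrastructure budget",        # toy "real"
    "celebrity secretly replaced by clone, insiders say",   # toy "fake"
    "central bank announces interest rate decision",        # toy "real"
    "miracle fruit cures all diseases overnight",           # toy "fake"
]
labels = [0, 1, 0, 1]  # 0 = real, 1 = fake

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["shocking miracle cure discovered overnight"]))
```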


Mathematics, 2020, Vol 8 (9), pp. 1620
Author(s): Ganjar Alfian, Muhammad Syafrudin, Norma Latif Fitriyani, Muhammad Anshari, Pavel Stasa, et al.

Extracting information from individual risk factors provides an effective way to identify diabetes risk and associated complications, such as retinopathy, at an early stage. Deep learning and machine learning algorithms are being utilized to extract information from individual risk factors to improve early-stage diagnosis. This study proposes a deep neural network (DNN) combined with recursive feature elimination (RFE) for early prediction of diabetic retinopathy (DR) based on individual risk factors. The proposed model uses RFE to remove irrelevant features and the DNN to classify the disease. A publicly available dataset was used to evaluate the proposed model against several current best-practice models on early-stage DR prediction. The proposed model achieved 82.033% prediction accuracy, significantly outperforming the current models and showing that important risk factors for retinopathy can be successfully extracted using RFE. In addition, to evaluate the robustness and generalization of the proposed prediction model, we compared it with other machine learning models and datasets (nephropathy and hypertension–diabetes). The proposed prediction model will help improve early-stage retinopathy diagnosis based on individual risk factors.
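A minimal sketch of the RFE-then-classifier pattern follows, with assumptions flagged: synthetic data stands in for the diabetic-retinopathy risk factors, a logistic-regression ranker drives RFE (scikit-learn's RFE needs an estimator exposing coef_ or feature_importances_), and an MLP stands in for the paper's DNN.

```python
# Minimal sketch (assumed setup, not the paper's exact configuration):
# RFE ranks risk factors, then a neural network classifies on the
# surviving features. Data is synthetic.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=16, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Keep the 6 most informative "risk factors", drop the rest.
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=6)
X_tr_sel, X_te_sel = rfe.fit_transform(X_tr, y_tr), rfe.transform(X_te)

dnn = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
dnn.fit(X_tr_sel, y_tr)
print("test accuracy:", dnn.score(X_te_sel, y_te))
```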


Information, 2021, Vol 12 (9), pp. 374
Author(s): Babacar Gaye, Dezheng Zhang, Aziguli Wulamu

With the extensive availability of social media platforms, Twitter has become a significant tool for gauging people's views, opinions, attitudes, and emotions towards certain entities. Within this frame of reference, sentiment analysis of tweets has become one of the most fascinating research areas in natural language processing. A variety of techniques have been devised for sentiment analysis, but there is still room for improvement in accuracy and efficacy. This study proposes a novel approach that exploits the advantages of a lexical dictionary, machine learning, and deep learning classifiers. We classified tweets based on the sentiments extracted by TextBlob, using a stacked ensemble of three long short-term memory (LSTM) networks as base classifiers and logistic regression (LR) as a meta classifier. The proposed model proved effective and time-saving, since LSTM extracts features without any human intervention and no separate feature-extraction step is required. We compared our approach with conventional machine learning models such as logistic regression, AdaBoost, and random forest, and also included state-of-the-art deep learning models in the comparison. Experiments were conducted on the Sentiment140 dataset and evaluated in terms of accuracy, precision, recall, and F1 score. Empirical results show that the proposed approach achieves state-of-the-art results, with an accuracy score of 99%.
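The stacking recipe can be sketched as follows, with the class probabilities of three small LSTMs serving as input features for a logistic-regression meta classifier. Vocabulary size, sequence length, and the random toy data are placeholders for the TextBlob-labelled tweets, not the authors' setup.

```python
# Minimal sketch (assumptions throughout): three LSTM base classifiers
# whose predicted probabilities feed a logistic-regression meta classifier.
import numpy as np
import tensorflow as tf
from sklearn.linear_model import LogisticRegression

VOCAB, SEQ_LEN, N = 5000, 40, 600
X = np.random.randint(1, VOCAB, size=(N, SEQ_LEN))  # toy tokenized tweets
y = np.random.randint(0, 2, size=N)                 # toy sentiment labels

def make_lstm(seed):
    tf.keras.utils.set_random_seed(seed)  # vary initialization per base model
    m = tf.keras.Sequential([
        tf.keras.layers.Embedding(VOCAB, 32),
        tf.keras.layers.LSTM(32),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    m.compile(optimizer="adam", loss="binary_crossentropy")
    m.fit(X, y, epochs=2, batch_size=32, verbose=0)
    return m

bases = [make_lstm(s) for s in (0, 1, 2)]
meta_features = np.hstack([m.predict(X, verbose=0) for m in bases])  # (N, 3)
meta = LogisticRegression().fit(meta_features, y)
```

A production stacking setup would typically fit the meta classifier on held-out (out-of-fold) base-model predictions rather than training-set outputs; the sketch omits that split for brevity.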


In this paper we propose a novel supervised machine learning model to predict the polarity of sentiments expressed in microblogs. The proposed model has a stacked neural network structure consisting of Long Short-Term Memory (LSTM) and Convolutional Neural Network (CNN) layers. To capture the long-term dependencies of sentiments in the text ordering of a microblog, the proposed model employs an LSTM layer. The encodings produced by the LSTM layer are then fed to a CNN layer, which extracts localized patterns; together, these layers capture both local and global long-term dependencies in the microblog text. The proposed model was observed to perform better and give improved prediction accuracy compared to semantic, machine learning, and deep neural network approaches such as SVM, CNN, LSTM, and CNN-LSTM. This paper uses the benchmark Stanford Large Movie Review dataset to show the significance of the new approach, on which the prediction accuracy of the proposed approach is comparable to other state-of-the-art approaches.
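A compact Keras sketch of the described LSTM-then-CNN stacking follows; layer sizes and the toy data are assumptions, not the paper's configuration.

```python
# Minimal sketch (layer sizes and data are assumptions): an LSTM layer whose
# per-timestep encodings feed a 1-D convolution, mirroring the LSTM-then-CNN
# stacking the abstract describes.
import numpy as np
import tensorflow as tf

VOCAB, SEQ_LEN = 10000, 200
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB, 64),
    tf.keras.layers.LSTM(64, return_sequences=True),  # keep full sequence for the CNN
    tf.keras.layers.Conv1D(64, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # positive/negative polarity
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Toy stand-in for tokenized movie reviews (e.g. the Stanford dataset).
X = np.random.randint(1, VOCAB, size=(64, SEQ_LEN))
y = np.random.randint(0, 2, size=64)
model.fit(X, y, epochs=1, verbose=0)
```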


Author(s): Xiaoqiang Wang, Yali Du, Shengyu Zhu, Liangjun Ke, Zhitang Chen, et al.

Discovering causal relations among a set of variables is a long-standing question in many empirical sciences. Recently, Reinforcement Learning (RL) has achieved promising results in causal discovery from observational data. However, searching the space of directed graphs and enforcing acyclicity by implicit penalties tend to be inefficient, restricting the existing RL-based method to small-scale problems. In this work, we propose a novel RL-based approach to causal discovery that incorporates RL into the ordering-based paradigm. Specifically, we formulate the ordering search problem as a multi-step Markov decision process, implement the ordering-generating process with an encoder-decoder architecture, and finally use RL to optimize the proposed model based on reward mechanisms designed for each ordering. A generated ordering is then processed using variable selection to obtain the final causal graph. We analyze the consistency and computational complexity of the proposed method, and empirically show that a pretrained model can be exploited to accelerate training. Experimental results on both synthetic and real datasets show that the proposed method achieves much improved performance over the existing RL-based method.
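The ordering-based paradigm hinges on scoring candidate variable orderings. The sketch below illustrates one common scoring choice under a linear-Gaussian assumption (regress each variable on its predecessors and sum the log residual variances); it is an illustration of the paradigm, not the paper's reward design, and RL would search over orderings using such a score as the reward.

```python
# Minimal sketch of the ordering-based idea (not the paper's reward design):
# score a candidate ordering by regressing each variable on its predecessors
# and summing the log residual variances; lower is better under a
# linear-Gaussian assumption.
import numpy as np
from sklearn.linear_model import LinearRegression

def ordering_score(X, order):
    """X: (n_samples, n_vars); order: a permutation of variable indices."""
    score = 0.0
    for pos, var in enumerate(order):
        if pos == 0:
            resid = X[:, var] - X[:, var].mean()
        else:
            parents = X[:, order[:pos]]
            resid = X[:, var] - LinearRegression().fit(parents, X[:, var]).predict(parents)
        score += np.log(resid.var() + 1e-12)
    return score

# Toy linear-Gaussian data with true causal order (0, 1, 2): x0 -> x1 -> x2.
rng = np.random.default_rng(0)
x0 = rng.normal(size=500)
x1 = 2 * x0 + rng.normal(size=500)
x2 = -x1 + rng.normal(size=500)
X = np.column_stack([x0, x1, x2])
print(ordering_score(X, [0, 1, 2]), "<", ordering_score(X, [2, 1, 0]))
```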


2021, Vol 11 (17), pp. 7940
Author(s): Mohammed Al-Sarem, Abdullah Alsaeedi, Faisal Saeed, Wadii Boulila, Omair AmeerBakhsh

Spreading rumors on social media is a cybercrime that affects people, societies, and governments. For instance, some criminals create rumors and post them on the internet, and other people then help spread them. Spreading rumors can also be a form of cyber abuse, where rumors or lies about a victim are posted online to send threatening messages or to share the victim's personal information. During pandemics, large numbers of rumors spread on social media very quickly, with dramatic effects on people's health. Detecting these rumors manually is very difficult on such open platforms, so several researchers have studied intelligent methods for detecting them. The detection methods can be classified mainly into machine learning-based and deep learning-based methods. Deep learning methods hold comparative advantages over machine learning ones, as they require no separate preprocessing and feature-engineering steps and have shown superior performance in many fields. This paper therefore proposes a novel hybrid deep learning model for detecting COVID-19-related rumors on social media (LSTM-PCNN), based on a Long Short-Term Memory (LSTM) network and Concatenated Parallel Convolutional Neural Networks (PCNN). The experiments were conducted on the ArCOV-19 dataset of 3157 tweets, of which 1480 were rumors (46.87%) and 1677 were non-rumors (53.12%). The proposed model showed superior performance compared to other methods in terms of accuracy, recall, precision, and F-score.
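A minimal functional-API sketch of an LSTM feeding parallel, concatenated Conv1D branches, echoing the LSTM-PCNN structure described above, follows; all sizes and data shapes are assumptions.

```python
# Minimal sketch (sizes are assumptions): an LSTM followed by parallel
# Conv1D branches whose outputs are concatenated, echoing the LSTM-PCNN
# structure described in the abstract.
import tensorflow as tf
from tensorflow.keras import layers

VOCAB, SEQ_LEN = 8000, 60
inp = layers.Input(shape=(SEQ_LEN,))
x = layers.Embedding(VOCAB, 64)(inp)
x = layers.LSTM(64, return_sequences=True)(x)

# Parallel convolutions with different kernel sizes, then concatenation.
branches = [layers.GlobalMaxPooling1D()(layers.Conv1D(32, k, activation="relu")(x))
            for k in (2, 3, 4)]
merged = layers.Concatenate()(branches)
out = layers.Dense(1, activation="sigmoid")(merged)  # rumor vs. non-rumor

model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```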


2021, Vol 2021, pp. 1-13
Author(s): Chunmei Fan, Jiansheng Zhu, Haroon Elahi, Lipeng Yang, Beibei Li

Fifth-generation (5G) communication technologies and artificial intelligence enable the design and deployment of sophisticated solutions for enhanced user experience and superior network-based service delivery. However, the performance of systems offering 5G-based services depends on various factors. In this paper, we consider the online railway ticketing system in China, which serves hundreds of millions of people daily. The system's online access rates vary over time, and the resulting fluctuations affect its overall dependability and service quality. We use a long short-term memory network, particle swarm optimization, and differential evolution to construct DP-LSTM, a hybrid-optimized model that predicts network flow for dependable, quality-enhanced service delivery. We evaluate the proposed model using real data collected over six months from the "12306 online ticketing" system and compare its performance with mainstream network traffic prediction models, using mean absolute percentage error, mean absolute error, and root mean square error for evaluation. Experimental results show the superiority of the proposed model.
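The three evaluation metrics named in the abstract are straightforward to compute; a small sketch with invented numbers:

```python
# Minimal sketch: MAPE, MAE, and RMSE, the metrics the study uses for
# evaluation, computed with NumPy. The example arrays are invented.
import numpy as np

def mape(y_true, y_pred):
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0

def mae(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

y_true = np.array([120.0, 150.0, 90.0])   # e.g. requests per interval
y_pred = np.array([110.0, 158.0, 97.0])
print(mape(y_true, y_pred), mae(y_true, y_pred), rmse(y_true, y_pred))
```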


2019, Vol 3 (3), pp. 357-363
Author(s): Soffa Zahara, Sugianto, M. Bahril Ilmiddafiq

Long Short-Term Memory (LSTM) is known as an optimized Recurrent Neural Network (RNN) architecture that overcomes RNNs' shortcomings in maintaining memory over long periods. As part of the machine learning family, LSTM is also notable as a strong choice for time-series prediction. Machine learning is currently a burning issue in the economic world, and studies predicting macroeconomic and microeconomic indicators abound. The inflation rate is used for decision making by central banks as well as the private sector. In Indonesia, the Consumer Price Index (CPI) is one of the best-practice inflation indicators, alongside the Wholesale Price Index and Gross Domestic Product (GDP). Since CPI data can indicate the direction of the next inflation move, we built a CPI prediction model using the LSTM method. The network input consists of 28 staple-price variables in Surabaya and the output is the CPI value; the entire prediction model was developed on the Amazon Web Services (AWS) cloud. To improve accuracy, we tested several optimization algorithms: Stochastic Gradient Descent (SGD), Root Mean Square Propagation (RMSProp), Adaptive Gradient (AdaGrad), Adaptive Moment Estimation (Adam), Adadelta, Nesterov Adam (Nadam), and Adamax. The results indicate that Nadam, with an RMSE of 4.008, has the lowest error of the algorithms tested and is the most accurate optimizer for predicting the CPI value.
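The optimizer comparison can be sketched as a loop that trains the same small LSTM under each optimizer and records validation RMSE; the data and layer sizes below are stand-ins, not the study's AWS setup.

```python
# Minimal sketch (data and sizes are assumptions): train the same small
# LSTM under each optimizer the study compares and record validation RMSE.
import numpy as np
import tensorflow as tf

WINDOW, FEATURES = 6, 28  # 28 staple-price inputs, as in the abstract
X = np.random.rand(300, WINDOW, FEATURES).astype("float32")
y = np.random.rand(300, 1).astype("float32")  # CPI stand-in

results = {}
for opt in ["sgd", "rmsprop", "adagrad", "adam", "adadelta", "nadam", "adamax"]:
    tf.keras.utils.set_random_seed(0)  # same initialization for a fair comparison
    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(32, input_shape=(WINDOW, FEATURES)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer=opt, loss="mse")
    hist = model.fit(X, y, validation_split=0.2, epochs=5, verbose=0)
    results[opt] = float(np.sqrt(hist.history["val_loss"][-1]))

print(sorted(results.items(), key=lambda kv: kv[1]))  # best optimizer first
```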


2020, Vol 12 (21), pp. 3654
Author(s): Minkyu Kim, Hyun Yang, Jonghwa Kim

Recent global warming has been accompanied by high water temperatures (HWTs) in coastal areas of Korea, resulting in huge economic losses in the marine fishery industry due to disease outbreaks in aquaculture. To mitigate these losses, such outbreaks must be predicted so they can be prevented or responded to as early as possible. In the present study, we propose an HWT prediction method that applies sea surface temperatures (SSTs) and deep-learning technology in a long short-term memory (LSTM) model based on a recurrent neural network (RNN). The LSTM model predicts time series data for the target areas, including the coastal area from Goheung to Yeosu, Jeollanam-do, Korea, which has experienced frequent HWT occurrences in recent years. To evaluate the performance of the SST prediction model, we compared its results with those of an existing SST prediction model, using the SST data and additional external meteorological data. The proposed model outperformed the existing model in predicting SSTs and HWTs. Although the performance of the proposed model decreased as the prediction interval increased, it consistently outperformed the European Centre for Medium-Range Weather Forecasts (ECMWF) prediction model. The method proposed in this study may therefore be applied to prevent future damage to the aquaculture industry.
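A brief sketch of the windowed-forecasting idea, using a synthetic temperature series and an assumed window of 30 days, shows how accuracy can be inspected as the prediction horizon grows:

```python
# Minimal sketch (assumed shapes, synthetic data): window a daily SST
# series and predict H days ahead with an LSTM, so accuracy can be
# inspected as the prediction interval H grows.
import numpy as np
import tensorflow as tf

def make_windows(series, window, horizon):
    X = np.stack([series[i:i + window]
                  for i in range(len(series) - window - horizon + 1)])
    y = series[window + horizon - 1:]
    return X[..., None], y  # add a feature axis for the LSTM

# Synthetic stand-in for a daily SST series with a seasonal cycle.
sst = (15 + 5 * np.sin(np.linspace(0, 20, 1000))
       + np.random.normal(0, 0.3, 1000)).astype("float32")

for horizon in (1, 3, 7):  # forecast 1, 3, and 7 days ahead
    X, y = make_windows(sst, window=30, horizon=horizon)
    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(32, input_shape=(30, 1)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    hist = model.fit(X, y, validation_split=0.2, epochs=3, verbose=0)
    print(horizon, "day(s) ahead, val MSE:", hist.history["val_loss"][-1])
```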


Complexity, 2020, Vol 2020, pp. 1-12
Author(s): Fang Yao, Wei Liu, Xingyong Zhao, Li Song

This paper develops an integrated machine learning and enhanced statistical approach for wind power interval forecasting. A time-series wind power forecasting model is formulated as the theoretical basis of the method. The proposed model takes into account two important characteristics of wind speed: its nonlinearity and its time-changing distribution. Based on the proposed model, six machine learning regression algorithms are employed to forecast the prediction interval of the wind power output. The six methods are tested using real wind speed data collected at a wind station in Australia. For wind speed forecasting, the long short-term memory (LSTM) network algorithm outperforms the other five algorithms, while in terms of the prediction interval, the five nonlinear algorithms show superior performance. The case studies demonstrate that, combined with an appropriate nonlinear machine learning regression algorithm, the proposed methodology is effective for wind power interval forecasting.
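One common way to produce such prediction intervals (not necessarily the paper's enhanced statistical approach) is quantile regression; the sketch below fits lower and upper quantile models to a synthetic wind-speed/power relationship.

```python
# Minimal sketch (one common interval technique, not necessarily the
# paper's method): quantile gradient boosting producing a 90% prediction
# interval for wind power from wind speed. Data and power curve are toys.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
speed = rng.uniform(3, 25, size=(800, 1))                                # m/s
power = np.clip(speed[:, 0] ** 3 / 15, 0, 600) + rng.normal(0, 25, 800)  # kW

lower = GradientBoostingRegressor(loss="quantile", alpha=0.05).fit(speed, power)
upper = GradientBoostingRegressor(loss="quantile", alpha=0.95).fit(speed, power)

test = np.array([[8.0], [15.0], [22.0]])
for s, lo, hi in zip(test[:, 0], lower.predict(test), upper.predict(test)):
    print(f"wind {s:4.1f} m/s -> power in [{lo:7.1f}, {hi:7.1f}] kW")
```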

