A Hybrid Optimized LSTM Models for Human Activity Recognition with IOT Devices

Author(s):  
S. Arokiaraj ◽  
Dr. N. Viswanathan

With the advent of the Internet of Things (IoT), human activity (HA) recognition has contributed many applications to health care, in terms of diagnosis and clinical processes. IoT devices must be aware of human movements to provide better aid in clinical applications as well as in users' daily activities. With machine and deep learning algorithms, HA recognition systems have also improved significantly in recognition accuracy. However, most existing models still need improvement in terms of accuracy and computational overhead. In this research paper, we propose a BAT-optimized Long Short-Term Memory network (BAT-LSTM) for effective recognition of human activities using real-time IoT systems. The data are collected by invasively implanted IoT devices. The proposed BAT-LSTM then extracts temporal features, which are used to classify human activities. Nearly 100,000 samples were collected and used to evaluate the proposed model. To validate the proposed framework, accuracy, precision, recall, specificity, and F1-score are chosen as metrics, and a comparison is made with other state-of-the-art deep learning models. The findings show that the proposed model outperforms the other learning models, making it well suited to HA recognition.
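The BAT half of BAT-LSTM refers to the bat-inspired metaheuristic used to tune the network. The abstract does not give its implementation; a minimal NumPy sketch of the standard bat algorithm, here minimizing a toy objective standing in for LSTM hyperparameter search (all names and constants are illustrative assumptions), could look like:

```python
import numpy as np

def bat_optimize(objective, lower, upper, n_bats=20, n_iter=100, seed=0):
    """Minimal bat algorithm: bats fly toward the best solution with
    frequency-scaled velocities, plus a local random walk near the best."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    dim = lower.size
    pos = rng.uniform(lower, upper, (n_bats, dim))
    vel = np.zeros((n_bats, dim))
    fit = np.array([objective(p) for p in pos])
    best, best_fit = pos[fit.argmin()].copy(), fit.min()
    loudness, pulse_rate = 0.9, 0.5
    for _ in range(n_iter):
        freq = rng.uniform(0.0, 2.0, (n_bats, 1))   # random frequency per bat
        vel += (best - pos) * freq                  # pull toward current best
        cand = np.clip(pos + vel, lower, upper)
        walk = rng.random(n_bats) > pulse_rate      # some bats search near best
        cand[walk] = np.clip(best + 0.1 * rng.standard_normal((walk.sum(), dim)),
                             lower, upper)
        cand_fit = np.array([objective(p) for p in cand])
        accept = (cand_fit < fit) & (rng.random(n_bats) < loudness)
        pos[accept], fit[accept] = cand[accept], cand_fit[accept]
        if fit.min() < best_fit:
            best, best_fit = pos[fit.argmin()].copy(), fit.min()
    return best, best_fit

# stand-in objective: a 2D sphere function instead of LSTM validation loss
best, val = bat_optimize(lambda p: float(np.sum(p ** 2)), [-5, -5], [5, 5])
```

In the paper's setting, the objective would score an LSTM configuration (e.g., units, learning rate) by validation loss rather than this toy function.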

Information ◽  
2021 ◽  
Vol 12 (9) ◽  
pp. 374
Author(s):  
Babacar Gaye ◽  
Dezheng Zhang ◽  
Aziguli Wulamu

With the extensive availability of social media platforms, Twitter has become a significant tool for the acquisition of people's views, opinions, attitudes, and emotions towards certain entities. Within this frame of reference, sentiment analysis of tweets has become one of the most fascinating research areas in the field of natural language processing. A variety of techniques have been devised for sentiment analysis, but there is still room for improvement where the accuracy and efficacy of the system are concerned. This study proposes a novel approach that exploits the advantages of a lexical dictionary, machine learning, and deep learning classifiers. We classified the tweets based on the sentiments extracted by TextBlob, using a stacked ensemble of three long short-term memory (LSTM) networks as base classifiers and logistic regression (LR) as a meta-classifier. The proposed model proved effective and time-saving, since it requires no manual feature extraction: the LSTM extracts features without any human intervention. We compared our proposed approach with conventional machine learning models such as logistic regression, AdaBoost, and random forest, and also included state-of-the-art deep learning models in the comparison. Experiments were conducted on the sentiment140 dataset and evaluated in terms of accuracy, precision, recall, and F1 score. Empirical results showed that our proposed approach achieved state-of-the-art results, with an accuracy score of 99%.
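The stacking step can be sketched without the LSTM bases: each base classifier emits a probability per tweet, and a logistic-regression meta-classifier is fit on those stacked outputs. A minimal NumPy illustration (synthetic base outputs and plain gradient descent; all details are assumptions, not taken from the paper):

```python
import numpy as np

def fit_meta_lr(base_probs, labels, lr=0.5, epochs=2000):
    """Fit a logistic-regression meta-classifier on stacked base-model
    probabilities (one column per base classifier) by gradient descent."""
    X = np.column_stack([base_probs, np.ones(len(labels))])  # add bias column
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))                     # sigmoid
        w -= lr * X.T @ (p - labels) / len(labels)
    return w

def meta_predict(base_probs, w):
    X = np.column_stack([base_probs, np.ones(len(base_probs))])
    return (1.0 / (1.0 + np.exp(-X @ w)) >= 0.5).astype(int)

# synthetic stand-ins for three base classifiers' probability outputs
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 300)
base = np.clip(y[:, None] * 0.6 + 0.2 + 0.15 * rng.standard_normal((300, 3)), 0, 1)
w = fit_meta_lr(base, y)
acc = float((meta_predict(base, w) == y).mean())
```

In practice the base probabilities should come from held-out (out-of-fold) predictions so the meta-classifier does not overfit to the bases' training error.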


2021 ◽  
Vol 5 (4) ◽  
pp. 380
Author(s):  
Abdulkareem A. Hezam ◽  
Salama A. Mostafa ◽  
Zirawani Baharum ◽  
Alde Alanda ◽  
Mohd Zaki Salikon

The impact of Distributed Denial-of-Service (DDoS) attacks is undeniably significant and, because of the proliferation of IoT devices, is expected to keep rising. Even though many solutions have been developed to identify and prevent this assault, which mainly targets IoT devices, the danger persists and is now larger than ever. Denial-of-service attacks are commonly launched to prevent legitimate requests from being completed: the targeted machines or resources are swamped with false requests in an attempt to overpower the systems and block many or all legitimate requests. In recent years there have been many efforts to apply machine learning to puzzle-like middle-box problems and other Artificial Intelligence (AI) problems. Modern botnets are so sophisticated that they may evolve daily, as in the case of the Mirai botnet. This research presents a deep learning method based on a real-world dataset gathered by infecting nine Internet of Things devices with two of the most destructive DDoS botnets, Mirai and Bashlite, and then analyzing the results. The paper proposes the BiLSTM-CNN model, which combines a Bidirectional Long Short-Term Memory (BiLSTM) recurrent neural network and a Convolutional Neural Network (CNN): the CNN handles data processing and feature optimization, and the BiLSTM performs classification. The model is evaluated by comparing its results with three standard deep learning models: CNN, Recurrent Neural Network (RNN), and Long Short-Term Memory (LSTM-RNN). Realistic datasets are needed to fully test such models' capabilities; the N-BaIoT dataset meets this need and also includes multi-device IoT data, containing DDoS attacks launched by two of the most widely used botnets, Bashlite and Mirai. The four models are tested using 10-fold cross-validation.
The obtained results show that BiLSTM-CNN outperforms all the individual classifiers in every aspect, achieving an accuracy of 89.79% and an error rate of 0.1546, with a very high precision of 93.92% and an F1-score and recall of 85.73% and 89.11%, respectively. The RNN achieves the highest accuracy among the three individual models (89.77%), followed by LSTM with the second-highest accuracy (89.71%). CNN achieves the lowest accuracy of all classifiers, at 89.50%.
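All of the reported figures derive from the confusion matrix. For reference, a small helper (plain Python, not from the paper) showing how accuracy, precision, recall, F1, and error rate relate to the four counts:

```python
def classification_metrics(tp, fp, fn, tn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)   # of predicted positives, how many were real
    recall = tp / (tp + fn)      # of real positives, how many were found
    f1 = 2 * precision * recall / (precision + recall)
    error_rate = 1.0 - accuracy
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1, "error_rate": error_rate}

# illustrative counts only; all five metrics come out to 0.8 / 0.2 here
m = classification_metrics(tp=80, fp=20, fn=20, tn=80)
```

Note that, as in the BiLSTM-CNN results above, precision and recall can diverge noticeably even when accuracies are close, which is why all four metrics are reported.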


2021 ◽  
Vol 11 (5) ◽  
pp. 2149
Author(s):  
Moumita Sen Sarma ◽  
Kaushik Deb ◽  
Pranab Kumar Dhar ◽  
Takeshi Koshiba

Sports activities play a crucial role in preserving our health and mind. Due to the rapid growth of sports video repositories, automated classification has become essential for easy access and retrieval, content-based recommendations, contextual advertising, etc. Traditional Bangladeshi sport is a genre of sports that bears the cultural significance of Bangladesh, and classifying this genre can act as a catalyst in reviving its lost dignity. In this paper, deep learning is used to classify traditional Bangladeshi sports videos by extracting both spatial and temporal features from the videos. To this end, a new Traditional Bangladeshi Sports Video (TBSV) dataset is constructed containing five classes: Boli Khela, Kabaddi, Lathi Khela, Kho Kho, and Nouka Baich. A key contribution of this paper is a model developed from scratch that incorporates the two most prominent deep learning algorithms: the convolutional neural network (CNN) and long short-term memory (LSTM). Moreover, a transfer learning approach with fine-tuned VGG19 and LSTM is used for TBSV classification. The proposed model is further assessed on four challenging datasets: KTH, UCF-11, UCF-101, and UCF Sports. It outperforms some recent works on these datasets while showing 99% average accuracy on the TBSV dataset.
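Video classifiers of this kind feed a fixed-length frame sequence to the CNN-LSTM. A common preprocessing step (not specified in the abstract, shown here purely as an assumption) is to sample frames evenly across each clip so every video yields the same sequence length:

```python
import numpy as np

def sample_frames(n_total, n_sample):
    """Indices of n_sample frames spread evenly across a clip of n_total
    frames, giving a fixed-length sequence for the CNN-LSTM."""
    return np.linspace(0, n_total - 1, n_sample).round().astype(int)

idx = sample_frames(100, 10)   # -> [ 0 11 22 33 44 55 66 77 88 99]
```

Each sampled frame would then pass through the CNN (e.g., the fine-tuned VGG19) and the resulting per-frame feature vectors through the LSTM.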


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1064
Author(s):  
I Nyoman Kusuma Wardana ◽  
Julian W. Gardner ◽  
Suhaib A. Fahmy

Accurate air quality monitoring requires processing of multi-dimensional, multi-location sensor data, which has previously been considered in centralised machine learning models. These are often unsuitable for resource-constrained edge devices. In this article, we address this challenge by: (1) designing a novel hybrid deep learning model for hourly PM2.5 pollutant prediction; (2) optimising the obtained model for edge devices; and (3) examining model performance running on the edge devices in terms of both accuracy and latency. The hybrid deep learning model in this work comprises a 1D Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) to predict hourly PM2.5 concentration. The results show that our proposed model outperforms other deep learning models, evaluated by calculating RMSE and MAE errors. The proposed model was optimised for edge devices, the Raspberry Pi 3 Model B+ (RPi3B+) and Raspberry Pi 4 Model B (RPi4B). This optimised model reduced file size to a quarter of the original, with further size reduction achieved by implementing different post-training quantisation schemes. In total, 8272 hourly samples were continuously fed to the edge device, with the RPi4B executing the model twice as fast as the RPi3B+ in all quantisation modes. Full-integer quantisation produced the lowest execution time, with latencies of 2.19 s and 4.73 s for RPi4B and RPi3B+, respectively.
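The four-fold size reduction follows directly from post-training quantisation: 32-bit float weights become 8-bit integers plus a scale factor. A schematic NumPy version of symmetric full-integer quantisation (illustrative only; a production scheme such as TensorFlow Lite's has more detail, e.g. per-channel scales and zero points):

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor quantisation: map float32 weights to int8 with
    a single scale factor, a 4x smaller representation."""
    scale = float(np.abs(weights).max()) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).standard_normal(1000).astype(np.float32)
q, scale = quantize_int8(w)
ratio = w.nbytes / q.nbytes               # 4.0: float32 -> int8
err = float(np.abs(dequantize(q, scale) - w).max())
```

The trade-off is the rounding error (at most half a quantisation step per weight), which is why the article reports both accuracy and latency for each quantisation mode.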


Sensors ◽  
2021 ◽  
Vol 22 (1) ◽  
pp. 29
Author(s):  
Manas Bazarbaev ◽  
Tserenpurev Chuluunsaikhan ◽  
Hyoseok Oh ◽  
Ga-Ae Ryu ◽  
Aziz Nasridinov ◽  
...  

Product quality is a major concern in manufacturing. In the metal processing industry, low-quality products must be remanufactured, which requires additional labor, money, and time. Therefore, user-controllable variables for machines and raw material compositions are key factors for ensuring product quality. In this study, we propose a method for generating the time-series working patterns of the control variables for metal-melting induction furnaces and continuous casting machines, thus improving product quality by aiding machine operators. We used an auxiliary classifier generative adversarial network (AC-GAN) model to generate time-series working patterns of two processes depending on product type and additional material data. To check accuracy, the difference between the generated time-series data of the model and the ground truth data was calculated. Specifically, the proposed model results were compared with those of other deep learning models: multilayer perceptron (MLP), convolutional neural network (CNN), long short-term memory (LSTM), and gated recurrent unit (GRU). It was demonstrated that the proposed model outperformed the other deep learning models. Moreover, the proposed method generated different time-series data for different inputs, whereas the other deep learning models generated the same time-series data.
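An AC-GAN generates different series for different inputs because its generator is conditioned on the class label, typically by concatenating a noise vector with a one-hot label vector (a standard AC-GAN construction, not a detail taken from this paper):

```python
import numpy as np

def generator_input(noise_dim, n_classes, class_id, rng):
    """AC-GAN generator input: random noise plus a one-hot class label, so
    different product types steer the generator to different patterns."""
    z = rng.standard_normal(noise_dim)
    onehot = np.zeros(n_classes)
    onehot[class_id] = 1.0
    return np.concatenate([z, onehot])

rng = np.random.default_rng(0)
vec = generator_input(noise_dim=64, n_classes=5, class_id=2, rng=rng)
```

The "auxiliary classifier" part is the discriminator's second head, which predicts the class of each sample; its loss pushes the generator to produce class-consistent (here, product-type-consistent) working patterns.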


2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Bader Alouffi ◽  
Abdullah Alharbi ◽  
Radhya Sahal ◽  
Hager Saleh

Fake news is challenging to detect because it mixes accurate and inaccurate information from reliable and unreliable sources. Social media is a data source that is not always trustworthy, especially during the COVID-19 outbreak, when fake news spread widely. The best way to deal with this is early detection. Accordingly, in this work we propose a hybrid deep learning model that uses a convolutional neural network (CNN) and long short-term memory (LSTM) to detect COVID-19 fake news. The proposed model consists of the following layers: an embedding layer, a convolutional layer, a pooling layer, an LSTM layer, a flatten layer, a dense layer, and an output layer. In the experiments, three COVID-19 fake news datasets are used to evaluate six machine learning models, two deep learning models, and our proposed model. The machine learning models are DT, KNN, LR, RF, SVM, and NB, while the deep learning models are CNN and LSTM. Four metrics are used to validate the results: accuracy, precision, recall, and F1-measure. The conducted experiments show that the proposed model outperforms the six machine learning models and the two deep learning models, and is therefore capable of detecting COVID-19 fake news effectively.
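The layer stack can be followed shape by shape. A NumPy walkthrough of the front of the pipeline (embedding, 1D convolution with ReLU, max pooling) with made-up sizes, since the paper's actual dimensions are not given in the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, embed_dim, seq_len, n_filters, width = 1000, 16, 50, 32, 5

E = rng.standard_normal((vocab, embed_dim))             # embedding table
tokens = rng.integers(0, vocab, seq_len)                # one tokenized headline

x = E[tokens]                                           # embedding layer: (50, 16)
K = rng.standard_normal((width, embed_dim, n_filters))  # conv kernels
conv = np.stack([np.maximum(                            # conv + ReLU: (46, 32)
    np.tensordot(x[i:i + width], K, axes=([0, 1], [0, 1])), 0)
    for i in range(seq_len - width + 1)])
pooled = conv.reshape(-1, 2, n_filters).max(axis=1)     # max pool, size 2: (23, 32)
# `pooled` would then feed the LSTM layer, followed by flatten, dense, and output.
```

The shrinking sequence axis (50 → 46 → 23) is the point: convolution and pooling compress the text into shorter, richer feature sequences before the LSTM models the remaining temporal structure.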


Symmetry ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 643
Author(s):  
Rania M. Ghoniem ◽  
Abeer D. Algarni ◽  
Basel Refky ◽  
Ahmed A. Ewees

Ovarian cancer (OC) is a common cause of mortality among women. Deep learning has recently shown better performance in predicting OC stages and subtypes. However, most state-of-the-art deep learning models employ single-modality data, which may yield poor performance due to insufficient representation of important OC characteristics. Furthermore, these models still lack optimization of their construction, making them computationally expensive to train and deploy. In this work, a hybrid evolutionary deep learning model using multi-modal data is proposed. The established multi-modal fusion framework combines a gene modality with a histopathological image modality. Based on the different states and forms of each modality, we set up a dedicated deep feature-extraction network for each: a predictive antlion-optimized long short-term memory (LSTM) model to process longitudinal gene data, and a predictive antlion-optimized convolutional neural network (CNN) model to process histopathology images. The topology of each customized feature network is set automatically by the antlion optimization algorithm to achieve better performance. The outputs of the two networks are then fused by weighted linear aggregation, and the deep fused features are finally used to predict the OC stage. A number of assessment indicators were used to compare the proposed model with nine other multi-modal fusion models constructed using distinct evolutionary algorithms, on a benchmark for OC and two benchmarks for breast and lung cancers. The results reveal that the proposed model is more precise and accurate in diagnosing OC and the other cancers.


Author(s):  
Kyungkoo Jun

Background & Objective: This paper proposes a Fourier-transform-inspired method to classify human activities from time-series sensor data. Methods: Our method begins by decomposing the 1D input signal into 2D patterns, motivated by the Fourier conversion. The decomposition is aided by a Long Short-Term Memory (LSTM) network, which captures the temporal dependency in the signal and produces encoded sequences. The sequences, once arranged into a 2D array, can represent fingerprints of the signals. The benefit of such a transformation is that we can exploit recent advances in deep learning models for image classification, such as the Convolutional Neural Network (CNN). Results: The proposed model is thus a combination of LSTM and CNN. We evaluate the model on two datasets. On the first, more standardized dataset, our model outperforms, or at least matches, previous works. For the second dataset, we devise schemes to generate training and testing data by varying the window size, the sliding size, and the labeling scheme. Conclusion: The evaluation results show that the accuracy is over 95% in some cases. We also analyze the effect of these parameters on performance.
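The window-size / sliding-size / labeling scheme above amounts to a segmentation routine over the labeled sensor stream. A minimal sketch (majority-vote labeling is one common choice; the paper explores several parameterizations, and these details are assumptions):

```python
import numpy as np

def sliding_windows(signal, labels, window, stride):
    """Cut a labeled 1D signal into overlapping windows; each window is
    labeled by majority vote over the per-sample labels it covers."""
    xs, ys = [], []
    for start in range(0, len(signal) - window + 1, stride):
        xs.append(signal[start:start + window])
        vals, counts = np.unique(labels[start:start + window],
                                 return_counts=True)
        ys.append(vals[counts.argmax()])     # majority-vote window label
    return np.stack(xs), np.array(ys)

sig = np.arange(10.0)
lab = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
X, y = sliding_windows(sig, lab, window=4, stride=2)   # 4 windows of length 4
```

Varying `window` and `stride` changes both the number of training examples and how often a window straddles an activity boundary, which is why these parameters affect the reported accuracy.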


Technologies ◽  
2021 ◽  
Vol 9 (1) ◽  
pp. 14
Author(s):  
James Dzisi Gadze ◽  
Akua Acheampomaa Bamfo-Asante ◽  
Justice Owusu Agyemang ◽  
Henry Nunoo-Mensah ◽  
Kwasi Adu-Boahen Opare

Software-Defined Networking (SDN) is a new paradigm that revolutionizes the idea of a software-driven network through the separation of control and data planes, addressing the problems of traditional network architecture. Nevertheless, this architecture is exposed to several security threats, e.g., the distributed denial-of-service (DDoS) attack, which is hard to contain in such software-based networks. The concept of a centralized controller makes the SDN a single point of attack as well as a single point of failure. In this paper, deep learning models, long short-term memory (LSTM) and the convolutional neural network (CNN), are investigated to illustrate their feasibility and efficiency for detecting and mitigating DDoS attacks. The paper focuses on TCP, UDP, and ICMP flood attacks that target the controller. The performance of the models was evaluated based on accuracy, recall, and true negative rate, and compared with classical machine learning models. We further provide details on the time taken to detect and mitigate an attack. Our results show that RNN-LSTM is a viable deep learning algorithm for the detection and mitigation of DDoS in the SDN controller. Our proposed model produced an accuracy of 89.63%, outperforming classical models such as SVM (86.85%) and Naive Bayes (82.61%). Although KNN, another classical model, outperformed our proposed model (achieving an accuracy of 99.4%), our model provides a good trade-off between precision and recall, which makes it suitable for DDoS classification. In addition, we found that the split ratio of the training and testing datasets affects the measured performance of a deep learning algorithm: the model achieved its best performance with a 70/30 split, compared with 80/20 and 60/40 split ratios.


Network ◽  
2021 ◽  
Vol 1 (1) ◽  
pp. 28-49
Author(s):  
Ehsan Ahvar ◽  
Shohreh Ahvar ◽  
Syed Mohsan Raza ◽  
Jose Manuel Sanchez Vilchez ◽  
Gyu Myoung Lee

In recent years, the number of objects connected to the internet has significantly increased, and this growth is transforming today's Internet of Things (IoT) into the massive IoT of the future. It is predicted that, in a few years, high communication and computation capacity will be required to meet the demands of massive IoT devices and applications requiring data sharing and processing. 5G and beyond mobile networks are expected to fulfill part of these requirements by providing data rates of up to terabits per second, and will be a key enabler for massive IoT and emerging mission-critical applications with strict delay constraints. On the other hand, the next generation of software-defined networking (SDN), together with emerging cloud-related technologies (e.g., fog and edge computing), can play an important role in supporting and implementing the above-mentioned applications. This paper sets out the potential opportunities and important challenges that must be addressed when considering options for using SDN in hybrid cloud-fog systems to support 5G and beyond-enabled applications.

