Survey of Machine Learning Algorithms to Detect Malware in Consumer Internet of Things Devices

2021 ◽  
Vol 30 (04) ◽  
pp. 2150020
Author(s):  
Luke Holbrook ◽  
Miltiadis Alamaniotis

With the increase in cyber-attacks on millions of Internet of Things (IoT) devices, the poor network security measures on those devices are the main source of the problem. This article studies the effectiveness of a number of available machine learning algorithms in detecting malware in consumer Internet of Things devices. In particular, the Support Vector Machine (SVM), Random Forest, and Deep Neural Network (DNN) algorithms are benchmarked on a set of test data and compared as tools for safeguarding IoT deployments. Test results on a set of four IoT devices showed that all three tested algorithms detect network anomalies with high accuracy. However, the deep neural network provides the highest coefficient of determination (R2) and is therefore identified as the most precise of the tested algorithms for securing IoT devices, based on the datasets examined.
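
To make the benchmarking workflow concrete, the following is a minimal Python sketch that compares the three algorithm families by R2, in the spirit of the study; the scikit-learn stand-ins, synthetic feature matrix, and hyperparameters are illustrative assumptions, not the authors' setup.

```python
# Minimal sketch: SVM, Random Forest, and a small neural network compared
# by coefficient of determination (R2). The IoT traffic features below are
# random placeholders, not the paper's data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))            # placeholder traffic features
y = X[:, 0] * 2 + rng.normal(size=1000)    # placeholder anomaly score

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "SVM": SVR(),
    "Random Forest": RandomForestRegressor(random_state=0),
    "DNN (MLP stand-in)": MLPRegressor(hidden_layer_sizes=(64, 64),
                                       max_iter=1000, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: R2 = {r2_score(y_te, model.predict(X_te)):.3f}")
```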

Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Hang Chen ◽  
Sulaiman Khan ◽  
Bo Kou ◽  
Shah Nazir ◽  
Wei Liu ◽  
...  

The emergence of Internet of Things (IoT)-enabled applications has inspired the world during the last few years, providing state-of-the-art, novel solutions to a variety of problems. This evolving field is mainly led by wireless sensor networks, radio frequency identification, and smart mobile technologies. Among others, the IoT plays a key role in the form of smart medical devices and wearables, which can collect varied and longitudinal patient-generated health data while also offering preliminary diagnosis options. To help patients through IoT-based solutions, experts exploit the capabilities of machine learning algorithms to provide efficient hemorrhage diagnosis. To reduce death rates and propose accurate treatment, this paper presents a smart IoT-based application that uses machine learning algorithms for human brain hemorrhage diagnosis. Using an intracranial dataset of computerized tomography (CT) scan images, a support vector machine and a feedforward neural network were applied for classification. Overall classification accuracies of 80.67% and 86.7% were obtained for the support vector machine and feedforward neural network, respectively. The analysis shows that the feedforward neural network outperforms the support vector machine in classifying intracranial images. The output of the classification tool gives information about the type of brain hemorrhage, which ultimately helps validate the expert's diagnosis and serves as a learning tool for trainee radiologists, minimizing the errors in the available systems.
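
A minimal sketch of the two-classifier comparison, assuming the CT images have already been reduced to feature vectors; the scikit-learn models, feature dimensions, and synthetic labels below are placeholders rather than the paper's pipeline.

```python
# Sketch of the comparison above: SVM vs. feedforward neural network on
# CT-derived feature vectors. Features and hemorrhage-type labels are
# synthetic placeholders; the intracranial dataset is not reproduced here.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 32))        # placeholder CT image features
y = rng.integers(0, 3, size=600)      # placeholder hemorrhage-type labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

svm = make_pipeline(StandardScaler(), SVC())
ffnn = make_pipeline(StandardScaler(),
                     MLPClassifier(hidden_layer_sizes=(128,), max_iter=500,
                                   random_state=1))
for name, clf in [("SVM", svm), ("FFNN", ffnn)]:
    clf.fit(X_tr, y_tr)
    print(f"{name} accuracy: {accuracy_score(y_te, clf.predict(X_te)):.3f}")
```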


Electronics ◽  
2020 ◽  
Vol 9 (3) ◽  
pp. 444 ◽  
Author(s):  
Valerio Morfino ◽  
Salvatore Rampone

In the field of Internet of Things (IoT) infrastructures, attack and anomaly detection are rising concerns. With the increased use of IoT infrastructure in every domain, threats and attacks against these infrastructures are growing proportionally. In this paper, the performances of several machine learning algorithms in identifying cyber-attacks (namely SYN-DOS attacks) against IoT systems are compared, both in terms of detection performance and in training/application times. We use supervised machine learning algorithms included in the MLlib library of Apache Spark, a fast and general engine for big data processing. We show the implementation details and the performance of those algorithms on public datasets, using a training set of up to 2 million instances. We adopt a Cloud environment, emphasizing the importance of scalability and elasticity of use. Results show that all the Spark algorithms used achieve very good identification accuracy (>99%). Overall, one of them, Random Forest, achieves an accuracy of 1. We also report a very short training time (23.22 sec for Decision Tree with 2 million rows). The experiments also show a very low application time (0.13 sec for more than 600,000 instances with Random Forest) using Apache Spark in the Cloud. Furthermore, the explicit model generated by Random Forest is very easy to implement in high- or low-level programming languages. In light of the results obtained, both in terms of computation times and identification performance, a hybrid approach for the detection of SYN-DOS cyber-attacks on IoT devices is proposed: the application of an explicit Random Forest model, implemented directly on the IoT device, along with a second-level analysis (training) performed in the Cloud.
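
Since the study explicitly names Apache Spark's MLlib, a condensed sketch of the Random Forest training step might look like the following; the CSV path, column names, and numTrees value are assumptions for illustration, not the paper's configuration.

```python
# Sketch of training a Random Forest SYN-DOS detector with Spark MLlib's
# DataFrame API. The dataset path, column names, and hyperparameters are
# hypothetical stand-ins for the public datasets used in the paper.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

spark = SparkSession.builder.appName("syn-dos-detection").getOrCreate()

# Hypothetical labeled traffic dataset (label: 1 = SYN-DOS, 0 = benign).
df = spark.read.csv("traffic_features.csv", header=True, inferSchema=True)
feature_cols = [c for c in df.columns if c != "label"]
df = VectorAssembler(inputCols=feature_cols, outputCol="features").transform(df)

train, test = df.randomSplit([0.8, 0.2], seed=42)
rf = RandomForestClassifier(labelCol="label", featuresCol="features",
                            numTrees=20)
model = rf.fit(train)

accuracy = MulticlassClassificationEvaluator(
    labelCol="label", metricName="accuracy").evaluate(model.transform(test))
print(f"Test accuracy: {accuracy:.4f}")
```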


2018 ◽  
Vol 8 (8) ◽  
pp. 1280 ◽  
Author(s):  
Yong Kim ◽  
Youngdoo Son ◽  
Wonjoon Kim ◽  
Byungki Jin ◽  
Myung Yun

Sitting on a chair in an awkward posture or sitting for a long period of time is a risk factor for musculoskeletal disorders. A postural habit, once formed, cannot be changed easily. It is important to form proper postural habits from childhood, as lumbar disease caused by improper posture during childhood is likely to recur. Thus, there is a need for a monitoring system that classifies children's sitting postures. The purpose of this paper is to develop a system for classifying the sitting postures of children using machine learning algorithms. The convolutional neural network (CNN) algorithm was used in addition to the conventional algorithms: Naïve Bayes classifier (NB), decision tree (DT), neural network (NN), multinomial logistic regression (MLR), and support vector machine (SVM). To collect data for classifying sitting postures, a sensing cushion was developed by mounting a pressure sensor mat (8 × 8) inside a children's chair seat cushion. Ten children participated, and sensor data were collected while each child held a static pose for each of the five prescribed postures. The accuracy of the CNN was found to be the highest compared with those of the other algorithms. It is expected that a comprehensive posture monitoring system can be established through future research on enhancing the classification algorithm and providing an effective feedback system.
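
A small Keras CNN over the 8 × 8 pressure maps could look like the sketch below; the architecture, synthetic training data, and epoch count are illustrative assumptions, not the authors' network.

```python
# Sketch of a compact CNN for classifying five sitting postures from an
# 8 x 8 pressure-sensor mat. Input data and labels are random placeholders.
import numpy as np
from tensorflow.keras import layers, models

n_postures = 5
X = np.random.rand(500, 8, 8, 1)                # placeholder pressure maps
y = np.random.randint(0, n_postures, size=500)  # placeholder posture labels

model = models.Sequential([
    layers.Input(shape=(8, 8, 1)),
    layers.Conv2D(16, kernel_size=3, padding="same", activation="relu"),
    layers.Conv2D(32, kernel_size=3, padding="same", activation="relu"),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(n_postures, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=10, validation_split=0.2)
```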


2020 ◽  
Vol 12 (11) ◽  
pp. 1838 ◽  
Author(s):  
Zhao Zhang ◽  
Paulo Flores ◽  
C. Igathinathane ◽  
Dayakar L. Naik ◽  
Ravi Kiran ◽  
...  

The current mainstream approach of using manual measurements and visual inspections for crop lodging detection is inefficient, time-consuming, and subjective. An innovative method for wheat lodging detection that can overcome or alleviate these shortcomings would be welcomed. This study proposed a systematic approach for wheat lodging detection in research plots (372 experimental plots), which consisted of using unmanned aerial systems (UAS) for aerial imagery acquisition, manual field evaluation, and machine learning algorithms to classify plots as lodged or non-lodged. UAS imagery was collected on three different dates (23 and 30 July 2019, and 8 August 2019) after lodging occurred. Traditional machine learning and deep learning were evaluated and compared in this study in terms of classification accuracy and standard deviation. For traditional machine learning, five types of features (i.e., gray level co-occurrence matrix, local binary pattern, Gabor, intensity, and Hu-moment) were extracted and fed into three traditional machine learning algorithms (i.e., random forest (RF), neural network, and support vector machine) for detecting lodged plots. For the datasets on each imagery collection date, the accuracies of the three algorithms were not significantly different from each other. For each of the three algorithms, accuracies on the first and last date datasets had the lowest and highest values, respectively. Incorporating standard deviation as a measure of performance robustness, RF was determined to be the most satisfactory. Regarding deep learning, three different convolutional neural networks (a simple convolutional neural network, VGG-16, and GoogLeNet) were tested. For each of the single-date datasets, GoogLeNet consistently had superior performance over the other two methods. Further comparisons between RF and GoogLeNet demonstrated that the detection accuracies of the two methods were not significantly different from each other (p > 0.05); hence, choosing either would not affect the final detection accuracy. However, considering that the average accuracy of GoogLeNet (93%) was higher than that of RF (91%), it is recommended to use GoogLeNet for wheat lodging detection. This research demonstrated that UAS RGB imagery, coupled with the GoogLeNet machine learning algorithm, can be a novel, reliable, objective, simple, low-cost, and effective (accuracy > 90%) tool for wheat lodging detection.
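
The traditional-ML branch (texture features plus Random Forest) could be sketched as follows; the GLCM properties, image sizes, and labels are synthetic placeholders, not the study's data or tuning.

```python
# Sketch of the texture-feature pipeline above: gray level co-occurrence
# matrix (GLCM) features fed to a Random Forest that labels plots as lodged
# or not. Plot images and labels are random placeholders.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
plots = rng.integers(0, 256, size=(60, 64, 64), dtype=np.uint8)  # plot images
labels = rng.integers(0, 2, size=60)          # lodged (1) or non-lodged (0)

def glcm_features(img):
    # One GLCM per plot image, summarized by four standard texture props.
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

X = np.array([glcm_features(img) for img in plots])
rf = RandomForestClassifier(n_estimators=200, random_state=3)
print("CV accuracy:", cross_val_score(rf, X, labels, cv=5).mean())
```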


Water ◽  
2017 ◽  
Vol 9 (2) ◽  
pp. 105 ◽  
Author(s):  
Francesco Granata ◽  
Stefano Papirio ◽  
Giovanni Esposito ◽  
Rudy Gargano ◽  
Giovanni De Marinis

Stormwater runoff is often contaminated by human activities. Stormwater discharge into water bodies significantly contributes to environmental pollution. The choice of suitable treatment technologies depends on the pollutant concentrations. Wastewater quality indicators such as biochemical oxygen demand (BOD5), chemical oxygen demand (COD), total suspended solids (TSS), and total dissolved solids (TDS) give a measure of the main pollutants. The aim of this study is to provide an indirect methodology for estimating the main wastewater quality indicators, based on some characteristics of the drainage basin. The catchment is treated as a black box: the physical processes of accumulation, washing, and transport of pollutants are not mathematically described. Two models deriving from studies on artificial intelligence have been used in this research: Support Vector Regression (SVR) and Regression Trees (RT). Both models showed robustness, reliability, and high generalization capability. However, with reference to the coefficient of determination (R2) and the root-mean-square error, Support Vector Regression showed better performance than Regression Trees in predicting TSS, TDS, and COD. As regards BOD5, the two models showed comparable performance. Therefore, the considered machine learning algorithms may be useful for estimating the values to be considered in sizing treatment units in the absence of direct measurements.
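
A minimal sketch of the two regressors applied to a single indicator (TSS, say) follows; the basin features and target values are synthetic stand-ins for the measured data.

```python
# Sketch of the black-box comparison above: Support Vector Regression vs.
# a regression tree predicting a water-quality indicator from basin
# characteristics. All data below are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 6))                  # placeholder basin features
y = X @ rng.normal(size=6) + rng.normal(scale=0.5, size=300)  # placeholder TSS

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=4)

models = {"SVR": make_pipeline(StandardScaler(), SVR()),
          "Regression Tree": DecisionTreeRegressor(random_state=4)}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    print(f"{name}: R2 = {r2_score(y_te, pred):.3f}, RMSE = {rmse:.3f}")
```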


PLoS ONE ◽  
2021 ◽  
Vol 16 (11) ◽  
pp. e0258788
Author(s):  
Sarra Ayouni ◽  
Fahima Hajjej ◽  
Mohamed Maddeh ◽  
Shaha Al-Otaibi

Educational research increasingly emphasizes the potential of student engagement and its impact on performance, retention, and persistence. This construct has been an important paradigm in higher education for many decades. However, evaluating and predicting a student's engagement level in an online environment remains a challenge. The purpose of this study is to propose an intelligent predictive system that predicts the student's engagement level and then provides the students with feedback to enhance their motivation and dedication. Three categories of students are defined depending on their engagement level (Not Engaged, Passively Engaged, and Actively Engaged). We applied three different machine-learning algorithms, namely Decision Tree, Support Vector Machine, and Artificial Neural Network, to students' activities recorded in Learning Management System reports. The results demonstrate that machine learning algorithms can predict the student's engagement level. In addition, according to the performance metrics of the different algorithms, the Artificial Neural Network has a higher accuracy rate (85%) than the Support Vector Machine (80%) and Decision Tree (75%) classification techniques. Based on these results, the intelligent predictive system sends feedback to the students and alerts the instructor once a student's engagement level decreases. The instructor can identify the students' difficulties during the course and motivate them through e-mail reminders, course messages, or scheduling an online meeting.
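
A sketch of the three-algorithm comparison on Learning Management System activity features; the scikit-learn classifiers and synthetic data below are stand-ins for the reported system, not its actual implementation.

```python
# Sketch of three-way engagement classification (Not / Passively / Actively
# Engaged) with the three algorithm families named above. LMS activity
# features and labels are random placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

LEVELS = ["Not Engaged", "Passively Engaged", "Actively Engaged"]
rng = np.random.default_rng(5)
X = rng.normal(size=(400, 10))       # placeholder LMS activity features
y = rng.integers(0, 3, size=400)     # placeholder engagement labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=5)

classifiers = {
    "Decision Tree": DecisionTreeClassifier(random_state=5),
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "ANN": make_pipeline(StandardScaler(),
                         MLPClassifier(hidden_layer_sizes=(32,),
                                       max_iter=500, random_state=5)),
}
for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, clf.predict(X_te))
    print(f"{name}: accuracy = {acc:.3f}, "
          f"first prediction = {LEVELS[clf.predict(X_te[:1])[0]]}")
```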


Author(s):  
Akshay Rajendra Naik ◽  
A. V. Deorankar ◽  
P. B. Ambhore

Rainfall prediction is useful for decision making in many fields, such as outdoor gaming, farming, traveling, and factory operations, among other activities. We studied various methods for rainfall prediction, including machine learning and neural networks. Various machine learning algorithms have been used in previous methods, such as naïve Bayes, support vector machines, random forests, decision trees, and ensemble learning methods. We used a deep neural network for rainfall prediction, with the Adam optimizer employed to set the model parameters; as a result, our method gives better results compared with other machine learning methods.
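
A minimal Keras sketch of a deep neural network trained with the Adam optimizer for rainfall regression; the layer sizes, learning rate, and data below are illustrative assumptions, not the authors' model.

```python
# Sketch of a DNN for rainfall prediction trained with Adam. Weather
# features and rainfall targets are random placeholders.
import numpy as np
from tensorflow.keras import layers, models, optimizers

rng = np.random.default_rng(6)
X = rng.normal(size=(1000, 8))    # placeholder weather features
y = rng.normal(size=(1000, 1))    # placeholder rainfall amounts

model = models.Sequential([
    layers.Input(shape=(8,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(1),               # regression output: predicted rainfall
])
model.compile(optimizer=optimizers.Adam(learning_rate=1e-3), loss="mse")
model.fit(X, y, epochs=20, batch_size=32, validation_split=0.2)
```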


Complexity ◽  
2019 ◽  
Vol 2019 ◽  
pp. 1-15
Author(s):  
Zeynep Hilal Kilimci ◽  
Aykut Güven ◽  
Mitat Uysal ◽  
Selim Akyokus

Nowadays, smart devices, as a part of daily life, collect data about their users with the help of sensors placed on them. Sensor data are usually physical data, but mobile applications collect more than physical data, such as device usage habits and personal interests. Collected data are usually classified as personal, but they contain valuable information about their users when analyzed and interpreted. One of the main purposes of personal data analysis is to make predictions about users. Collected data can be divided into two major categories: physical and behavioral data. Behavioral data are also referred to as neurophysical data. Physical and neurophysical parameters are collected as a part of this study. Physical data contain measurements of the users such as heartbeat, sleep quality, energy, and movement/mobility parameters. Neurophysical data contain keystroke patterns such as typing speed and typing errors. Users' emotional/mood states are also investigated: six emotion-related questions are asked to the users daily, and depending on the answers, users' emotional states are graded. Our aim is to show that there is a connection between users' physical/neurophysical parameters and their mood/emotional conditions. To prove our hypothesis, we collected and measured the physical and neurophysical parameters of 15 users for 1 year. The novelty of this work lies in the combined use of physical and neurophysical parameters. Another novelty is that the emotion classification task is performed by both conventional machine learning algorithms and deep learning models. For this purpose, a Feedforward Neural Network (FFNN), Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), and Long Short-Term Memory (LSTM) neural network are employed as deep learning methodologies. Multinomial Naïve Bayes (MNB), Support Vector Regression (SVR), Decision Tree (DT), Random Forest (RF), and a Decision Integration Strategy (DIS) are evaluated as conventional machine learning algorithms. To the best of our knowledge, this is the first attempt to analyze the neurophysical conditions of users by evaluating deep learning models for mood analysis and enriching physical characteristics with neurophysical parameters. Experiment results demonstrate that the use of deep learning methodologies and the combination of both physical and neurophysical parameters enhance the classification success of the system in interpreting the mood of the users. A wide range of comparative and extensive experiments shows that the proposed model exhibits noteworthy results compared to state-of-the-art studies.
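
As one concrete instance of the deep models listed, a Keras LSTM over daily sequences of concatenated physical and neurophysical features might be sketched as follows; the window length, feature count, and number of mood classes are assumptions, not the study's configuration.

```python
# Sketch of an LSTM mood classifier over daily sequences that concatenate
# physical and neurophysical parameters. All data are random placeholders.
import numpy as np
from tensorflow.keras import layers, models

n_days, n_features, n_moods = 7, 12, 3    # assumed window, features, classes
rng = np.random.default_rng(7)
X = rng.normal(size=(500, n_days, n_features))  # placeholder daily sequences
y = rng.integers(0, n_moods, size=500)          # placeholder mood labels

model = models.Sequential([
    layers.Input(shape=(n_days, n_features)),
    layers.LSTM(32),
    layers.Dense(n_moods, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=15, validation_split=0.2)
```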


Symmetry ◽  
2021 ◽  
Vol 13 (1) ◽  
pp. 71
Author(s):  
Sayeed Rushd ◽  
Noor Hafsa ◽  
Majdi Al-Faiad ◽  
Md Arifuzzaman

The traditional procedure for predicting the settling velocity of a spherical particle is inconvenient, as it involves iterations, complex correlations, and an unpredictable degree of uncertainty. These limitations can be addressed efficiently with artificial intelligence-based machine-learning algorithms (MLAs). The limited number of isolated studies conducted to date have been restricted to specific fluid rheologies, a particular MLA, and insufficient data. In the current study, the generalized application of ML was comprehensively investigated for Newtonian fluids and three varieties of non-Newtonian fluids: Power-law, Bingham, and Herschel-Bulkley. A diverse set of nine MLAs were trained and tested using a large dataset of 967 samples. The ranges of generalized particle Reynolds number (ReG) and drag coefficient (CD) for the dataset were 10^-3 < ReG < 10^4 and 10^-1 < CD < 10^5, respectively. The performances of the models were statistically evaluated using the coefficient of determination (R2), root-mean-square error (RMSE), mean-squared error (MSE), and mean-absolute error (MAE). The support vector regression with a polynomial kernel demonstrated the optimum performance, with R2 = 0.92, RMSE = 0.066, MSE = 0.0044, and MAE = 0.044. Its generalization capability was validated using the ten-fold cross-validation technique, a leave-one-feature-out experiment, and leave-one-dataset-out validation. The outcome of the current investigation is a generalized approach to modeling the settling velocity.
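
A sketch of the best-performing configuration (SVR with a polynomial kernel) checked with ten-fold cross-validation; the features, targets, and hyperparameters below are placeholders, though the sample count matches the 967 reported.

```python
# Sketch of polynomial-kernel SVR with ten-fold cross-validation. Particle/
# fluid descriptors and targets are random placeholders, not the dataset.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score, KFold

rng = np.random.default_rng(8)
X = rng.normal(size=(967, 5))     # placeholder particle/fluid descriptors
y = rng.normal(size=967)          # placeholder target (e.g., log-scaled CD)

svr_poly = make_pipeline(StandardScaler(),
                         SVR(kernel="poly", degree=3, C=10.0))
cv = KFold(n_splits=10, shuffle=True, random_state=8)
scores = cross_val_score(svr_poly, X, y, cv=cv, scoring="r2")
print(f"10-fold CV R2: {scores.mean():.3f} +/- {scores.std():.3f}")
```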

