Identification of Network Traffic over IoT Platforms

2021 ◽  
Vol 35 (4) ◽  
pp. 349-357
Author(s):  
Shilpa P. Khedkar ◽  
Aroul Canessane Ramalingam

The Internet of Things (IoT) is a rising infrastructure of the 21st century. The classification of traffic over IoT networks has attained significant importance due to the rapid growth of users and devices. It is the need of the hour to isolate normal traffic from malicious traffic and to route the normal traffic to the proper destination to satisfy the QoS requirements of IoT users. Malicious traffic can be detected by continuously monitoring traffic for suspicious links, files, connections created and received, unrecognised protocol/port numbers, and suspicious destination/source IP combinations. A proficient classification mechanism in an IoT environment should be capable of classifying heavy traffic quickly, deflecting malevolent traffic on time, and transmitting benign traffic to the designated nodes to serve the needs of users. In this work, the AdaBoost and XGBoost machine learning algorithms and a deep neural network approach are proposed to separate IoT traffic, which eventually enhances the throughput of IoT networks and reduces congestion over IoT channels. The experimental results indicate that the deep learning algorithm achieves higher accuracy than the machine learning algorithms.
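As a rough illustration of the approach this abstract describes, the sketch below trains AdaBoost and XGBoost on synthetic stand-ins for flow-level traffic features (ports, packet sizes, inter-arrival times); the feature set, data, and hyperparameters are assumptions, not the authors' setup, and the xgboost package must be installed separately.

# Minimal sketch: benign-vs-malicious traffic classification with
# AdaBoost and XGBoost. Synthetic data stands in for real flow features.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier  # assumes xgboost is installed

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for name, clf in [("AdaBoost", AdaBoostClassifier(n_estimators=100)),
                  ("XGBoost", XGBClassifier(n_estimators=100, eval_metric="logloss"))]:
    clf.fit(X_train, y_train)
    print(name, "accuracy:", accuracy_score(y_test, clf.predict(X_test)))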

2019 ◽  
Vol 2019 ◽  
pp. 1-9
Author(s):  
Sheng Huang ◽  
Xiaofei Fan ◽  
Lei Sun ◽  
Yanlu Shen ◽  
Xuesong Suo

Traditionally, the classification of seed defects has relied mainly on color, shape, and texture characteristics. This method requires repeated extraction of a large amount of feature information, which is not used efficiently in detection. In recent years, deep learning has performed well in the field of image recognition. We introduced convolutional neural networks (CNNs) and transfer learning into the quality classification of seeds and compared them with traditional machine learning algorithms. Experiments showed that the deep learning algorithm was significantly better than the machine learning algorithm, with an accuracy of 95% (GoogLeNet) vs. 79.2% (SURF+SVM). We used three classifiers in GoogLeNet to demonstrate that network accuracy increases as the depth of the network increases. We used visualization techniques to obtain the feature map of each layer of the network in CNNs and used heat maps to represent the probability distribution of the inference results. As an end-to-end network, CNNs can be easily applied to automated seed manufacturing.
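A hedged sketch of the transfer-learning setup described here, using torchvision's ImageNet-pretrained GoogLeNet; the number of seed classes and the choice to freeze the backbone are illustrative assumptions rather than details taken from the paper.

# Transfer learning with a pretrained GoogLeNet: freeze the backbone,
# replace the final layer, and train only the new classification head.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 4  # assumed number of seed-quality categories
model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)

for p in model.parameters():          # freeze the pretrained feature extractor
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()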


Author(s):  
Zouiten Mohammed ◽  
Chaaouan Hanae ◽  
Setti Larbi

Forest fires have caused considerable losses to ecologies, societies, and economies worldwide. To minimize these losses and reduce forest fires, modeling and predicting the occurrence of forest fires are meaningful because they can support forest fire prevention and management. In recent years, the convolutional neural network (CNN) has become an important state-of-the-art deep learning algorithm, and its implementation has enriched many fields. Therefore, a competitive spatial prediction model for automatic early detection of wild forest fires using machine learning algorithms can be proposed. This model can help researchers to predict forest fires and identify risk zones. A system applying machine learning algorithms to geodata can notify interested parties and authorities in real time by issuing alerts and presenting results on maps built from geographical processing, allowing a more efficient analysis of the situation. This research extends the application of machine learning algorithms for early forest fire prediction to detection and representation in geographic information system (GIS) maps.
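To make the prediction step concrete, here is a minimal sketch of a spatial fire-risk classifier in the spirit of the abstract; the per-cell features (temperature, humidity, wind, NDVI), the toy labels, and the random data are all assumptions, not the authors' geodata.

# Per-map-cell fire-risk scores from a random forest; the probabilities
# could then be joined to GIS polygons for map display and alerts.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
cells = rng.random((1000, 4))   # rows: [temperature, humidity, wind, ndvi]
fire_history = (cells[:, 0] > 0.7).astype(int)  # toy label: hot cells burned

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(cells, fire_history)

risk = model.predict_proba(cells)[:, 1]  # per-cell fire probability
print("cells flagged high-risk:", int((risk > 0.8).sum()))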


2021 ◽  
Vol 12 ◽  
Author(s):  
Suk-Young Kim ◽  
Taesung Park ◽  
Kwonyoung Kim ◽  
Jihoon Oh ◽  
Yoonjae Park ◽  
...  

Purpose: The number of patients with alcohol-related problems is steadily increasing. Large-scale surveys of alcohol-related problems have been conducted. However, studies that predict hazardous drinkers and identify which factors contribute to the prediction are limited. Thus, the purpose of this study was to predict hazardous drinkers and the severity of alcohol-related problems of patients using a deep learning algorithm based on large-scale survey data.

Materials and Methods: Datasets from the National Health and Nutrition Examination Survey of South Korea (K-NHANES), a nationally representative survey of the entire South Korean population, were used to train deep learning and conventional machine learning algorithms. Datasets from 69,187 and 45,672 participants were used to predict hazardous drinkers and the severity of alcohol-related problems, respectively. Based on the degree of contribution of each variable to the deep learning model, it was possible to determine which variables contributed significantly to the prediction of hazardous drinkers.

Results: Deep learning showed higher performance than conventional machine learning algorithms. It predicted hazardous drinkers with an AUC (area under the receiver operating characteristic curve) of 0.870 (logistic regression: 0.858, linear SVM: 0.849, random forest classifier: 0.810, k-nearest neighbors: 0.740). Among 325 variables for predicting hazardous drinkers, energy intake was the factor showing the greatest contribution to the prediction, followed by carbohydrate intake. Participants were classified into Zone I, Zone II, Zone III, and Zone IV based on the degree of alcohol-related problems, with AUCs of 0.881, 0.774, 0.853, and 0.879, respectively.

Conclusion: Hazardous drinking groups could be effectively predicted and individuals could be classified according to the degree of alcohol-related problems using a deep learning algorithm. This algorithm could be used to screen people who need treatment for alcohol-related problems among the general population or hospital visitors.
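The model comparison reported above can be sketched as follows; synthetic data stands in for the K-NHANES survey variables, and the small MLP is only a stand-in for the paper's deep learning model.

# Compare a neural network against logistic regression by AUC.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=10000, n_features=50, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

models = [("MLP (stand-in for deep learning)",
           MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=300, random_state=1)),
          ("logistic regression", LogisticRegression(max_iter=1000))]
for name, clf in models:
    clf.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")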


Author(s):  
Shilpa P. Khedkar, et al.

The Internet of Things (IoT) is emerging as a new infrastructure of the 21st century. With the advent of cloud computing and the evolution of IoT, the classification of traffic over IoT networks has attained significant importance due to the rapid growth of users and devices. It is the need of the hour to isolate benign traffic from malevolent traffic and to channelise the normal traffic to its intended destination to satisfy the QoS requirements of IoT users. A proficient classification mechanism in an IoT environment should be capable of classifying heavy traffic quickly, deflecting malevolent traffic on time, and transmitting benign traffic to the designated nodes to serve the needs of users. In this manuscript, machine learning and deep neural network-based approaches are proposed for segregating IoT traffic, which eventually enhances the throughput of IoT networks and reduces congestion over IoT channels. This paper also provides insights into future research endeavors to channelise normal traffic and to handle malicious traffic.
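Complementing the machine learning sketch given for the companion abstract above, a deep neural network for the same benign/malicious separation might look like the following; the layer sizes, feature count, and random data are illustrative assumptions.

# A small PyTorch DNN for two-class traffic separation, with one toy
# training step to show the shape of the loop.
import torch
import torch.nn as nn

n_features = 20  # assumed number of flow-level features
dnn = nn.Sequential(
    nn.Linear(n_features, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 2),            # benign vs. malicious logits
)
optimizer = torch.optim.Adam(dnn.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(128, n_features)   # toy batch of flows
y = torch.randint(0, 2, (128,))    # toy labels
optimizer.zero_grad()
loss = loss_fn(dnn(x), y)
loss.backward()
optimizer.step()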


Computers ◽  
2021 ◽  
Vol 10 (9) ◽  
pp. 113
Author(s):  
James Coe ◽  
Mustafa Atay

This research aims to evaluate the impact of race on facial recognition across two types of algorithms. We give a general insight into facial recognition and discuss four problems related to it. We review our system design, development, and architecture, and give an in-depth evaluation plan for each type of algorithm and dataset, along with a look into the software and its architecture. We explain the results and findings of our experimentation in detail and provide analysis for both the machine learning and the deep learning algorithms. Concluding the investigation, we compare the two kinds of algorithms on accuracy, metrics, miss rates, and performance to observe which algorithms mitigate racial bias the most. We evaluate racial bias across five machine learning algorithms and three deep learning algorithms using racially imbalanced and balanced datasets. Comparing the accuracy and miss rates of all tested algorithms, we report that SVC is the superior machine learning algorithm and VGG16 the best deep learning algorithm in our experimental study. Our findings show that VGG16 mitigates bias the most, and that all our deep learning algorithms outperformed their machine learning counterparts.
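The kind of per-group comparison the study reports can be sketched as below; the synthetic 128-dimensional vectors stand in for face embeddings, and the binary group label is a toy placeholder for the demographic attribute, so none of this reflects the authors' datasets.

# Train an SVC and compare miss rates across two groups; the gap between
# the groups is the bias signal of interest.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=4000, n_features=128, random_state=0)
group = np.random.default_rng(0).integers(0, 2, size=len(y))  # toy demographic label

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, group, random_state=0)
pred = SVC().fit(X_tr, y_tr).predict(X_te)

for g in (0, 1):
    mask = g_te == g
    print(f"group {g}: miss rate = {(pred[mask] != y_te[mask]).mean():.3f}")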


2020 ◽  
pp. 1-11
Author(s):  
Jie Liu ◽  
Lin Lin ◽  
Xiufang Liang

An online English teaching system places certain requirements on its intelligent scoring component, and the most difficult stage of intelligent scoring in English testing is scoring the English composition itself. To improve the intelligence of English composition scoring, this study builds on machine learning algorithms combined with intelligent image recognition technology, proposing an improved MSER-based character candidate region extraction algorithm and a convolutional neural network-based pseudo-character region filtering algorithm. In addition, to verify whether the proposed algorithm model meets the requirements of the task, that is, to verify its feasibility, the performance of the model is analyzed through designed experiments. Moreover, the basic conditions for composition scoring are input into the model as constraints. The results show that the proposed algorithm has practical value and can be applied to English assessment and online homework evaluation systems.
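The MSER stage can be sketched with OpenCV as follows; the synthetic test image and the box-shape filter are assumptions, and the paper's CNN-based pseudo-character filtering stage is only noted in a comment.

# MSER-based character candidate region extraction on a synthetic page.
import cv2
import numpy as np

page = np.full((200, 600), 255, dtype=np.uint8)  # white canvas as a stand-in page
cv2.putText(page, "An example essay line", (10, 100),
            cv2.FONT_HERSHEY_SIMPLEX, 1.5, 0, 3)

mser = cv2.MSER_create()
regions, bboxes = mser.detectRegions(page)

# Keep roughly character-shaped boxes; the proposed CNN stage would then
# filter out the remaining pseudo-character regions.
candidates = [(x, y, w, h) for (x, y, w, h) in bboxes if 0.1 < w / max(h, 1) < 3]
print(len(candidates), "candidate character regions")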


Telecom IT ◽  
2019 ◽  
Vol 7 (3) ◽  
pp. 50-55
Author(s):  
D. Saharov ◽  
D. Kozlov

The article deals with the CoAP protocol, which regulates the transmission and reception of information traffic by terminal devices in IoT networks. The article describes a model for detecting abnormal traffic in 5G/IoT networks using machine learning algorithms, as well as the main methods for solving this problem. The relevance of the article stems from the widespread adoption of the Internet of Things and the upcoming upgrade of mobile networks to the 5G generation.
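One common way to realize such a detector is an unsupervised anomaly model over per-exchange features; the sketch below uses an isolation forest, with feature choices (payload size, inter-arrival time, retransmissions) and data that are assumptions rather than the article's model.

# Flag abnormal CoAP exchanges with an IsolationForest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# One row per CoAP exchange: [payload_bytes, inter_arrival_s, retransmissions]
normal = rng.normal([80, 1.0, 0.1], [10, 0.2, 0.05], size=(500, 3))
burst = rng.normal([300, 0.01, 2.0], [50, 0.005, 0.5], size=(10, 3))  # toy attack

detector = IsolationForest(contamination=0.02, random_state=0)
labels = detector.fit_predict(np.vstack([normal, burst]))  # -1 marks anomalies
print("flagged:", int((labels == -1).sum()), "of", len(labels), "exchanges")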


2020 ◽  
Vol 98 (Supplement_4) ◽  
pp. 126-127
Author(s):  
Lucas S Lopes ◽  
Christine F Baes ◽  
Dan Tulpan ◽  
Luis Artur Loyola Chardulo ◽  
Otavio Machado Neto ◽  
...  

Abstract The aim of this project is to compare some of the state-of-the-art machine learning algorithms on the classification of steers finished in feedlots based on performance, carcass, and meat quality traits. The precise classification of animals allows for fast, real-time decision making in the animal food industry, such as culling or retention of herd animals. Beef production presents high variability in its numerous carcass and beef quality traits. Machine learning algorithms and software provide an opportunity to evaluate the interactions between traits to better classify animals. Four different treatment levels of wet distiller's grain were applied to 97 Angus-Nellore animals and used as features for the classification problem. The C4.5 decision tree, Naïve Bayes (NB), Random Forest (RF), and Multilayer Perceptron (MLP) artificial neural network algorithms were used to predict and classify the animals based on recorded trait measurements, which include initial and final weights, shear force, and meat color. The top-performing classifier was the C4.5 decision tree algorithm, with a classification accuracy of 96.90%, while the RF, MLP, and NB classifiers had accuracies of 55.67%, 39.17%, and 29.89%, respectively. We observed that the final decision tree model constructed with C4.5 selected only the dry matter intake (DMI) feature as a differentiator. When DMI was removed, no other feature or combination of features was sufficiently strong to provide good prediction accuracies for any of the classifiers. In a follow-up study on a significantly larger sample size, we plan to investigate the reasons behind DMI being a more relevant parameter than the other measurements.
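For readers unfamiliar with the method, a minimal stand-in for the C4.5 classification is sketched below using scikit-learn's CART tree with an entropy criterion (close in spirit to C4.5 but not identical); the synthetic data merely mimics the scale of 97 animals with a handful of trait features.

# Cross-validated decision tree on a 97-sample, 4-class toy problem.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=97, n_features=6, n_informative=3,
                           n_classes=4, random_state=0)

tree = DecisionTreeClassifier(criterion="entropy", random_state=0)
print("mean CV accuracy:", cross_val_score(tree, X, y, cv=5).mean())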


2021 ◽  
Vol 11 (11) ◽  
pp. 5230
Author(s):  
Isabel Santiago ◽  
Jorge Luis Esquivel-Martin ◽  
David Trillo-Montero ◽  
Rafael Jesús Real-Calvo ◽  
Víctor Pallarés-López

In this work, daily irradiance profiles registered over nine years at a photovoltaic installation in the south of Spain, with a sampling frequency of 5 min, were automatically classified, and the operation of the installation's elements on each type of day was then analysed. The classification was based on the total daily irradiance values and the fluctuations of this parameter throughout the day. The irradiance profiles were grouped into nine different categories using unsupervised machine learning algorithms for clustering, implemented in Python. It was found that the behaviour of both the modules and the inverter was influenced by the type of day obtained: the inverter worked with a better average efficiency on days with higher irradiance and lower fluctuations, whereas the modules worked with better average efficiency on days with irradiance fluctuations than on clear sky days. This behaviour of the modules may be due to the presence, on days with passing clouds, of the phenomenon known as cloud enhancement, in which reflections of radiation on the edges of clouds can push irradiance at certain moments above the values that occur on clear sky days without passing clouds. The better module efficiency then follows from the higher energy generated during these irradiance peaks and from the lower temperatures the modules reach in the shaded areas created by the clouds, which reduces their temperature losses.
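The clustering step lends itself to a short sketch: daily profiles sampled every 5 min (288 points per day) grouped into nine categories. K-means is used here as one plausible unsupervised choice; the abstract does not name the specific algorithm, and the synthetic profiles are placeholders for the real nine-year dataset.

# Cluster synthetic daily irradiance profiles into nine day types.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
days, samples_per_day = 365, 288            # 5-min sampling
t = np.linspace(0, 1, samples_per_day)
clear_sky = np.sin(np.pi * t) ** 2          # idealized bell-shaped day
profiles = (clear_sky * rng.uniform(0.3, 1.0, (days, 1))
            + rng.normal(0, 0.05, (days, samples_per_day)))  # toy fluctuations

kmeans = KMeans(n_clusters=9, n_init=10, random_state=0).fit(profiles)
print("days per cluster:", np.bincount(kmeans.labels_))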

