A Review on Spam Detection Techniques for IoT Devices

Author(s):  
Nurussabah Mohammad Fahim

Abstract: The number of Internet of Things (IoT) devices in smart homes is rapidly increasing, generating massive volumes of data that are mostly transmitted over wireless communication channels. Beyond its sheer volume, this data varies in quality, in velocity over time and location, and in modality. At the same time, IoT devices are vulnerable to a range of dangers, such as cyber-attacks, changing network conditions, and data leakage, and the constrained properties of IoT nodes make current solutions inadequate to cover the entire security range of IoT networks. Machine learning algorithms can help in this situation by detecting irregularities in the data, thereby improving the security of IoT systems. The methods reviewed are aimed at the data anomalies that arise in smart IoT environments in general.
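
As a loose illustration of the kind of data-driven anomaly detection the abstract alludes to, the sketch below trains an Isolation Forest on benign IoT traffic features and flags outlying flows. It is a minimal, hypothetical example using scikit-learn; the feature set, threshold, and data are assumptions, not taken from the reviewed work.

```python
# Minimal sketch: unsupervised anomaly detection on IoT traffic features.
# Hypothetical feature set; not the exact pipeline described in the review.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Assumed features per flow: [packets/sec, mean packet size, distinct destination ports]
benign = rng.normal(loc=[50, 500, 3], scale=[10, 80, 1], size=(1000, 3))
suspect = rng.normal(loc=[400, 120, 40], scale=[50, 30, 5], size=(20, 3))  # e.g., scanning/flooding

# Fit on benign traffic only, then score unseen flows.
detector = IsolationForest(n_estimators=100, contamination=0.02, random_state=0)
detector.fit(benign)

scores = detector.predict(np.vstack([benign[:5], suspect[:5]]))  # +1 = normal, -1 = anomaly
print(scores)
```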

Author(s):  
Kamal Alieyan ◽  
Ammar Almomani ◽  
Rosni Abdullah ◽  
Badr Almutairi ◽  
Mohammad Alauthman

In today's internet world, the internet of things (IoT) is becoming the most significant and rapidly developing technology. The primary goal behind the IoT is to enable a more secure existence and to better manage risks at various levels of daily life. With the arrival of IoT botnets, however, the perception of IoT products has shifted from an enabler of enhanced living to an internet of vulnerabilities for cybercriminals. Among the many types of malware, botnets are considered a particularly serious risk that frequently appears in cybercrime and cyber-attacks. A botnet performs predefined jobs in an automated fashion. Such attacks range from phishing against critical targets and abuse of file-sharing channels to DDoS attacks. IoT botnets expose two distinct problems: first, most IoT devices are easily accessible on the public internet; second, in the architecture of most IoT units, security is usually an afterthought. This chapter discusses IoT, botnets in IoT, and the various botnet detection techniques available for IoT.
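
The chapter surveys detection techniques rather than prescribing a single one; purely as an illustration, the snippet below applies a naive flow-level fan-out heuristic (distinct destinations contacted per source within a time window) of the kind sometimes used as a first-pass signal for botnet-like scanning or flooding behaviour. The field names and threshold are assumptions.

```python
# Illustrative sketch only: a naive fan-out heuristic over flow records.
# Field names and the threshold are assumptions, not taken from the chapter.
from collections import defaultdict

FANOUT_THRESHOLD = 100  # distinct destinations per source in one window (assumed)

def suspicious_sources(flows):
    """flows: iterable of (src_ip, dst_ip) tuples observed in one time window."""
    fanout = defaultdict(set)
    for src, dst in flows:
        fanout[src].add(dst)
    return {src for src, dsts in fanout.items() if len(dsts) >= FANOUT_THRESHOLD}

# Example: one chatty source contacting many destinations looks bot-like.
window = [("10.0.0.5", f"192.168.1.{i}") for i in range(150)] + [("10.0.0.7", "192.168.1.1")]
print(suspicious_sources(window))  # {'10.0.0.5'}
```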


Author(s):  
Chandramohan Dhasarathan ◽  
Shanmugam M. ◽  
Shailesh Pancham Khapre ◽  
Alok Kumar Shukla ◽  
Achyut Shankar

With the development of wireless communication in the information-technology era, data collected and transferred from unmanned systems or devices can be monitored by any application while those devices are online. The density and constant availability of countless wireless devices sharing a medium make it easy for unwanted users to interrupt an information flow, which would lead to data loss and security breaches. Many traditional algorithms contribute effectively to cryptography-based encryption to ensure the security of user data. IoT devices, constrained by limited transmission power, have to communicate with the base station, and the data collected from the zones requires optimal transmission power. There is a need for a machine learning-based or optimization algorithm to maximize data transfer in a secure and safe transmission.
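
The abstract calls for an optimization algorithm that maximizes data transfer under transmission-power constraints; as one classical point of reference (not the method this work proposes), the sketch below implements water-filling power allocation across channels under a total power budget. The channel gains and power budget are made-up values.

```python
# Illustrative water-filling power allocation across channels (a classical
# optimization baseline, not the algorithm proposed in this work).
import numpy as np

def water_filling(gains, total_power, noise=1.0):
    """Allocate power to channels with given gains to maximize the sum-rate."""
    inv = noise / np.asarray(gains, dtype=float)   # per-channel "floor"
    order = np.argsort(inv)
    inv_sorted = inv[order]
    alloc = np.zeros_like(inv_sorted)
    for k in range(len(inv_sorted), 0, -1):
        level = (total_power + inv_sorted[:k].sum()) / k   # candidate water level
        if level > inv_sorted[k - 1]:                      # all k channels stay above their floor
            alloc[:k] = level - inv_sorted[:k]
            break
    out = np.zeros_like(alloc)
    out[order] = alloc
    return out

gains = [2.0, 1.0, 0.2]
p = water_filling(gains, total_power=3.0)
print(p, p.sum())  # per-channel powers, summing to the budget
```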


2021 ◽  
Vol 30 (04) ◽  
pp. 2150020
Author(s):  
Luke Holbrook ◽  
Miltiadis Alamaniotis

With cyber-attacks increasing against millions of Internet of Things (IoT) devices, the poor network security measures on those devices are the main source of the problem. This article studies the effectiveness of several available machine learning algorithms in detecting malware on consumer IoT devices. In particular, the Support Vector Machine (SVM), Random Forest, and Deep Neural Network (DNN) algorithms are benchmarked on a common set of test data and compared as tools for safeguarding IoT security deployments. Test results on a set of four IoT devices show that all three tested algorithms detect the network anomalies with high accuracy. However, the deep neural network provides the highest coefficient of determination R2 and is therefore identified as the most precise of the tested algorithms for securing IoT devices, based on the data sets we have examined.
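
The abstract does not give the data pipeline; the following is a minimal, hypothetical sketch of how such a three-way comparison of SVM, Random Forest, and a small neural network could be set up with scikit-learn on a labelled traffic data set. The synthetic data stands in for the IoT traffic features, and plain accuracy is reported here for brevity, whereas the article uses the coefficient of determination R2 as its precision measure.

```python
# Hypothetical benchmark skeleton for comparing SVM, Random Forest, and a small
# neural network on labelled IoT traffic data (X: features, y: 0/1 labels).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)  # stand-in data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "SVM": SVC(kernel="rbf"),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "Neural network": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: accuracy = {model.score(X_te, y_te):.3f}")
```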


Network ◽  
2021 ◽  
Vol 1 (1) ◽  
pp. 28-49
Author(s):  
Ehsan Ahvar ◽  
Shohreh Ahvar ◽  
Syed Mohsan Raza ◽  
Jose Manuel Sanchez Vilchez ◽  
Gyu Myoung Lee

In recent years, the number of objects connected to the internet has increased significantly, transforming today's Internet of Things (IoT) into the massive IoT of the future. It is predicted that, within a few years, high communication and computation capacity will be required to meet the demands of massive IoT devices and applications that require data sharing and processing. 5G and beyond mobile networks are expected to fulfill part of these requirements by providing data rates of up to terabits per second, acting as a key enabler for massive IoT and emerging mission-critical applications with strict delay constraints. In addition, the next generation of software-defined networking (SDN), together with emerging cloud-related technologies (e.g., fog and edge computing), can play an important role in supporting and implementing such applications. This paper sets out the potential opportunities and important challenges that must be addressed when considering options for using SDN in hybrid cloud-fog systems to support 5G and beyond-enabled applications.


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1598
Author(s):  
Sigurd Frej Joel Jørgensen Ankergård ◽  
Edlira Dushku ◽  
Nicola Dragoni

The Internet of Things (IoT) ecosystem comprises billions of heterogeneous Internet-connected devices which are revolutionizing many domains, such as healthcare, transportation, and smart cities, to mention only a few. Along with the unprecedented new opportunities, the IoT revolution is creating an enormous attack surface for potentially sophisticated cyber-attacks. In this context, Remote Attestation (RA) has gained wide interest as an important security technique for remotely detecting adversarial presence and assuring the legitimate state of an IoT device. While many RA approaches proposed in the literature make different assumptions regarding the architecture of IoT devices and adversary capabilities, most typical RA schemes rely on a minimal Root of Trust by leveraging hardware that guarantees code and memory isolation. However, the presence of specialized hardware is not always a realistic assumption, for instance, in the context of legacy and resource-constrained IoT devices. In this paper, we survey and analyze existing software-based RA schemes (i.e., RA schemes not relying on specialized hardware components) through the lens of IoT. In particular, we provide a comprehensive overview of their design characteristics and security capabilities, analyzing their advantages and disadvantages. Finally, we discuss the opportunities that these RA schemes bring in attesting legacy and resource-constrained IoT devices, along with open research issues.
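
As a toy illustration of the software-based remote attestation idea surveyed here, where a verifier challenges a prover to compute a fresh measurement over its memory without relying on specialized hardware, the sketch below uses a keyed hash over a firmware image bound to a verifier-supplied nonce. Real schemes must additionally bound the response time and defend against memory-copy and roving-malware attacks; this is a simplified sketch under assumed keys and images.

```python
# Toy sketch of software-based remote attestation (challenge-response).
# Real schemes add strict timing checks and anti-roving-malware defences.
import hashlib
import hmac
import os

SHARED_KEY = b"pre-shared attestation key"  # assumed to be provisioned out of band

def prover_response(firmware: bytes, nonce: bytes) -> bytes:
    """Device side: keyed digest over its firmware, bound to the fresh nonce."""
    return hmac.new(SHARED_KEY, nonce + firmware, hashlib.sha256).digest()

def verifier_check(reported: bytes, known_good_firmware: bytes, nonce: bytes) -> bool:
    """Verifier side: recompute the expected digest from a known-good image."""
    expected = hmac.new(SHARED_KEY, nonce + known_good_firmware, hashlib.sha256).digest()
    return hmac.compare_digest(reported, expected)

firmware = b"\x90" * 1024          # stand-in firmware image
nonce = os.urandom(16)             # fresh challenge prevents replayed responses
print(verifier_check(prover_response(firmware, nonce), firmware, nonce))  # True
```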


Author(s):  
S. Arokiaraj ◽  
Dr. N. Viswanathan

With the advent of the Internet of Things (IoT), human activity (HA) recognition has found increasing application in health care, in terms of both diagnosis and clinical processes. These devices must be aware of human movements to provide better aid in clinical applications as well as in users' daily activities. In addition, with machine and deep learning algorithms, HA recognition systems have improved significantly in terms of recognition accuracy. However, most existing models still need improvement in terms of accuracy and computational overhead. In this research paper, we propose a BAT-optimized Long Short-Term Memory network (BAT-LSTM) for effective recognition of human activities using real-time IoT systems. The data are collected by invasively implanted IoT devices. The proposed BAT-LSTM is then deployed to extract temporal features, which are used for HA classification. Nearly 10,0000 samples were collected and used to evaluate the proposed model. For validation of the proposed framework, accuracy, precision, recall, specificity, and F1-score are chosen as parameters, and a comparison is made with other state-of-the-art deep learning models. The findings show that the proposed model outperforms the other learning models, confirming its suitability for HA recognition.
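
The abstract does not give architectural details; the snippet below is a minimal PyTorch sketch of the general pattern it describes: an LSTM over windows of sensor readings followed by a classification layer. The input dimensions, window length, and number of activity classes are assumptions, and the BAT optimization of hyperparameters is not shown.

```python
# Minimal sketch of an LSTM-based human-activity classifier (PyTorch).
# Dimensions and class count are assumed; the BAT hyperparameter search is omitted.
import torch
import torch.nn as nn

class HARLSTM(nn.Module):
    def __init__(self, n_features=6, hidden=64, n_classes=5):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, time_steps, n_features)
        _, (h_n, _) = self.lstm(x)        # h_n: (1, batch, hidden)
        return self.head(h_n[-1])         # logits: (batch, n_classes)

model = HARLSTM()
window = torch.randn(8, 128, 6)            # 8 windows of 128 sensor readings, 6 channels
logits = model(window)
print(logits.shape)                        # torch.Size([8, 5])
```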


Electronics ◽  
2020 ◽  
Vol 9 (3) ◽  
pp. 444 ◽  
Author(s):  
Valerio Morfino ◽  
Salvatore Rampone

In the field of Internet of Things (IoT) infrastructures, attack and anomaly detection are rising concerns. With the increased use of IoT infrastructure in every domain, threats and attacks on these infrastructures are also growing proportionally. In this paper, the performance of several machine learning algorithms in identifying cyber-attacks (namely SYN-DOS attacks) against IoT systems is compared, both in terms of application performance and in terms of training/application times. We use supervised machine learning algorithms included in the MLlib library of Apache Spark, a fast and general engine for big data processing. We show the implementation details and the performance of those algorithms on public datasets using a training set of up to 2 million instances. We adopt a Cloud environment, emphasizing the importance of scalability and elasticity of use. Results show that all the Spark algorithms used achieve very good identification accuracy (>99%). Overall, one of them, Random Forest, achieves an accuracy of 1. We also report a very short training time (23.22 sec for Decision Tree with 2 million rows). The experiments also show a very low application time (0.13 sec for more than 600,000 instances with Random Forest) using Apache Spark in the Cloud. Furthermore, the explicit model generated by Random Forest is very easy to implement using high- or low-level programming languages. In light of the results obtained, both in terms of computation times and identification performance, a hybrid approach for the detection of SYN-DOS cyber-attacks on IoT devices is proposed: the application of an explicit Random Forest model, implemented directly on the IoT device, along with a second-level analysis (training) performed in the Cloud.
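
The paper relies on Spark MLlib; the outline below shows how a Random Forest classifier can be trained with PySpark's DataFrame-based ML API on a labelled flow data set. The input path and column names are placeholders rather than the public datasets used in the paper.

```python
# Outline of training a Spark MLlib Random Forest on labelled flow records.
# File path and column names are placeholders, not the paper's datasets.
from pyspark.ml import Pipeline
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
from pyspark.ml.feature import VectorAssembler
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("syn-dos-detection").getOrCreate()
df = spark.read.csv("flows.csv", header=True, inferSchema=True)  # placeholder path

feature_cols = [c for c in df.columns if c != "label"]
pipeline = Pipeline(stages=[
    VectorAssembler(inputCols=feature_cols, outputCol="features"),
    RandomForestClassifier(labelCol="label", featuresCol="features", numTrees=100),
])

train, test = df.randomSplit([0.8, 0.2], seed=42)
model = pipeline.fit(train)
preds = model.transform(test)

acc = MulticlassClassificationEvaluator(labelCol="label", metricName="accuracy").evaluate(preds)
print(f"accuracy = {acc:.4f}")
spark.stop()
```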


Electronics ◽  
2020 ◽  
Vol 9 (1) ◽  
pp. 144 ◽  
Author(s):  
Yan Naung Soe ◽  
Yaokai Feng ◽  
Paulus Insap Santosa ◽  
Rudy Hartanto ◽  
Kouichi Sakurai

The application of a large number of Internet of Things (IoT) devices makes our lives more convenient and industries more efficient. However, it also makes cyber-attacks much more likely, because so many IoT devices are deployed and most of them do not have enough resources (i.e., computation and storage capacity) to run ordinary intrusion detection systems (IDSs). In this study, a lightweight machine learning-based IDS using a new feature selection algorithm is designed and implemented on a Raspberry Pi, and its performance is verified using a public dataset collected from an IoT environment. To make the system lightweight, we propose a new feature selection algorithm, called correlated-set thresholding on gain-ratio (CST-GR), to select only the truly necessary features. Because feature selection is conducted for three specific kinds of cyber-attacks, the number of selected features can be significantly reduced, which makes the classifiers very small and fast. Thus, our detection system is lightweight enough to be implemented and run on a Raspberry Pi. More importantly, because the features truly necessary for each kind of attack are exploited, good detection performance can be expected. The performance of our proposal is examined in detail with different machine learning algorithms, in order to learn which of them is the best option for our system. The experimental results indicate that the new feature selection algorithm selects only very few features for each kind of attack. Thus, the detection system is lightweight enough to be implemented in the Raspberry Pi environment with almost no sacrifice in detection performance.
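
The CST-GR algorithm itself is the paper's contribution and is not reproduced here; as a generic illustration of gain-ratio-based feature ranking (the quantity that CST-GR thresholds), the sketch below scores each discrete feature against a binary attack label and keeps those above a cut-off. The threshold and toy data are assumptions.

```python
# Generic gain-ratio ranking over discrete features (not the paper's CST-GR).
# The threshold and the toy data below are assumptions for illustration only.
import numpy as np

def entropy(values):
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def gain_ratio(feature, label):
    base = entropy(label)
    cond = 0.0
    for v in np.unique(feature):
        mask = feature == v
        cond += mask.mean() * entropy(label[mask])
    info_gain = base - cond
    split_info = entropy(feature)
    return info_gain / split_info if split_info > 0 else 0.0

# Toy data: feature 0 roughly tracks the label, feature 1 is pure noise.
rng = np.random.default_rng(0)
label = rng.integers(0, 2, 500)
X = np.column_stack([label ^ (rng.random(500) < 0.05), rng.integers(0, 2, 500)])

THRESHOLD = 0.1  # assumed cut-off
selected = [i for i in range(X.shape[1]) if gain_ratio(X[:, i], label) >= THRESHOLD]
print(selected)  # expected: [0]
```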

