Enhancing Security on IoT Devices via Machine Learning on Conditional Power Dissipation

Electronics ◽  
2020 ◽  
Vol 9 (11) ◽  
pp. 1799
Author(s):  
Dimitrios Myridakis ◽  
Stefanos Papafotikas ◽  
Konstantinos Kalovrektis ◽  
Athanasios Kakarountas

The rapid development of connected devices, and the sensitive data they produce, is a major challenge for manufacturers seeking to fully protect their devices from attack. Consumers expect their IoT devices and data to be adequately protected against a wide range of vulnerabilities and exploits. Successful attacks on IoT devices cause security problems and pose new challenges. Attacks launched by botnets residing on compromised IoT devices are increasing significantly in number, and the severity of the damage they cause is comparable to that of warfare. The characteristics of attacks vary widely from attack to attack and over time. Warnings about the severity of these attacks indicate a need for solutions that address them at their origin. In addition, there is a need to quarantine infected IoT devices, preventing the spread of the malware and thus the formation of the botnet. This work turns side-channel attack techniques to defensive use to protect low-cost smart devices, and integrates a machine learning-based intrusion detection algorithm that exploits the characteristics of the device's supply-current dissipation. The results of this work show successful detection of abnormal behavior in smart IoT devices.
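The core idea of the abstract, flagging abnormal device behavior from supply-current characteristics, can be sketched as an anomaly detector trained on windows of a "clean" current trace. This is a minimal illustration, not the paper's algorithm: the trace values, window statistics, and the use of an Isolation Forest are all assumptions made for the sketch.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

def window_features(trace, win=50):
    # Split a supply-current trace into fixed windows and summarize each
    # with simple statistics (mean, std, peak) as features.
    w = trace[: len(trace) // win * win].reshape(-1, win)
    return np.column_stack([w.mean(1), w.std(1), w.max(1)])

# Hypothetical "normal" supply-current trace (mA) with small sensor noise.
normal = 120 + rng.normal(0, 2, 5000)
# Hypothetical infected trace: periodic extra current draw from malware activity.
infected = normal.copy()
infected[::7] += 25

# Train only on normal behavior; anything unlike it is flagged as anomalous.
clf = IsolationForest(contamination=0.05, random_state=0)
clf.fit(window_features(normal))

flags = clf.predict(window_features(infected))  # -1 marks anomalous windows
anomaly_rate = (flags == -1).mean()
```

Training on normal traces only mirrors the intrusion-detection setting, where no labeled attack data may exist at deployment time.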

Electronics ◽  
2021 ◽  
Vol 10 (5) ◽  
pp. 600
Author(s):  
Gianluca Cornetta ◽  
Abdellah Touhafi

Low-cost, high-performance embedded devices are proliferating and a plethora of new platforms are available on the market. Some of them either have embedded GPUs or can be connected to external Machine Learning (ML) hardware accelerators. These enhanced hardware features enable new applications in which AI-powered smart objects can effectively and pervasively run distributed ML algorithms in real time, shifting part of the raw data analysis and processing from the cloud or edge to the device itself. In this context, Artificial Intelligence (AI) can be considered the backbone of the next generation of Internet of Things (IoT) devices, which will no longer merely be data collectors and forwarders, but truly "smart" devices with built-in data wrangling and data analysis features that leverage lightweight machine learning algorithms to make autonomous decisions in the field. This work thoroughly reviews and analyses the most popular ML algorithms, with particular emphasis on those that are more suitable to run on resource-constrained embedded devices. In addition, several machine learning algorithms have been built on top of a custom multi-dimensional array library. The designed framework has been evaluated and stress-tested on Raspberry Pi 3 and 4 embedded computers.
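To give a feel for the kind of lightweight algorithm the review targets, here is a logistic-regression trainer written with nothing but basic array operations, the style of code that ports naturally to a custom multi-dimensional array library on a constrained device. The data, dimensions, and learning rate are invented for illustration; this is not the authors' framework.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic binary-classification data: labels come from a noisy linear rule.
X = rng.normal(size=(200, 4))
true_w = np.array([1.5, -2.0, 0.5, 1.0])
y = (X @ true_w + rng.normal(0, 0.3, 200) > 0).astype(float)

# Batch gradient descent on the logistic loss, using only matmul,
# elementwise ops, and broadcasting -- no ML framework required.
w = np.zeros(4)
lr = 0.5
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # sigmoid predictions
    w -= lr * X.T @ (p - y) / len(y)     # gradient step

acc = (((X @ w) > 0) == y).mean()
```

On a microcontroller the same loop would typically run in fixed-point arithmetic and with streamed mini-batches, but the operation set is identical.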


Author(s):  
Muhammad Naveed Aman ◽  
Kee Chaing Chua ◽  
Biplab Sikdar

IoT is the enabling technology for a variety of new exciting services in a wide range of application areas including environmental monitoring, healthcare systems, energy management, transportation, and home and commercial automation. However, the low-cost and straightforward nature of IoT devices producing vast amounts of sensitive data raises many security concerns. Among the cyber threats, hardware-level threats are especially crucial for IoT systems. In particular, IoT devices are not physically protected and can easily be captured by an adversary to launch physical and side-channel attacks. This chapter introduces security protocols for IoT devices based on hardware security primitives called physically unclonable functions (PUFs). The protocols are discussed for the following major security principles: authentication and confidentiality, data provenance, and anonymity. The security analysis shows that security protocols based on hardware security primitives are not only secure against network-level threats but are also resilient against physical and side-channel attacks.
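The challenge-response flow behind PUF-based authentication can be sketched as follows. A real PUF derives its response from uncontrollable physical variations in the silicon; here a device-unique random key stands in for that behavior, which is an explicit modeling assumption, and the enrollment/verification steps are a generic textbook flow rather than the chapter's specific protocols.

```python
import hashlib
import secrets

class SimulatedPUF:
    """Toy stand-in for a physically unclonable function.

    The per-instance random key models manufacturing variation: every
    physical device instance would behave differently, and the mapping
    cannot be copied without the hardware itself.
    """
    def __init__(self):
        self._variation = secrets.token_bytes(32)  # stands in for silicon variation

    def response(self, challenge: bytes) -> bytes:
        return hashlib.sha256(self._variation + challenge).digest()

# Enrollment: the server measures and stores challenge-response pairs (CRPs)
# from the genuine device in a trusted setting.
device = SimulatedPUF()
challenge = secrets.token_bytes(16)
stored_response = device.response(challenge)

# Authentication: the server replays the challenge; only the one genuine
# physical instance can reproduce the stored response.
genuine_ok = device.response(challenge) == stored_response
clone_ok = SimulatedPUF().response(challenge) == stored_response  # attacker's copy
```

Because each challenge-response pair is ideally used once, capturing the device yields no reusable long-term secret stored in memory, which is the property that resists the physical attacks the chapter discusses.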


2021 ◽  
Vol 17 (3) ◽  
pp. 1-25
Author(s):  
Guangrong Zhao ◽  
Bowen Du ◽  
Yiran Shen ◽  
Zhenyu Lao ◽  
Lizhen Cui ◽  
...  

In this article, we propose LeaD, a new vibration-based communication protocol to Learn the unique patterns of vibration to Decode the short messages transmitted to smart IoT devices. Unlike existing vibration-based communication protocols that decode short messages symbol-wise, either in binary or multi-ary, the message recipient in LeaD receives vibration signals corresponding to bits-groups. Each group consists of multiple symbols sent in a burst, and the receiver decodes the group of symbols as a whole via a machine learning-based approach. The fundamental idea behind LeaD is that different combinations of symbols (1s or 0s) in a group produce unique and reproducible patterns of vibration. Therefore, decoding in vibration-based communication can be modeled as a pattern classification problem. We design and implement a number of different machine learning models as the core engine of LeaD's decoding algorithm to learn and recognize the vibration patterns. Through intensive evaluations on a large amount of collected data, the Convolutional Neural Network (CNN)-based model achieves the highest decoding accuracy (i.e., the lowest error rate), up to 97% at a relatively high bit rate of 40 bits/s, while competing vibration-based communication protocols achieve only 10 bits/s or 20 bits/s at similar decoding accuracy. Furthermore, we evaluate its performance under different challenging practical settings, and the results show that LeaD with the CNN engine is robust to poses, distances (within the valid range), and types of devices; a CNN model can therefore be trained beforehand and applied widely to different IoT devices under different circumstances. Finally, we implement LeaD on both an off-the-shelf smartphone and a smartwatch to measure its detailed resource consumption on smart devices. The computation time and energy consumption of its components show that LeaD is lightweight and can run in situ on low-cost smart IoT devices, e.g., smartwatches, without accumulated delay, introducing only marginal system overhead.
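The "decoding as pattern classification" idea can be illustrated with a toy pipeline: each 2-bit symbol group maps to a distinct, reproducible waveform, and the receiver classifies noisy observations against learned patterns. The waveform shapes and noise level are invented, and a nearest-template classifier stands in for the paper's CNN, purely to keep the sketch short.

```python
import numpy as np

rng = np.random.default_rng(2)

# Each 2-bit group gets its own reproducible vibration pattern.
# (Pure sine tones are an illustrative stand-in for real vibration motifs.)
groups = ["00", "01", "10", "11"]
t = np.linspace(0, 1, 100)
templates = {g: np.sin(2 * np.pi * (3 + i) * t) for i, g in enumerate(groups)}

def observe(g):
    # Noisy accelerometer reading of the transmitted pattern.
    return templates[g] + rng.normal(0, 0.2, t.size)

def decode(sig):
    # Nearest-template classification; LeaD trains a CNN for this step.
    return min(groups, key=lambda g: np.sum((sig - templates[g]) ** 2))

sent = [rng.choice(groups) for _ in range(50)]
received = [decode(observe(g)) for g in sent]
accuracy = np.mean([s == r for s, r in zip(sent, received)])
```

Decoding a whole group at once is what lets the bit rate rise: the classifier exploits the joint pattern of a burst instead of thresholding each symbol independently.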


Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3338
Author(s):  
Ivan Vajs ◽  
Dejan Drajic ◽  
Nenad Gligoric ◽  
Ilija Radovanovic ◽  
Ivan Popovic

Existing government air quality monitoring networks consist of static measurement stations, which are highly reliable and accurately measure a wide range of air pollutants, but they are very large, expensive and require significant amounts of maintenance. As a promising solution, low-cost sensors are being introduced as complementary air quality monitoring stations. These sensors are, however, not reliable due to their lower accuracy, short life cycle and corresponding calibration issues. Recent studies have shown that low-cost sensors are affected by relative humidity and temperature. In this paper, we explore machine learning methods to further improve the calibration algorithms, with the aim of increasing measurement accuracy by accounting for the impact of temperature and humidity on the readings. A detailed comparative analysis of linear regression, artificial neural network and random forest algorithms is presented, analyzing their performance on measurements of CO, NO2 and PM10 particles, with promising results: an achieved R2 of 0.93–0.97 for CO, 0.82–0.94 for NO2 and 0.73–0.89 for PM10, depending on the observed period of the year. A comprehensive analysis and recommendations on how low-cost sensors could be used as monitoring stations complementary to the reference ones, to increase spatial and temporal measurement resolution, are provided.
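The calibration setup described above can be sketched as a regression from (raw reading, temperature, humidity) to the reference value. The drift model, value ranges, and train/test split below are invented for illustration; the paper compares linear regression, neural networks, and random forests on real co-located measurements.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(3)

# Synthetic data: the low-cost sensor's raw output drifts with temperature
# and relative humidity around the true (reference) pollutant level.
n = 2000
ref = rng.uniform(5, 60, n)        # reference pollutant concentration
T = rng.uniform(-5, 35, n)         # temperature, degC
RH = rng.uniform(20, 95, n)        # relative humidity, %
raw = ref * (1 + 0.01 * (T - 20)) + 0.05 * RH + rng.normal(0, 1.5, n)

# Calibrate: learn to recover the reference value from raw + meteorology.
X = np.column_stack([raw, T, RH])
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[:1500], ref[:1500])
r2 = r2_score(ref[1500:], model.predict(X[1500:]))
```

Including T and RH as inputs is exactly what lets the model undo the humidity- and temperature-dependent bias that a raw-reading-only calibration cannot correct.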


2018 ◽  
Vol 10 (3) ◽  
pp. 61-83 ◽  
Author(s):  
Deepali Chaudhary ◽  
Kriti Bhushan ◽  
B.B. Gupta

This article describes how cloud computing has emerged as a strong competitor to traditional IT platforms by offering low-cost, "pay-as-you-go" computing and on-demand provisioning of services. Governments, as well as organizations, have migrated all or most of their IT infrastructure to the cloud. With the emergence of IoT devices and big data, the amount of data forwarded to the cloud has increased enormously, and the cloud computing paradigm alone is no longer sufficient. Furthermore, with the growing demand for IoT solutions in organizations, it has become essential to process data quickly, at scale, and on-site. Hence, fog computing was introduced to overcome these drawbacks of cloud computing by bringing intelligence to the edge of the network using smart devices. One major security issue related to the cloud is the DDoS attack. This article discusses in detail DDoS attacks, cloud computing, fog computing, how DDoS attacks affect the cloud environment, and how fog computing can be used in a cloud environment to solve a variety of problems.


2021 ◽  
Vol 4 ◽  
pp. 98-100
Author(s):  
Semen Gorokhovskyi ◽  
Yelyzaveta Pyrohova

With the rapid development of applications for mobile platforms, developers around the world already understand the need to impress with new technologies and to create applications with which the consumer can plunge into the world of virtual or augmented reality. The world's most popular mobile operating systems, Android and iOS, already offer well-known tools that make it easier to work with machine learning and augmented reality technology. However, it cannot be said that their use has reached its peak, as these technologies are still under active study and development. Every year the demand for mobile application developers increases, and more questions therefore arise as to how best to approach immersion in augmented reality and machine learning. From a tourist's point of view, there are already many applications that, with the help of these technologies, provide more information simply by pointing the camera at a specific object.

Augmented Reality (AR) is a technology that allows you to see the real environment in front of you with a digital complement superimposed on it. Ivan Sutherland's first head-mounted display, created in 1968 and nicknamed the «Sword of Damocles», paved the way for the development of AR, which continues to this day.

Augmented reality can be divided into two forms: location-based and vision-based. Location-based AR provides a digital overlay to a user moving through a physical area thanks to a GPS-enabled device; with a story or information attached, the user can learn more details about a particular location. With vision-based AR, certain user actions are performed only when the camera is aimed at a target object.

Thanks to technological advances happening every day, easy access to smart devices can be seen as the main engine of AR technology. As the smartphone market continues to grow, consumers can use their devices to interact with all types of digital information, and the experience of using a smartphone to combine the real and digital worlds is becoming more common. The success of AR applications in the last decade has been due to the proliferation of smartphones with the capabilities needed to run such applications. If companies want to remain competitive in their field, it is advisable to consider work related to AR. However, analyzing the market, one can see that there are no such applications aimed at prospective students of higher education institutions, that is, applications that would let anyone point a camera at a university building and learn important information about it. The UniApp application, based on the existing Swift and Watson Studio technologies, was developed to simplify obtaining information on higher education institutions.


Beverages ◽  
2019 ◽  
Vol 5 (4) ◽  
pp. 62 ◽  
Author(s):  
Claudia Gonzalez Viejo ◽  
Damir D. Torrico ◽  
Frank R. Dunshea ◽  
Sigfredo Fuentes

Beverages are a broad and important category within the food industry, comprising a wide range of sub-categories and types of drinks with different levels of complexity in their manufacturing and quality assessment. Traditional methods to evaluate the quality traits of beverages are tedious, time-consuming, and costly, and do not allow researchers to obtain results in real time. Therefore, there is a need to test and implement emerging technologies in order to automate and facilitate those analyses within this industry. This paper aimed to present the most recent publications and trends regarding the use of low-cost, reliable, and accurate remote or non-contact techniques using robotics, machine learning, computer vision, biometrics and artificial intelligence, and to identify the research gaps within the beverage industry. It was found that there is wide opportunity for the development and use of robotics and biometrics for all types of beverages, but especially for hot and non-alcoholic drinks. Furthermore, there is a lack of knowledge and clarity within the industry, and within its research, about the concepts of artificial intelligence and machine learning, as well as about the correct design and interpretation of models, which often omit relevant data or are over- or under-fitted.


Sensors ◽  
2020 ◽  
Vol 20 (11) ◽  
pp. 3144 ◽  
Author(s):  
Sherif Said ◽  
Ilyes Boulkaibet ◽  
Murtaza Sheikh ◽  
Abdullah S. Karar ◽  
Samer Alkork ◽  
...  

In this paper, a customizable wearable 3D-printed bionic arm is designed, fabricated, and optimized for a right-arm amputee. An experimental test was conducted with the user, in which control of the artificial bionic hand was accomplished successfully using surface electromyography (sEMG) signals acquired by a multi-channel wearable armband. The 3D-printed bionic arm was designed at a low cost of 295 USD and is lightweight at 428 g. To facilitate generic control of the bionic arm, sEMG data were collected for a set of gestures (fist, spread fingers, wave-in, wave-out) from a wide range of participants. The collected data were processed, and features related to the gestures were extracted for the purpose of training a classifier. In this study, several classifiers based on neural networks, support vector machines, and decision trees were constructed, trained, and statistically compared. The support vector machine classifier was found to exhibit an 89.93% success rate. Real-time testing of the bionic arm with the optimal classifier is demonstrated.
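The gesture-classification stage can be sketched as follows: extracted sEMG features form clusters per gesture, and a support vector machine separates them. The 8-channel feature space, cluster geometry, and sample counts below are synthetic assumptions standing in for the paper's real armband data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)

# Synthetic stand-in for sEMG features: each gesture is a cluster in an
# 8-channel RMS-feature space (channel count and spreads are invented).
gestures = ["fist", "spread", "wave_in", "wave_out"]
X_parts, y = [], []
for g in gestures:
    center = rng.normal(0, 1, 8) * 3          # gesture-specific activation profile
    X_parts.append(center + rng.normal(0, 0.8, (150, 8)))
    y += [g] * 150
X = np.vstack(X_parts)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=10).fit(Xtr, ytr)   # the paper's best-performing family
accuracy = clf.score(Xte, yte)
```

In a deployed prosthesis the same `predict` call runs on each incoming feature window, so inference latency of the chosen classifier matters as much as its accuracy.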


2020 ◽  
Vol 14 (4) ◽  
pp. 113-133
Author(s):  
Mary Shamala L. ◽  
Zayaraz G. ◽  
Vivekanandan K. ◽  
Vijayalakshmi V.

Internet of things (IoT) is a global network of uniquely addressable interconnected things, based on standard communication protocols. As the number of devices connected to the IoT escalates, they are becoming a likely target for hackers. Moreover, the limited resources of IoT devices make security an overhead on top of the device's actual functionality. Therefore, the cryptographic algorithms for such devices have to be as lightweight as possible. To tackle the resource-constrained nature of IoT devices, this article presents a lightweight cryptographic algorithm based on a single permutation and the iterated Even-Mansour construction. The proposed algorithm is implemented on low-cost microcontrollers, making it suitable for a wide range of IoT nodes.
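The iterated Even-Mansour construction the abstract refers to builds a cipher from a single fixed public permutation P and round keys: ciphertext = k_r XOR P(... P(k_1 XOR P(k_0 XOR plaintext)) ...). The toy below shows the structure on 16-bit blocks; the permutation, key schedule, and block size are invented for illustration, are not the article's design, and are not secure.

```python
MASK = 0xFFFF  # 16-bit toy block size

def P(x):
    # A fixed public permutation of 16-bit words: odd-constant multiply,
    # rotate, and xor. Each step is invertible, so the composition is too.
    x = (x * 0x6B8D) & MASK                 # 0x6B8D is odd => invertible mod 2^16
    x = ((x << 5) | (x >> 11)) & MASK       # rotate left by 5
    return x ^ 0x5A3C

def P_inv(x):
    x ^= 0x5A3C
    x = ((x >> 5) | (x << 11)) & MASK       # rotate right by 5
    return (x * pow(0x6B8D, -1, 1 << 16)) & MASK

def encrypt(pt, keys):
    # Iterated Even-Mansour: xor a round key, apply P, repeat; final key xor.
    x = pt
    for k in keys[:-1]:
        x = P(x ^ k)
    return x ^ keys[-1]

def decrypt(ct, keys):
    x = ct ^ keys[-1]
    for k in reversed(keys[:-1]):
        x = P_inv(x) ^ k
    return x

keys = [0x1234, 0xABCD, 0x0F0F, 0x7E21]     # r+1 round keys for r = 3 rounds
ct = encrypt(0xBEEF, keys)
roundtrip = decrypt(ct, keys)
```

The appeal for constrained devices is that only one permutation needs to be implemented and stored, with key material applied as cheap xors between rounds.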


Sensors ◽  
2020 ◽  
Vol 20 (16) ◽  
pp. 4372 ◽  
Author(s):  
Yan Naung Soe ◽  
Yaokai Feng ◽  
Paulus Insap Santosa ◽  
Rudy Hartanto ◽  
Kouichi Sakurai

With the rapid development and popularization of Internet of Things (IoT) devices, an increasing number of cyber-attacks are targeting such devices. It has been reported that most attacks in IoT environments are botnet-based. Many security weaknesses still exist on IoT devices because most of them do not have enough memory or computational resources for robust security mechanisms. Moreover, many existing rule-based detection systems can be circumvented by attackers. In this study, we propose a machine learning (ML)-based botnet attack detection framework with a sequential detection architecture. An efficient feature selection approach is adopted to implement a lightweight detection system with high performance. The overall detection performance reaches around 99% for botnet attack detection using three different ML algorithms: artificial neural network (ANN), J48 decision tree, and Naïve Bayes. The experimental results indicate that the proposed architecture can effectively detect botnet-based attacks and can also be extended with corresponding sub-engines for new kinds of attacks.
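The framework's two key ingredients, selecting a small set of informative features and running a lightweight classifier on them, can be sketched as a pipeline. The flow features, class structure, and dataset below are synthetic assumptions; the paper uses real IoT traffic and evaluates ANN, J48, and Naïve Bayes, of which Naïve Bayes is used here.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)

# Synthetic traffic records: 4 informative features (think packet rate,
# mean packet size, etc. -- semantics invented) shifted by class, plus 16
# irrelevant features that feature selection should discard.
n = 1000
y = rng.integers(0, 2, n)                       # 0 = benign, 1 = botnet
informative = rng.normal(0, 1, (n, 4)) + y[:, None] * 2.0
noise = rng.normal(0, 1, (n, 16))
X = np.hstack([informative, noise])

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

# Keep only the k best-scoring features, then classify on that small set:
# a smaller feature vector means less per-packet work on the IoT device.
model = make_pipeline(SelectKBest(f_classif, k=4), GaussianNB())
model.fit(Xtr, ytr)
accuracy = model.score(Xte, yte)
```

Pruning features before classification is what keeps the detector lightweight enough for memory-constrained devices while preserving detection performance.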

