IoT Devices
Recently Published Documents

TOTAL DOCUMENTS: 3911 (FIVE YEARS: 3701)
H-INDEX: 35 (FIVE YEARS: 32)

2022 ◽ Vol 18 (2) ◽ pp. 1-25
Author(s): Jing Li, Weifa Liang, Zichuan Xu, Xiaohua Jia, Wanlei Zhou

We are embracing the era of the Internet of Things (IoT). The latency introduced by unstable wireless networks, a consequence of the limited resources of IoT devices, seriously degrades users' quality of service, particularly the service delay they experience. Mobile Edge Computing (MEC) offers promising solutions for delay-sensitive IoT applications: cloudlets (edge servers) are co-located with wireless access points in the proximity of IoT devices, so service response latency can be significantly shortened because data processing is performed within the local MEC network. Meanwhile, most IoT applications impose Service Function Chain (SFC) requirements on their data transmission, where each data packet traveling from the source gateway of an IoT device to the application's destination (a cloudlet) must pass through every Virtual Network Function (VNF) in the SFC. However, little attention has been paid to service provisioning for multi-source IoT applications in an MEC network with SFC enforcement. In this article, we study service provisioning in an MEC network for multi-source IoT applications with SFC requirements, aiming to minimize the provisioning cost, where each IoT application has multiple data streams from different sources to be uploaded to a location (cloudlet) in the MEC network for aggregation, processing, and storage. To this end, we first formulate two novel optimization problems: the cost minimization problem of service provisioning for a single multi-source IoT application, and the service provisioning problem for a set of multi-source IoT applications, and show that both problems are NP-hard.
Second, we propose a service provisioning framework in the MEC network for multi-source IoT applications that consists of uploading stream data from the multiple sources of an IoT application to the MEC network, data stream aggregation and routing through VNF instance placement and sharing, and workload balancing among cloudlets. Third, we devise an efficient algorithm for the cost minimization problem built upon the proposed framework, and extend the solution to the service provisioning problem for a set of multi-source IoT applications. Finally, we evaluate the performance of the proposed algorithms through simulations. The results demonstrate that the proposed algorithms are promising.
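The single-application setting above can be illustrated with a toy cost model: pick the aggregation cloudlet that minimizes the sum of per-source routing costs plus the cloudlet's processing cost. All names and numbers below are hypothetical, and this greedy sketch omits the paper's VNF placement/sharing and load-balancing components.

```python
# Toy sketch of the cost-minimization setting for one multi-source IoT
# application: choose the aggregation cloudlet that minimizes the sum of
# routing costs from every source plus the cloudlet's processing cost.
def cheapest_cloudlet(sources, cloudlets, route_cost):
    """sources: list of source gateway ids; cloudlets: {id: processing_cost};
    route_cost: {(src, cloudlet): cost}. Returns (cloudlet, total_cost)."""
    best, best_cost = None, float("inf")
    for c, proc in cloudlets.items():
        total = proc + sum(route_cost[(s, c)] for s in sources)
        if total < best_cost:
            best, best_cost = c, total
    return best, best_cost

sources = ["gw1", "gw2"]                       # hypothetical IoT gateways
cloudlets = {"c1": 4, "c2": 1}                 # processing cost per cloudlet
route_cost = {("gw1", "c1"): 1, ("gw2", "c1"): 2,
              ("gw1", "c2"): 3, ("gw2", "c2"): 5}
print(cheapest_cloudlet(sources, cloudlets, route_cost))  # c1: 4+1+2=7 beats c2: 1+3+5=9
```

The exhaustive scan works here because a single application's choice reduces to one cloudlet; the NP-hardness in the article arises from chained VNFs and multiple competing applications.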


2022 ◽ Vol 22 (1) ◽ pp. 1-22
Author(s): Yanchen Qiao, Weizhe Zhang, Xiaojiang Du, Mohsen Guizani

With the construction of smart cities, the number of Internet of Things (IoT) devices is growing rapidly, leading to explosive growth in malware designed for IoT devices. Such malware poses a serious threat to the security of IoT devices. Traditional malware classification methods rely mainly on feature engineering: to improve accuracy, they extract a large number of different types of features from malware files, which makes classification highly complex. To address these issues, this article proposes a malware classification method based on Word2Vec and a Multilayer Perceptron (MLP). First, for each malware sample, Word2Vec is used to compute a word vector for all bytes of the binary file and all instructions in the assembly file. Second, we combine these vectors into a 256×256×2 matrix. Finally, we design an MLP-based deep learning network and train the model, which is then used to classify the test samples. The experimental results show that the method achieves a high accuracy of 99.54%.
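The fixed-size input construction can be sketched as follows. As a stand-in for the Word2Vec byte embeddings (which would require a trained model), this toy uses a normalized byte-bigram co-occurrence matrix to produce one 256×256 channel of the kind of input the paper describes; the MLP classifier itself is omitted.

```python
# Turn a raw binary sample into a fixed 256x256 matrix, analogous to one
# channel of the paper's 256x256x2 input. Each cell counts how often byte
# value a is followed by byte value b, normalized by total bigram count.
def byte_bigram_matrix(data: bytes):
    m = [[0.0] * 256 for _ in range(256)]
    for a, b in zip(data, data[1:]):
        m[a][b] += 1.0
    total = max(1.0, float(len(data) - 1))
    return [[v / total for v in row] for row in m]

sample = bytes([0x90, 0x90, 0xCC, 0x90])   # hypothetical byte stream
mat = byte_bigram_matrix(sample)
print(mat[0x90][0x90])                     # one 0x90->0x90 bigram out of three
```

A fixed 256×256 shape regardless of file size is what lets a dense network like an MLP consume samples of wildly varying length.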


2022 ◽ Vol 27 (2) ◽ pp. 1-16
Author(s): Ming Han, Ye Wang, Jian Dong, Gang Qu

One major challenge in deploying Deep Neural Networks (DNNs) in resource-constrained applications, such as edge nodes, mobile embedded systems, and IoT devices, is their high energy cost. The emerging approximate computing methodology can effectively reduce energy consumption during DNN computation. However, a recent study shows that weight storage and access operations can dominate a DNN's energy consumption, because the huge DNN weights must be stored in high-energy-cost DRAM. In this paper, we propose Double-Shift, a low-power DNN weight storage and access framework, to solve this problem. Enabled by approximate decomposition and quantization, Double-Shift effectively reduces the data size of the weights. By designing a novel weight storage allocation strategy, Double-Shift boosts energy efficiency by trading energy-consuming weight storage and access operations for low-energy-cost computations. Our experimental results show that Double-Shift reduces DNN weights to 3.96%–6.38% of their original size and achieves an energy saving of 86.47%–93.62%, while keeping the DNN classification error within 2%.
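One plausible reading of the name "Double-Shift" is approximating each weight as a signed sum of two powers of two, so a multiply becomes two bit-shifts and an add. This is purely our speculation from the name, not the paper's actual scheme, which combines approximate decomposition and quantization with a storage-allocation strategy; the sketch only shows why such a representation shrinks storage while keeping error small.

```python
# Hypothetical shift-friendly quantization: approximate a weight w as
# p1 + p2 where each term is a signed power of two, so w*x can be computed
# with two shifts and an add instead of a full multiply.
import math

def nearest_pow2(x):
    """Closest signed power of two to x (0 maps to 0)."""
    if x == 0:
        return 0.0
    s = math.copysign(1.0, x)
    e = round(math.log2(abs(x)))
    return s * (2.0 ** e)

def double_shift(w):
    p1 = nearest_pow2(w)        # first shift term
    p2 = nearest_pow2(w - p1)   # second shift term covers the residual
    return p1, p2               # reconstruct as p1 + p2

w = 0.30
p1, p2 = double_shift(w)
print(p1, p2, abs((p1 + p2) - w))   # 0.25, 0.0625, small residual error
```

Storing only two small exponents per weight (a few bits each) instead of a 32-bit float is the kind of size reduction that could bring weights down to a few percent of their original footprint.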


2022 ◽ Vol 54 (8) ◽ pp. 1-36
Author(s): Satyaki Roy, Preetam Ghosh, Nirnay Ghosh, Sajal K. Das

The advent of the edge computing paradigm moves computational and storage resources away from data centers and closer to the edge of the network, which largely comprises heterogeneous IoT devices collecting huge volumes of data. This paradigm has led to considerable improvements in network latency and bandwidth usage over the traditional cloud-centric paradigm. However, next-generation networks continue to be stymied by their inability to achieve adaptive, energy-efficient, timely data transfer in a dynamic and failure-prone environment: the very optimization challenges that biological networks have dealt with over millions of years of evolution. The transcriptional regulatory network (TRN) is a biological network whose innate robustness is a function of its underlying graph topology. In this article, we survey these properties of the TRN and the metrics derived from them that lend themselves to the design of smart networking protocols and architectures. We then review a body of literature on bio-inspired networking solutions that leverage the stated properties of the TRN. Finally, we present a vision for specific aspects of TRNs that may inspire future research directions in the fields of large-scale social and communication networks.
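The claim that robustness is a function of graph topology can be made concrete with a toy experiment: a hub-dominated (star) graph tolerates random node failures well but shatters when its hub is targeted. The graph and failure model here are illustrative assumptions, not the TRN metrics the article surveys.

```python
# Compare random-failure vs. targeted-hub-removal robustness by measuring
# the largest connected component that survives a node removal (BFS).
from collections import deque

def largest_component(adj, removed):
    alive = set(adj) - removed
    best, seen = 0, set()
    for start in alive:
        if start in seen:
            continue
        comp, q = 0, deque([start])
        seen.add(start)
        while q:
            u = q.popleft()
            comp += 1
            for v in adj[u]:
                if v in alive and v not in seen:
                    seen.add(v)
                    q.append(v)
        best = max(best, comp)
    return best

# Star graph: one hub (node 0) connected to nine leaves.
adj = {0: list(range(1, 10))}
for i in range(1, 10):
    adj[i] = [0]

print(largest_component(adj, removed={3}))  # lose one leaf: 9 nodes stay connected
print(largest_component(adj, removed={0}))  # lose the hub: network shatters
```

TRNs balance exactly this tension: hub-heavy topologies are efficient but fragile to targeted loss, which motivates the redundancy motifs the article discusses.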


2022 ◽ Vol 25 (1) ◽ pp. 1-36
Author(s): Savvas Savvides, Seema Kumar, Julian James Stephen, Patrick Eugster

With the advent of the Internet of Things (IoT), billions of devices are expected to continuously collect and process sensitive data (e.g., location, personal health factors). Due to the limited computational capacity of IoT devices, the current de facto model for building IoT applications is to send the gathered data to the cloud for computation. While building private cloud infrastructures for handling large data streams can be expensive, using low-cost public (untrusted) cloud infrastructures to process continuous queries over sensitive data raises strong concerns about data confidentiality. This article presents C3PO, a confidentiality-preserving continuous query processing engine that leverages the public cloud. The key idea is to intelligently utilize partially homomorphic and property-preserving encryption to perform as many computationally intensive operations as possible in the untrusted cloud, without revealing plaintext. C3PO provides simple abstractions that hide from the developer the complexities of applying cryptographic primitives, reasoning about their performance, deciding which computations can be executed in an untrusted tier, and optimizing cloud resource usage. An empirical evaluation with several benchmarks and case studies shows the feasibility of our approach. We consider different classes of IoT devices that differ in their computational and memory resources (from a Raspberry Pi 3 to a very small device with a Cortex-M3 microprocessor) and, through the use of optimizations, demonstrate the feasibility of using partially homomorphic and property-preserving encryption on IoT devices.
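A textbook example of partially homomorphic encryption is the Paillier cryptosystem, which is additively homomorphic: the untrusted cloud can sum encrypted sensor readings without ever seeing plaintext, which is the flavor of computation C3PO offloads. The sketch below uses demo-sized primes for readability; real deployments use 2048-bit keys and a hardened library, and C3PO's engine is far more involved.

```python
# Toy Paillier: Enc(m1) * Enc(m2) mod n^2 decrypts to m1 + m2, so an
# untrusted party can aggregate ciphertexts without the secret key.
import math, random

p, q = 17, 19                      # insecure demo primes
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)       # private key component
mu = pow(lam, -1, n)               # valid because we pick g = n + 1

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:     # r must be a unit mod n
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

c1, c2 = encrypt(5), encrypt(7)    # e.g., two encrypted sensor readings
c_sum = (c1 * c2) % n2             # homomorphic addition in the cloud
print(decrypt(c_sum))              # 12
```

Note what the scheme cannot do: multiplication of two plaintexts or comparisons need other primitives (hence C3PO's additional use of property-preserving encryption and its cost-aware placement of operations).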


2022 ◽ Vol 18 (2) ◽ pp. 1-21
Author(s): Yubo Yan, Panlong Yang, Jie Xiong, Xiang-Yang Li

The global IoT market is experiencing rapid growth, with a massive number of IoT/wearable devices deployed around us and even on our bodies. This trend requires more users to upload data frequently and promptly to access points (APs). Previous work mainly focuses on improving uplink throughput. However, supporting more concurrent users is actually more important than improving per-user throughput, as IoT devices may not require very high transmission rates but are usually deployed in large numbers. In the current state of the art (uplink MU-MIMO), the number of concurrent transmissions is either confined to the number of antennas at an AP (the node degree of freedom, or node-DoF), or APs must be clock-synchronized over cables to support more concurrent transmissions. Even then, synchronized APs incur a very high collaboration overhead, prohibiting real-life adoption. We thus propose novel schemes that remove the cable-synchronization constraint while still supporting more concurrent users than the node-DoF limit, and at the same time minimize the collaboration overhead. In this paper, we design, implement, and experimentally evaluate OpenCarrier, the first distributed system to break the user limitation for uplink MU-MIMO networks with coordinated APs. Our experiments demonstrate that OpenCarrier supports up to five high-throughput uplink transmissions in an MU-MIMO network with 2-antenna APs.


2022 ◽ Vol 3 (1) ◽ pp. 1-23
Author(s): Mao V. Ngo, Tie Luo, Tony Q. S. Quek

Advances in deep neural networks (DNNs) have significantly enhanced real-time detection of anomalous data in IoT applications. However, the complexity-accuracy-delay dilemma persists: complex DNN models offer higher accuracy, but typical IoT devices can barely afford the computation load, and the remedy of offloading the load to the cloud incurs long delays. In this article, we address this challenge by proposing an adaptive anomaly detection scheme with hierarchical edge computing (HEC). Specifically, we first construct multiple anomaly detection DNN models of increasing complexity and associate each of them with a corresponding HEC layer. Then, we design an adaptive model selection scheme that is formulated as a contextual-bandit problem and solved using a reinforcement learning policy network. We also incorporate a parallel policy training method that accelerates training by taking advantage of distributed models. We build an HEC testbed using real IoT devices and implement and evaluate our contextual-bandit approach on both univariate and multivariate IoT datasets. In comparison with baseline and state-of-the-art schemes, our adaptive approach strikes the best accuracy-delay tradeoff on the univariate dataset and achieves the best accuracy and F1-score on the multivariate dataset, with only a negligibly longer delay than the best (but inflexible) scheme.
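The contextual-bandit formulation can be sketched with a much simpler learner than the paper's policy network: contexts stand for input difficulty, arms stand for the small/medium/large detection models, and an epsilon-greedy agent learns which model pays off best per context. Contexts, rewards, and the ground-truth mapping below are synthetic assumptions for illustration.

```python
# Epsilon-greedy contextual bandit: learn which detection model (arm)
# gives the best reward for each input context, from reward feedback only.
import random

random.seed(7)
contexts = ["easy", "hard"]
arms = 3                                   # e.g., small/medium/large DNN
best = {"easy": 0, "hard": 2}              # hidden ground truth (synthetic)

est = {c: [0.0] * arms for c in contexts}  # running reward estimates
cnt = {c: [0] * arms for c in contexts}
eps = 0.2

for _ in range(2000):
    ctx = random.choice(contexts)
    if random.random() < eps:
        a = random.randrange(arms)         # explore
    else:
        a = max(range(arms), key=lambda i: est[ctx][i])  # exploit
    reward = 1.0 if a == best[ctx] else 0.0
    cnt[ctx][a] += 1
    est[ctx][a] += (reward - est[ctx][a]) / cnt[ctx][a]  # incremental mean

learned = {c: max(range(arms), key=lambda i: est[c][i]) for c in contexts}
print(learned)                             # recovers the best arm per context
```

The paper replaces the tabular estimates with a policy network so the scheme generalizes over rich contexts, and its reward trades off detection accuracy against end-to-end delay rather than a 0/1 hit.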


2022 ◽ Vol 54 (9) ◽ pp. 1-35
Author(s): Ismaeel Al Ridhawi, Ouns Bouachir, Moayad Aloqaily, Azzedine Boukerche

Internet of Things (IoT) systems have advanced greatly in the past few years, especially with the support of Machine Learning (ML) and Artificial Intelligence (AI) solutions. Numerous AI-supported IoT devices play a significant role in providing complex, user-specific smart city services. Given the multitude of heterogeneous wireless networks, the plethora of compute and storage architectures and paradigms, and the abundance of mobile and vehicular IoT devices, true smart city experiences are attainable only through a cooperative, intelligent, and secure IoT framework. This article provides an extensive study of different cooperative systems and envisions a cooperative solution that supports integration and collaboration among both centralized and distributed systems, in which intelligent AI-supported IoT devices such as smart UAVs support data collection, processing, and service provisioning. Moreover, secure and collaborative decentralized solutions such as blockchain are considered in the service provisioning process to enable enhanced privacy and authentication features for IoT applications. As such, user-specific complex services and applications within smart city environments can be delivered in a timely, secure, and efficient manner.


2022 ◽ Vol 54 (9) ◽ pp. 1-36
Author(s): Konstantinos Arakadakis, Pavlos Charalampidis, Antonis Makrogiannakis, Alexandros Fragkiadakis

The devices forming Internet of Things (IoT) networks need to be reprogrammed over the air so that new features can be added, software bugs or security vulnerabilities resolved, and their applications repurposed. The limitations of IoT devices, such as installation in locations with limited physical access, their resource-constrained nature, large scale, and high heterogeneity, should be taken into consideration when designing an efficient and reliable pipeline for over-the-air programming (OTAP). In this work, we present a survey of OTAP techniques that can be applied to IoT networks. We highlight the main challenges and limitations of OTAP for IoT devices and analyze the essential steps of the firmware update process, along with the different approaches and techniques that implement them. In addition, we discuss schemes that focus on securing the OTAP process. Finally, we present a collection of state-of-the-art open-source and commercial platforms that integrate secure and reliable OTAP.
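The core of the secure-OTAP step the survey covers is verifying a firmware image before flashing. As a minimal sketch, the vendor tags the image with an HMAC under a key shared with the device, and the device rejects any image whose tag does not verify; real OTAP schemes typically use asymmetric signatures (e.g., ECDSA) plus version and rollback checks. The key and image bytes here are made up.

```python
# Firmware integrity/authenticity check: sign the image with HMAC-SHA256
# and verify on the device with a constant-time comparison before flashing.
import hashlib, hmac

DEVICE_KEY = b"per-device-provisioned-key"     # hypothetical shared secret

def sign_firmware(image: bytes) -> bytes:
    return hmac.new(DEVICE_KEY, image, hashlib.sha256).digest()

def verify_firmware(image: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(sign_firmware(image), tag)

firmware = b"\x7fELF...v2.1.0"                 # stand-in firmware blob
tag = sign_firmware(firmware)

print(verify_firmware(firmware, tag))          # True: safe to flash
print(verify_firmware(firmware + b"\x00", tag))  # False: reject tampered image
```

`hmac.compare_digest` avoids timing side channels during verification, one of the small details that separates a robust OTAP pipeline from a naive byte-compare.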

