communication overhead
Recently Published Documents

TOTAL DOCUMENTS: 583 (FIVE YEARS 244)
H-INDEX: 18 (FIVE YEARS 5)

2022 ◽ Vol 22 (3) ◽ pp. 1-22
Author(s): Yi Liu ◽ Ruihui Zhao ◽ Jiawen Kang ◽ Abdulsalam Yassine ◽ Dusit Niyato ◽ ...

Federated Edge Learning (FEL) allows edge nodes to collaboratively train a global deep learning model for edge computing in the Industrial Internet of Things (IIoT), which significantly promotes the development of Industry 4.0. However, FEL faces two critical challenges: communication overhead and data privacy. FEL suffers from expensive communication overhead when training large-scale multi-node models. Furthermore, because FEL is vulnerable to gradient leakage and label-flipping attacks, the training process of the global model is easily compromised by adversaries. To address these challenges, we propose a communication-efficient and privacy-enhanced asynchronous FEL framework for edge computing in IIoT. First, we introduce an asynchronous model update scheme to reduce the time edge nodes spend waiting for global model aggregation. Second, we propose an asynchronous local differential privacy mechanism, which improves communication efficiency and mitigates gradient leakage attacks by adding well-designed noise to the gradients of edge nodes. Third, we design a cloud-side malicious node detection mechanism that detects malicious nodes by testing local model quality and prevents them from participating in training, thereby mitigating label-flipping attacks. Extensive experimental studies on two real-world datasets demonstrate that the proposed framework not only improves communication efficiency but also mitigates malicious attacks, while its accuracy remains comparable to traditional FEL frameworks.
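The second step the abstract describes, perturbing each edge node's gradient before upload, is a local differential privacy mechanism. Below is a minimal sketch of such a perturbation, assuming gradient clipping and Laplace noise calibrated to the clipping bound; these choices are illustrative assumptions, not the paper's exact asynchronous noise design.

```python
import numpy as np

def ldp_perturb_gradient(grad, clip_norm=1.0, epsilon=0.5):
    """Clip a local gradient and add Laplace noise before uploading it.

    Illustrative local-DP step: clip_norm and epsilon are assumed
    hyperparameters, not values from the paper.
    """
    grad = np.asarray(grad, dtype=float)
    norm = np.linalg.norm(grad)
    if norm > clip_norm:
        grad = grad * (clip_norm / norm)   # bound the sensitivity of the released gradient
    scale = (2.0 * clip_norm) / epsilon    # Laplace scale = sensitivity / privacy budget
    return grad + np.random.laplace(loc=0.0, scale=scale, size=grad.shape)
```

The perturbed gradient, rather than the raw one, is what an edge node would upload for aggregation.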


2022 ◽ Vol 18 (2) ◽ pp. 1-23
Author(s): Junyang Shi ◽ Xingjian Chen ◽ Mo Sha

IEEE 802.15.4-based wireless sensor-actuator networks have been widely adopted by process industries in recent years because of their significant role in improving industrial efficiency and reducing operating costs. Today, industrial wireless sensor-actuator networks are becoming much larger and more complex than before. However, a large, complex mesh network is hard to manage and inflexible to change once deployed. In addition, flooding-based time synchronization and information dissemination introduce significant communication overhead. More importantly, the delivery of urgent and critical information such as emergency alarms suffers long delays, because those messages must go through hop-by-hop transport. A promising way to overcome these limitations is to enable direct messaging from a long-range radio to an IEEE 802.15.4 radio, so that messages can be delivered to all field devices in a single hop. This article presents our study on enabling cross-technology communication from LoRa to ZigBee, using the energy emission of the LoRa radio as the carrier to deliver information. Experimental results show that our cross-technology communication approach provides reliable communication from LoRa to ZigBee with a throughput of up to 576.80 bps and a bit error rate of up to 5.23% in the 2.4 GHz band.
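The key mechanism, treating the LoRa radio's energy emission as a carrier that a ZigBee (IEEE 802.15.4) receiver can sense, can be pictured as energy-based on-off keying: the LoRa radio emits during a bit window to signal a 1 and stays silent for a 0, and the ZigBee side recovers bits from its energy (RSSI) readings. A minimal decoding sketch, assuming per-bit RSSI averaging and a fixed threshold (both assumptions, not the paper's actual demodulator):

```python
def decode_energy_ook(rssi_samples, samples_per_bit, threshold_dbm=-80.0):
    """Recover bits from a sequence of RSSI readings via on-off keying.

    threshold_dbm and samples_per_bit are assumed parameters; the real
    system's symbol timing and threshold selection are not given here.
    """
    bits = []
    for start in range(0, len(rssi_samples) - samples_per_bit + 1, samples_per_bit):
        window = rssi_samples[start:start + samples_per_bit]
        avg_energy = sum(window) / len(window)
        # High average energy in the window -> the LoRa radio was emitting -> bit 1.
        bits.append(1 if avg_energy > threshold_dbm else 0)
    return bits
```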


2022 ◽ Vol 16 (4) ◽ pp. 1-43
Author(s): Xu Yang ◽ Chao Song ◽ Mengdi Yu ◽ Jiqing Gu ◽ Ming Liu

Recently, algorithms that count local topology structures, such as triangles, have been widely used in social network analysis, recommendation systems, user portraits and other fields. The problem of counting global and local triangles in a graph stream has been widely studied, and numerous streaming triangle counting algorithms have emerged. To improve the throughput and scalability of streaming algorithms, distributed streaming algorithms running on multiple machines have also received much attention. In this article, we first propose a framework for distributed streaming algorithms based on the Master-Worker-Aggregator architecture. Its two core parts are the edge distribution strategy, which largely determines performance in terms of communication overhead and workload balance, and the aggregation method, which is critical for obtaining unbiased estimates of the global and local triangle counts in a graph stream. We then extend the state-of-the-art centralized algorithm TRIÈST into four distributed algorithms under our framework. Experimental results show that, compared with its competitors, DVHT-i excels in accuracy and speed, performing better than the best existing distributed streaming algorithm. DEHT-b is the fastest algorithm and has the least communication overhead; moreover, it achieves nearly perfect workload balance.
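For context, the centralized algorithm being extended estimates triangle counts from a fixed-size reservoir of sampled edges. The sketch below is a simplified single-machine version of that TRIÈST-base idea, tracking only the global count; it omits the per-node local counters and the Master-Worker-Aggregator layer that the distributed algorithms add.

```python
import random

class TriestBaseSketch:
    """Simplified TRIÈST-base-style streaming triangle estimator.

    Assumes a simple undirected graph stream (no self-loops) and a
    memory budget of at least 3 edges.
    """
    def __init__(self, memory_budget):
        assert memory_budget >= 3
        self.M = memory_budget      # maximum number of sampled edges
        self.sample = set()         # sampled edges stored as frozensets
        self.t = 0                  # edges seen so far
        self.count = 0.0            # triangles counted within the sample

    def _neighbors(self, u):
        return {next(iter(e - {u})) for e in self.sample if u in e}

    def _update(self, u, v, delta):
        # Add or remove the triangles that edge (u, v) closes in the sample.
        self.count += delta * len(self._neighbors(u) & self._neighbors(v))

    def add_edge(self, u, v):
        self.t += 1
        if self.t <= self.M:
            keep = True
        elif random.random() < self.M / self.t:
            evicted = random.choice(list(self.sample))   # reservoir eviction
            self.sample.remove(evicted)
            self._update(*tuple(evicted), -1)
            keep = True
        else:
            keep = False
        if keep:
            self._update(u, v, +1)
            self.sample.add(frozenset((u, v)))

    def estimate(self):
        # Rescale the sampled count to an unbiased global estimate.
        if self.t <= self.M:
            return self.count
        scale = (self.t * (self.t - 1) * (self.t - 2)) / (self.M * (self.M - 1) * (self.M - 2))
        return self.count * scale
```

In the distributed setting described above, each worker would run such an estimator on the edges routed to it by the distribution strategy, and the aggregator would combine the workers' counts into the final estimate.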


Author(s): Tarasvi Lakum ◽ Barige Thirumala Rao

In this paper, we propose a mutual query data sharing protocol (MQDS) to overcome the encryption and decryption time limitations of existing protocols such as Boneh, Rivest-Shamir-Adleman (RSA), the multi-bit transposed ring learning parity with noise (TRLPN) and ring learning parity with noise (Ring-LPN) cryptosystems, key-ordered decisional learning parity with noise (kO-DLPN), and the KD_CS protocol. The proposed scheme provides security for authenticated user data shared among distributed physical users and devices. The data sharing protocol is designed to resist chosen-ciphertext attacks (CCA) under the hardness of the query shared strong Diffie-Hellman (SDH) problem. The proposed work is evaluated against the existing data sharing protocols in terms of computational and communication overhead, measured through their response times.


2022
Author(s): Chandan Kumar Sheemar ◽ Dirk Slock

This paper presents two novel hybrid beamforming (HYBF) designs for a multi-cell massive multiple-input multiple-output (mMIMO) millimeter wave (mmWave) full duplex (FD) system under limited dynamic range (LDR). First, we present a novel centralized HYBF (C-HYBF) scheme based on alternating optimization. In general, the complexity of C-HYBF schemes scales quadratically with the number of users and cells, which may limit their scalability. Moreover, they require significant communication overhead to transfer complete channel state information (CSI) to the central node every channel coherence time, and the central node requires very high computational power to jointly optimize many variables for the uplink (UL) and downlink (DL) users in FD systems. To overcome these drawbacks, we propose a very low-complexity and scalable cooperative per-link parallel and distributed (P&D)-HYBF scheme. It allows each mmWave FD base station (BS) to update the beamformers for its users in a distributed fashion, independently and in parallel on different computational processors. The complexity of P&D-HYBF scales only linearly as the network size grows, making it desirable for the next generation of large and dense mmWave FD networks. Simulation results show that both designs significantly outperform a fully digital half duplex (HD) system with only a few radio-frequency (RF) chains, and that the two designs achieve similar performance.
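The structural point of the P&D scheme, that each base station updates the beamformers for its own links independently and in parallel, can be sketched as a parallel map over per-BS states. The per-BS update body below is a placeholder (an assumption), since the actual beamformer optimization is not reproduced here; the sketch only illustrates why the per-round cost grows linearly with the number of cells.

```python
from concurrent.futures import ProcessPoolExecutor

def update_bs_links(bs_state):
    # Per-BS step: in a per-link parallel and distributed scheme, each base
    # station would optimize the hybrid beamformers of its own UL/DL users
    # from locally available information. Placeholder body (assumption):
    # the state is returned unchanged.
    return bs_state

def parallel_round(bs_states):
    # Every base station updates independently on its own processor, so one
    # outer iteration costs roughly one per-BS update, regardless of cell count.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(update_bs_links, bs_states))

if __name__ == "__main__":
    states = [{"bs_id": i} for i in range(4)]   # toy network with four cells
    for _ in range(10):                         # outer alternating-optimization rounds
        states = parallel_round(states)
```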


Author(s): Ghassen Ben Brahim ◽ Nazeeruddin Mohammad ◽ Wassim El-Hajj ◽ Gerard Parr ◽ Bryan Scotney

A critical requirement in Mobile Ad Hoc Networks (MANETs) is the ability to automatically discover existing services as well as their locations. Several solutions have been proposed in various communication domains, which can be classified into two categories: (1) directory-based and (2) directory-less. The former is efficient but suffers from the volume of control messages exchanged to keep all directories consistent in an agile environment. The latter avoids directory-maintenance traffic by simply broadcasting discovery messages, which is also undesirable in MANETs. This work builds on our prior work (Nazeeruddin et al. in IFIP/IEEE International Conference on Management of Multimedia Networks and Services, Springer, Berlin, 2006), where we introduced a new efficient protocol for service discovery in MANETs (MSLD): a lightweight, robust, scalable, and flexible protocol that supports node heterogeneity and dynamically adapts to network changes without flooding the network with extra protocol messages, a major challenge in today's network environments such as the Internet of Things (IoT). An extensive simulation study was conducted on MSLD to (1) evaluate its performance in terms of latency, service availability, and overhead messages, and then (2) compare its performance to the Dir-Based, Dir-less, and PDP protocols under various network conditions. For most performance metrics, simulation results show that MSLD outperforms Dir-Based, Dir-less, and PDP by matching or achieving higher service availability and lower service discovery latency with considerably less communication overhead.


2022
Author(s): Song Tang ◽ Zhiqiang Wang ◽ Jian Jiang ◽ Suli Ge ◽ GaiFang Tan

With the continuous development of blockchain technology and the emergence of new application scenarios, consensus algorithms remain the bottleneck restricting the number of network nodes and the data-writing efficiency that a blockchain can support. How to improve the performance of consortium (alliance) blockchains safely and efficiently is therefore an urgent problem. The practical Byzantine fault tolerance (PBFT) algorithm commonly used in consortium blockchains suffers from large communication overhead, a simplistic method of selecting the primary node, and the inability to add or remove nodes dynamically. This paper proposes an improved algorithm, tPBFT (trust-based practical Byzantine fault tolerance), suited to the high-frequency transaction scenarios of consortium chains. It introduces a trust-interest scoring mechanism between network nodes to adjust the list of consensus nodes dynamically, simplify the PBFT consensus process, and reduce the interaction overhead between nodes. Theoretical analysis and experiments show that the improved tPBFT algorithm can effectively reduce the amount of information exchanged between nodes, improve consensus efficiency, and support more network nodes.
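The trust-interest idea, scoring each node's behaviour and letting the score drive the consensus node list, can be sketched as follows. The update rule and thresholds are assumptions for illustration; the paper's exact scoring formula is not reproduced here.

```python
def update_trust(score, behaved_correctly, reward=1.0, penalty=5.0, decay=0.98):
    """Update one node's trust-interest score after a consensus round.

    reward, penalty and decay are assumed constants, not the paper's values.
    """
    score = score * decay + (reward if behaved_correctly else -penalty)
    return max(score, 0.0)

def select_consensus_nodes(trust_scores, committee_size, min_trust=1.0):
    """Dynamically rebuild the consensus node list from trust scores.

    Nodes below min_trust are excluded; the most trusted nodes form the
    committee, so misbehaving nodes are gradually pushed out of consensus.
    """
    eligible = [(node, s) for node, s in trust_scores.items() if s >= min_trust]
    eligible.sort(key=lambda item: item[1], reverse=True)
    return [node for node, _ in eligible[:committee_size]]
```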


Electronics ◽ 2022 ◽ Vol 11 (1) ◽ pp. 157
Author(s): Nirmala Devi Kathamuthu ◽ Annadurai Chinnamuthu ◽ Nelson Iruthayanathan ◽ Manikandan Ramachandran ◽ Amir H. Gandomi

The healthcare industry is being transformed by the Internet of Things (IoT), which provides wide connectivity among physicians, medical devices, clinical and nursing staff, and patients to simplify real-time monitoring. Because the network is vast and heterogeneous, gathering and sharing information presents both opportunities and challenges. Since medical devices handle sensitive patient information such as health status, they must be protected to ensure safety and privacy. Healthcare information is shared confidentially among experts to analyze conditions and provide timely treatment to patients. Cryptographic and biometric systems, including deep-learning (DL) techniques, are widely used to authenticate users, detect anomalies, and secure medical systems. As the sensors in the network are energy-restricted devices, security and efficiency must be balanced, which is the most important consideration when deploying a security system based on deep-learning approaches. Hence, in this work, an innovative framework, the deep Q-learning-based neural network with privacy preservation method (DQ-NNPP), was designed to protect data transmission from external threats with less encryption and decryption time. This method is used to process patient data, which reduces network traffic as well as the cost and error of communication. The proposed model outperformed standard approaches such as the secure and anonymous biometric-based user authentication scheme (SAB-UAS), MSCryptoNet, and privacy-preserving disease prediction (PPDP). Specifically, the proposed method achieved an accuracy of 93.74%, sensitivity of 92%, specificity of 92.1%, communication overhead of 67.08%, an encryption time of 58.72 ms, and a decryption time of 62.72 ms.


2021 ◽ Vol 2021 ◽ pp. 1-9
Author(s): Qiang Yang ◽ Daofeng Li

Digital signatures are a crucial network security technology. However, in traditional public key signature schemes, certificate management is complicated and the schemes are vulnerable to public key replacement attacks. To solve these problems, we propose a self-certified signature scheme over a lattice. Using a self-certified public key, our scheme allows a user to certify the public key without an extra certificate, which reduces the communication overhead and computational cost of the signature scheme. Moreover, the lattice-based construction helps resist quantum computing attacks. Based on the small integer solution problem, our scheme is provably secure in the random oracle model. Furthermore, compared with previous self-certified signature schemes, our scheme is more secure.


2021 ◽ Vol ahead-of-print (ahead-of-print)
Author(s): Jyothi N. ◽ Rekha Patil

Purpose: This study aims to develop a trust mechanism in a vehicular ad hoc network (VANET), based on optimized deep learning, for selfish node detection.

Design/methodology/approach: The authors built a deep learning-based optimized trust mechanism that removes malicious content generated by selfish VANET nodes. The framework combines a Deep Belief Network with the Red Fox Optimization algorithm. A novel deep learning-based optimized model is developed to identify the type of vehicle under non-line-of-sight (nLoS) conditions. The authentication scheme satisfies both the security and privacy goals of the VANET environment. Message authenticity and integrity are verified using the vehicle's location to determine the trust level; the location is verified via distance and time, identifying whether the sender is at its actual location based on the elapsed time and distance.

Findings: The deep learning-based optimized trust model is used to detect obstacles present in both line-of-sight and nLoS conditions to reduce the accident rate. Compared with previous methods, the experimental results show better prediction performance in terms of accuracy, precision, recall, computational cost and communication overhead.

Practical implications: The experiments are conducted using the Network Simulator Version 2 and evaluated using performance metrics including computational cost, accuracy, precision, recall and communication overhead, under a simple attack and an opinion-tampering attack. The proposed method provides better results on these metrics than existing methods such as k-nearest neighbor and artificial neural networks, and is therefore highly robust against simple and opinion-tampering attacks.

Originality/value: This paper proposes a deep learning-based optimized trust framework for trust prediction in VANETs. The model is used to evaluate both event message senders and event message integrity and accuracy.
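The distance-and-time check described in the methodology can be illustrated with a simple plausibility test: a claimed position is accepted only if it is reachable from the sender's previously verified position within the elapsed time. The speed bound below is an assumed value, and the actual trust level in the study also depends on the optimized deep-belief-network output.

```python
import math

def plausible_location(prev_pos, prev_time, claimed_pos, msg_time, max_speed_mps=55.0):
    """Return True if the claimed position is reachable in the elapsed time.

    prev_pos/claimed_pos are (x, y) coordinates in metres, times in seconds;
    max_speed_mps (~200 km/h) is an assumed bound, not a value from the study.
    """
    elapsed = msg_time - prev_time
    if elapsed <= 0:
        return False   # stale or replayed timestamp
    distance = math.hypot(claimed_pos[0] - prev_pos[0], claimed_pos[1] - prev_pos[1])
    return distance <= max_speed_mps * elapsed
```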

