network latency
Recently Published Documents


TOTAL DOCUMENTS

170
(FIVE YEARS 60)

H-INDEX

13
(FIVE YEARS 3)

Sensors ◽  
2021 ◽  
Vol 21 (20) ◽  
pp. 6741
Author(s):  
Mohamad Rida Mortada ◽  
Abbass Nasser ◽  
Ali Mansour ◽  
Koffi-Clément Yao

In cognitive radio wireless sensor networks (CRSN), the nodes act as secondary users. Therefore, they can access a channel whenever its primary user (PU) is absent. Thus, the nodes are assumed to be equipped with a spectrum sensing (SS) module to monitor the PU activity. In this manuscript, we focus on a clustered CRSN, where the cluster head (CH) performs SS, gathers the data, and sends it toward a central base station by adopting an ad hoc topology with in-network data aggregation (IDA) capability. In such networks, when the number of clusters increases, the energy consumed by data transmission decreases, while the total energy consumed by SS increases, since more CHs need to perform SS before transmitting. The effect of IDA on CRSN performance is investigated in this manuscript. To select the best number of clusters, a study is carried out aiming to extend the network lifespan, taking into consideration the SS requirements, the IDA effect, and the energy consumed by both SS and transmission. Furthermore, the collision rate between primary and secondary transmissions and the network latency are theoretically derived. Numerical results corroborate the efficiency of IDA in extending the network lifespan and minimizing both the collision rate and the network latency.
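The trade-off the abstract describes can be sketched numerically. The following is an illustrative model with hypothetical constants, not the paper's exact energy formulas: sensing energy grows linearly with the number of clusters k (every CH senses the spectrum), while transmission energy shrinks with k (IDA compresses data at each CH), so the total has a minimum at some intermediate k.

```python
# Illustrative sketch (hypothetical constants, not the paper's model):
# per-round energy as a function of the number of clusters k.
def total_energy(k, e_ss=2.0, e_tx_base=100.0):
    """Total energy per round for k clusters (k >= 1)."""
    sensing = e_ss * k            # each cluster head performs spectrum sensing
    transmission = e_tx_base / k  # aggregation shrinks per-cluster payloads
    return sensing + transmission

# Pick the cluster count that minimizes total energy over a candidate range.
best_k = min(range(1, 51), key=total_energy)
```

With these sample constants the minimum falls at a moderate cluster count; the paper's point is that the optimum shifts once SS requirements and the IDA effect are modeled jointly.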


2021 ◽  
Author(s):  
Michael Enbibel

This research optimizes a telemedicine framework for smart healthcare systems by using fogging, or fog computing. Fog computing is used to address the issues that arise in the telemedicine framework of a smart healthcare system, such as infrastructure, implementation, acceptance, data management, security, bottlenecked system organization, and network latency. We mainly use the Distributed Data Flow (DDF) method over fog computing in order to fully address the listed issues.
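The abstract does not detail the DDF method, but the general idea can be hinted at with a minimal, hypothetical sketch: a telemedicine pipeline is expressed as data-flow stages, each pinned to the tier (sensor, fog, or cloud) where it runs, so latency-sensitive steps stay near the patient. All stage names and placements below are illustrative assumptions, not from the paper.

```python
# Hypothetical Distributed Data Flow (DDF) sketch: pipeline stages with
# explicit tier placement. Stage names and tiers are assumed for illustration.
PIPELINE = [
    ("read_vitals", "sensor"),    # collect raw patient samples
    ("filter_noise", "fog"),      # latency-sensitive preprocessing at the edge
    ("detect_anomaly", "fog"),    # raise local alerts without a cloud round trip
    ("archive_records", "cloud"), # long-term storage and analytics
]

def stages_on(tier):
    """Return the stage names deployed on a given tier."""
    return [name for name, placed in PIPELINE if placed == tier]

fog_stages = stages_on("fog")
```

Placing the time-critical stages on fog nodes is what removes the network-latency bottleneck the abstract lists.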


Author(s):  
Hongsheng Yang ◽  
Jianwei Liu

In the Mobile Crowdsensing Scenario (MCS), most mobile devices transfer data to each other by relying on encounter opportunities. Energy consumption and latency are the key network indicators in such application scenarios. The idle neighbor scanning and listening mechanism of mobile devices usually consumes energy that could be saved; keeping devices working at a low duty cycle can avoid this waste effectively, but it introduces serious network latency. To address this, a duty cycle strategy with lower latency is the focus of this paper. A method, named Low Latency Duty Cycle (DC) with MSFO, is proposed to reduce network latency, mainly by comparing the sizes of the data packets to be transmitted by each device. In addition, small data packets are given priority in the transmission queue to enhance network performance. Extensive simulation results show that the proposed method can significantly reduce network latency in an MCS with a duty-cycle strategy.
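The smallest-packet-first queueing the abstract mentions can be sketched as a size-ordered priority queue. This is a minimal illustration of the scheduling idea, not the authors' full MSFO algorithm; the class and its interface are assumptions.

```python
import heapq

# Minimal sketch (an assumption, not the paper's exact algorithm): a
# transmission queue in which smaller packets are sent first, so short
# messages are not stuck behind large transfers during brief encounters.
class SizePriorityQueue:
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker preserves insertion order for equal sizes

    def push(self, packet: bytes):
        heapq.heappush(self._heap, (len(packet), self._seq, packet))
        self._seq += 1

    def pop(self) -> bytes:
        return heapq.heappop(self._heap)[2]

q = SizePriorityQueue()
q.push(b"x" * 1500)  # large data packet queued first
q.push(b"ack")       # small control packet queued second
first = q.pop()      # the 3-byte packet is transmitted first
```

Because encounter windows in an MCS are short, letting small packets jump the queue lowers the latency of the many short transfers at little cost to the few large ones.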


2021 ◽  
Author(s):  
Yuechen Chen ◽  
Shanshan Liu ◽  
Fabrizio Lombardi ◽  
Ahmed Louri

Approximation is an effective technique for reducing the power consumption and latency of on-chip communication in many computing applications. However, existing approximation techniques either achieve modest improvements in these metrics or require retraining after approximation, such as when convolutional neural networks (CNNs) are employed. Since classifying many images introduces intensive on-chip communication, reductions in both network latency and power consumption are highly desired. In this paper, we propose an approximate communication technique (ACT) to improve the efficiency of on-chip communications for image classification applications. The proposed technique exploits the error tolerance of the image classification process to reduce the power consumption and latency of on-chip communications, resulting in better overall performance for image classification computation. This is achieved by incorporating novel quality control and data approximation mechanisms that reduce the packet size. In particular, the proposed quality control mechanisms identify the error-resilient variables and automatically adjust the error thresholds of the variables based on the image classification accuracy. The proposed data approximation mechanisms significantly reduce packet size when the variables are transmitted. The proposed technique reduces the number of flits in each data packet as well as the on-chip communication, while maintaining excellent image classification accuracy. Cycle-accurate simulation results show that ACT achieves a 23% reduction in network latency and a 24% reduction in dynamic power compared to the existing approximate communication technique, with less than 0.99% classification accuracy loss.
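One way the data-approximation idea can work is shown below. This is our simplified illustration, not the authors' exact mechanism: for an error-resilient variable, trailing low-order bits are dropped so the value fits in fewer flits, as long as the worst-case quantization error stays under that variable's error threshold.

```python
# Hedged illustration of threshold-bounded data approximation (a
# simplification, not ACT's actual encoder): truncate trailing bits of a
# non-negative integer while the worst-case error stays within `threshold`.
def approximate(value: int, bits: int = 32, threshold: int = 255):
    """Return (approximated value, number of low-order bits dropped)."""
    drop = 0
    # Dropping `drop` bits loses at most (1 << drop) - 1; grow while safe.
    while drop < bits and (1 << (drop + 1)) - 1 <= threshold:
        drop += 1
    return (value >> drop) << drop, drop

approx, saved = approximate(0xABCD1234)
```

A larger threshold (set by the quality-control loop from the observed classification accuracy) lets more bits be dropped, shrinking packets further.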


Electronics ◽  
2021 ◽  
Vol 10 (17) ◽  
pp. 2098
Author(s):  
Mumraiz Khan Kasi ◽  
Sarah Abu Ghazalah ◽  
Raja Naeem Akram ◽  
Damien Sauveron

Mobile edge computing is capable of providing high data processing capabilities while ensuring the low latency constraints of low-power wireless networks, such as the industrial internet of things. However, optimally placing edge servers (providing storage and computation services to user equipment) is still a challenge. To optimally place mobile edge servers in a wireless network, such that network latency is minimized and load balancing is performed on edge servers, we propose a multi-agent reinforcement learning (RL) solution to a formulated mobile edge server placement problem. The RL agents are designed to learn the dynamics of the environment and adopt a joint action policy that minimizes network latency and balances the load on edge servers. To ensure that the action policy adopted by the RL agents maximizes the overall network performance indicators, we propose sharing information, such as the latency experienced from each server and the load of each server, with the other RL agents in the network. Experimental results are obtained to analyze the effectiveness of the proposed solution. Although the sharing of information allows the proposed solution to achieve network-wide maximization of overall performance, it also makes it susceptible to different kinds of security attacks. To further investigate the security issues arising from the proposed solution, we provide a detailed analysis of the possible types of security attacks and their countermeasures.
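The joint objective the agents optimize can be hinted at with a small reward sketch. This formulation is our assumption, not necessarily the authors': each agent's reward penalizes both the mean latency reported by the servers and the load imbalance across them, using the shared per-server latency and load reports described in the abstract.

```python
# Hedged sketch of a joint reward (an assumed formulation): higher is
# better, trading off mean latency against edge-server load imbalance.
def reward(latencies, loads, alpha=0.5):
    """Combine mean latency and worst-case load deviation into one scalar."""
    mean_latency = sum(latencies) / len(latencies)
    mean_load = sum(loads) / len(loads)
    imbalance = max(abs(load - mean_load) for load in loads)
    return -(alpha * mean_latency + (1 - alpha) * imbalance)

r_balanced = reward([10.0, 12.0], [5.0, 5.0])  # even load across servers
r_skewed = reward([10.0, 12.0], [9.0, 1.0])    # same latency, one hot server
```

Because the imbalance term depends on every server's reported load, an agent can only evaluate it with the shared information, which is exactly what opens the attack surface the paper analyzes.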


2021 ◽  
Author(s):  
Barry Jenkins ◽  
Francois Malassenet ◽  
John Scott ◽  
Kshitij Patel

We describe a new method of game streaming designed to address the limitations of video-based cloud gaming and graphics API command streaming approaches. The method is implemented as a game-engine protocol and plug-in called GPEG (Geometry Pump Engine Group). GPEG streams the game engine content as sub-assets, not video of gameplay or graphics API commands. GPEG encoding uses unique pre-processing algorithms which intelligently subdivide textured mesh assets into much smaller sub-assets based on their potential geometric and perceptual visibility. A thin, CPU-only server software process interactively streams the pre-encoded sub-asset packets to the client-side game engine using navigation-driven, visibility-based prefetch. Like command streaming, the method does not require a GPU on the server. By using predictive prefetch, the method overcomes network latency. In contrast to progressive download, the method is a true content stream, with adaptive scalability and intelligent caching that together enable interactive game engine content streaming over broadband.
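The navigation-driven, visibility-based prefetch can be sketched as follows. The cell layout, adjacency, and asset names here are hypothetical, not from GPEG: the server precomputes which sub-assets are potentially visible from each navigation cell and streams those for the current cell and the cells reachable next, hiding network latency behind player movement.

```python
# Illustrative prefetch sketch (hypothetical cells and assets, not GPEG's
# actual data): stream every sub-asset potentially visible from the current
# cell or an adjacent cell, skipping what the client already caches.
VISIBLE_FROM = {  # precomputed potentially-visible sub-assets per cell
    "cell_a": {"wall_1", "door_2"},
    "cell_b": {"door_2", "statue_3"},
    "cell_c": {"statue_3", "window_4"},
}
ADJACENT = {
    "cell_a": ["cell_b"],
    "cell_b": ["cell_a", "cell_c"],
    "cell_c": ["cell_b"],
}

def prefetch_set(current_cell, cached):
    """Sub-assets to stream: visible from the current cell or a neighbor."""
    needed = set(VISIBLE_FROM[current_cell])
    for nxt in ADJACENT[current_cell]:
        needed |= VISIBLE_FROM[nxt]
    return needed - cached

to_stream = prefetch_set("cell_b", cached={"door_2"})
```

Because the visibility sets are precomputed offline, the server's runtime work is pure set lookup and packet transmission, which is why a CPU-only server suffices.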

