processing delay
Recently Published Documents

TOTAL DOCUMENTS: 92 (five years: 29)
H-INDEX: 13 (five years: 2)

2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Yong Wang ◽  
Siyu Tang ◽  
Xiaorong Zhu ◽  
Yonghua Xie

In this paper, we propose a novel multitask scheduling and distributed collaborative computing method for delay-sensitive services with quality-of-service (QoS) guarantees in the Internet of Things (IoT). First, we propose a multilevel scheduling framework combining process and thread scheduling to reduce the processing delay of multiple service types at a single edge node: a preemptive static-priority process scheduling algorithm is adopted for different types of services, and a dynamic-priority thread scheduling algorithm is proposed for highly concurrent services of the same type. Furthermore, to reduce the processing delay of computation-intensive services, we propose a distributed task offloading algorithm based on a multiple 0-1 knapsack model with value limitation, in which multiple edge nodes collaborate to minimize the processing delay. Simulation results show that the proposed method significantly reduces not only the scheduling delay of large numbers of time-sensitive services at a single edge node but also the processing delay of computation-intensive services handled collaboratively by multiple edge nodes.
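The distributed offloading step can be illustrated with a toy multiple-knapsack assignment. The sketch below conveys only the general idea, not the paper's algorithm: the greedy heuristic, the node model (capacity in CPU cycles, speed in cycles per millisecond), and all names are assumptions made for this example.

```python
# Illustrative greedy offloading of tasks to edge nodes, loosely modeled
# on a multiple 0-1 knapsack. Not the paper's algorithm.

def offload_tasks(tasks, nodes):
    """tasks: list of (task_id, cycles).
    nodes: dict node_id -> (capacity_cycles, cycles_per_ms).
    Returns a dict task_id -> node_id (None if no node can host the task)."""
    load = {n: 0 for n in nodes}          # cycles already assigned per node
    assignment = {}
    # Place the largest tasks first so big items do not get stranded.
    for task_id, cycles in sorted(tasks, key=lambda t: -t[1]):
        best, best_finish = None, float("inf")
        for n, (cap, speed) in nodes.items():
            if load[n] + cycles > cap:
                continue                   # knapsack capacity constraint
            finish = (load[n] + cycles) / speed  # completion time (ms)
            if finish < best_finish:
                best, best_finish = n, finish
        if best is not None:
            load[best] += cycles
        assignment[task_id] = best
    return assignment
```

For example, with two nodes of equal capacity but different speeds, the largest task lands on the fast node and the rest fill the slower one as capacity allows.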


2021 ◽  
Author(s):  
Danyang Zheng ◽  
Gangxiang Shen ◽  
Xiaojun Cao ◽  
Biswanath Mukherjee

Emerging 5G technologies can significantly reduce end-to-end service latency for applications requiring strict quality of service (QoS). With network function virtualization (NFV), completing a client’s request may require the client’s data to pass sequentially through multiple service functions (SFs) for processing/analysis, which introduces additional processing delay. To reduce the processing delay of serially running SFs, network function parallelism (NFP), which allows multiple SFs to run in parallel, has been introduced. In this work, we study how to apply NFP to the SF chaining and embedding process such that the total latency, including processing and propagation delays, is jointly minimized. We introduce a novel augmented graph to capture the parallel-relationship constraints among the required SFs and, based on these constraints, formulate a novel problem called parallelism-aware service function chaining and embedding (PSFCE). For this problem, we propose a near-optimal maximum parallel block gain (MPBG) first optimization algorithm for the case in which the computing resources at each physical node are sufficient to host the required SFs. When computing resources are limited, we propose a logarithm-approximate algorithm, called parallelism-aware SF deployment (PSFD), to jointly optimize processing and propagation delays. We conduct extensive simulations on multiple network scenarios to evaluate the performance of our schemes. We find that (i) MPBG is near-optimal, (ii) end-to-end service latency depends largely on processing delay in small networks and is affected more by propagation delay in large networks, and (iii) PSFD outperforms schemes directly extended from existing work in terms of end-to-end latency.
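The latency benefit of NFP comes from a simple fact: a parallel block finishes when its slowest SF finishes, while blocks in sequence add up. A minimal sketch of this delay model follows; the list-of-blocks representation is an assumption for illustration, and this is not the MPBG or PSFD algorithm.

```python
# Illustrative processing-delay model for a service function chain with
# network function parallelism: SFs inside a block run in parallel,
# blocks run in sequence.

def chain_processing_delay(blocks):
    """blocks: list of lists of per-SF processing delays (ms).
    Each inner list is a parallel block; blocks execute serially."""
    # A parallel block completes with its slowest member (max);
    # the chain traverses blocks one after another (sum).
    return sum(max(block) for block in blocks)

serial = chain_processing_delay([[3], [5], [2], [4]])    # fully serial chain
parallel = chain_processing_delay([[3, 5], [2, 4]])      # two parallel blocks
```

With the same four SFs, the fully serial chain costs 3+5+2+4 = 14 ms, while grouping them into two parallel blocks costs max(3,5)+max(2,4) = 9 ms.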


IoT ◽  
2021 ◽  
Vol 2 (4) ◽  
pp. 549-563
Author(s):  
Andrew John Poulter ◽  
Simon J. Cox

This paper assesses the performance of the MQTT protocol relative to the Secure Remote Update Protocol (SRUP) under a number of simulated real-world conditions and describes an experiment conducted to measure the processing delay associated with using the more secure protocol. Experimental measurements of the devices’ power consumption and the size of comparable TCP packets were also made. Analysis shows that the SRUP protocol added a processing delay of between 42.92 ms and 51.60 ms, depending on the specific hardware in use. Running the secure SRUP protocol also increased power consumption by 55.47% compared with an MQTT implementation.


Author(s):  
Hiroshi Katada ◽  
Taku Yamazaki ◽  
Takumi Miyoshi ◽  
Shigeru Shimamoto ◽  
Yoshiaki Tanaka
Keyword(s):  

2021 ◽  
Vol 25 ◽  
pp. 233121652110161
Author(s):  
Julian Angermeier ◽  
Werner Hemmert ◽  
Stefan Zirn

Users of a cochlear implant (CI) in one ear who are provided with a hearing aid (HA) in the contralateral ear, so-called bimodal listeners, are typically affected by a constant and relatively large interaural time offset caused by differences in signal processing and stimulation. For HA stimulation, the cochlear travelling-wave delay is added to the processing delay, while for CI stimulation the auditory nerve fibers are stimulated directly. In the case of MED-EL CI systems combined with different HA types, the CI stimulation precedes the acoustic HA stimulation by 3 to 10 ms. A self-designed, battery-powered, portable, and programmable delay line was applied to the CI to reduce this device delay mismatch in nine bimodal listeners. Using an A-B-B-A test design, we determined whether sound source localization improves when the device delay mismatch is reduced by delaying the CI stimulation by the HA processing delay (τHA). Results revealed that every subject in our group of nine bimodal listeners benefited from the approach. The root-mean-square error of sound localization improved significantly from 52.6° to 37.9°. The signed bias also improved significantly, from 25.2° to 10.5°, with positive values indicating a bias toward the CI. Furthermore, two other delay values (τHA –1 ms and τHA +1 ms) were applied, and with the latter the signed bias was further reduced in some test subjects. We conclude that sound source localization accuracy in bimodal listeners improves instantaneously and sustainably when the device delay mismatch is reduced.
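The delay line's behavior can be sketched as a fixed-length FIFO on the sample stream, so the output lags the input by a configured number of milliseconds. This is an illustrative software model, not the hardware delay line used in the study; the class name and parameters are assumptions.

```python
# Illustrative digital delay line: output lags input by a fixed number
# of samples (delay_ms * sample_rate), the first outputs being silence.

from collections import deque

class DelayLine:
    def __init__(self, delay_ms, sample_rate_hz):
        n = round(delay_ms * sample_rate_hz / 1000.0)
        # Pre-fill with zeros so the first n outputs are silence.
        self.buf = deque([0.0] * n, maxlen=n + 1)

    def process(self, sample):
        """Push one input sample, pop the sample delayed by n positions."""
        self.buf.append(sample)
        return self.buf.popleft()
```

For example, a 2 ms delay at a 1 kHz sample rate holds each sample for two ticks before it appears at the output.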


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
K. Vidyasankar

A Fog Computing architecture consists of edge nodes that generate and possibly pre-process (sensor) data, fog nodes that perform time-critical processing and any needed actuations, and cloud nodes that may perform further detailed analysis for long-term and archival purposes. The processing of a batch of input data is distributed into sub-computations that are executed at the different nodes of the architecture. In many applications, the computations are expected to preserve the order in which the batches arrive at the sources. In this paper, we discuss mechanisms for performing the computations at a node in the correct order by storing some batches temporarily and/or dropping some batches. The former option delays processing, and the latter affects quality of service (QoS). We bring out the trade-offs between processing delay and the storage capabilities of the nodes, and between QoS and those storage capabilities.
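The store-or-drop trade-off can be sketched as follows: out-of-order batches are buffered up to a storage limit and dropped once the buffer is full, with later batches stepping over the holes left by drops. This simplified model (sequence numbers, a per-node buffer capacity) is an assumption made for illustration, not the paper's mechanism.

```python
# Illustrative in-order batch handling at a node: buffer out-of-order
# batches up to a capacity, drop when storage is exhausted.

def process_in_order(arrivals, buffer_capacity):
    """arrivals: iterable of batch sequence numbers in arrival order.
    Returns (processed, dropped) lists of sequence numbers."""
    expected = 0
    held = {}                      # out-of-order batches kept in storage
    skipped = set()                # holes left by dropped batches
    processed, dropped = [], []
    for seq in arrivals:
        if seq == expected:
            processed.append(seq)
            expected += 1
        elif len(held) < buffer_capacity:
            held[seq] = seq        # buffer: adds delay, preserves order
        else:
            dropped.append(seq)    # storage exhausted: drop (QoS loss)
            skipped.add(seq)
        # Emit buffered successors in order, stepping over dropped ones.
        while expected in held or expected in skipped:
            if expected in held:
                processed.append(held.pop(expected))
            skipped.discard(expected)
            expected += 1
    return processed, dropped
```

With arrivals [0, 2, 1, 3], one slot of storage preserves the full order, while zero storage forces batch 2 to be dropped so that batch 3 can still be processed in order.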


IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Christiaan E. Lokin ◽  
Daniel Schinkel ◽  
Ronan A.R. Van der Zee ◽  
Bram Nauta
