data access latency
Recently Published Documents

TOTAL DOCUMENTS: 8 (FIVE YEARS: 1)
H-INDEX: 4 (FIVE YEARS: 0)

Sensors, 2018, Vol 18 (8), pp. 2611
Author(s): Theofanis Raptis, Andrea Passarella, Marco Conti

Maintaining critical data access latency requirements is an important challenge of Industry 4.0. Traditional, centralized industrial networks, which transfer data to a central network controller prior to delivery, may be incapable of meeting such strict requirements. In this paper, we exploit distributed data management to overcome this issue. Given a set of data, a set of consumer nodes, and the maximum access latency that consumers can tolerate, we consider a method for identifying and selecting a limited set of proxies in the network where data needed by the consumer nodes can be cached. The method aims to balance two requirements: keeping data access latency within the given constraints and keeping the number of selected proxies low. We implement the method and evaluate its performance using a network of WSN430 IEEE 802.15.4-enabled open nodes. Additionally, we validate a simulation model and use it for performance evaluation at larger scales and in more general topologies. We demonstrate that the proposed method (i) guarantees average access latency below the given threshold and (ii) outperforms traditional centralized and even distributed approaches.
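The proxy-selection problem described above (cover every consumer within a latency bound using as few proxies as possible) can be sketched as a greedy set cover. The following is a minimal illustrative sketch, not the authors' algorithm: it uses hop count as a stand-in for access latency, and all names (`select_proxies`, `hop_distances`) are hypothetical.

```python
from collections import deque

def hop_distances(adj, src):
    """BFS hop counts from src; hops stand in for access latency."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def select_proxies(adj, consumers, max_hops):
    """Greedy set cover: repeatedly pick the node that covers the
    most still-uncovered consumers within the latency bound."""
    coverage = {
        n: {c for c in consumers
            if hop_distances(adj, n).get(c, float("inf")) <= max_hops}
        for n in adj
    }
    uncovered, proxies = set(consumers), []
    while uncovered:
        best = max(coverage, key=lambda n: len(coverage[n] & uncovered))
        gained = coverage[best] & uncovered
        if not gained:
            raise ValueError("some consumers unreachable within max_hops")
        proxies.append(best)
        uncovered -= gained
    return proxies

# Toy line topology 0-1-2-3-4 with consumers at both ends:
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(select_proxies(adj, consumers={0, 4}, max_hops=2))  # [2]
```

On the toy line topology, the middle node covers both consumers within two hops, so a single proxy suffices; the greedy choice trades a small proxy count against the latency constraint, which is the balance the abstract describes.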


2017, Vol 2 (2), pp. 154-166
Author(s): Tahir Maqsood, Nikos Tziritas, Thanasis Loukopoulos, Sajjad A. Madani, Samee U. Khan, ...

2013, Vol 21 (3-4), pp. 123-136
Author(s): Stephen L. Olivier, Bronis R. de Supinski, Martin Schulz, Jan F. Prins

Task parallelism raises the level of abstraction in shared memory parallel programming to simplify the development of complex applications. However, task parallel applications can exhibit poor performance due to thread idleness, scheduling overheads, and work time inflation (additional time spent by threads in a multithreaded computation beyond the time required to perform the same work in a sequential computation). We identify the contributions of each factor to lost efficiency in various task parallel OpenMP applications and diagnose the causes of work time inflation in those applications. Increased data access latency can cause significant work time inflation in NUMA systems. Our locality framework for task parallel OpenMP programs mitigates this cause of work time inflation. Our extensions to the Qthreads library demonstrate that locality-aware scheduling can improve performance by up to 3X compared to the Intel OpenMP task scheduler.
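The locality-aware scheduling idea above (keep each task on the NUMA domain where its data resides, and steal from other domains only as a last resort) can be sketched with per-domain work queues. This is a hypothetical sketch of the general technique, not the Qthreads implementation; the class and method names are invented for illustration.

```python
from collections import deque

class LocalityScheduler:
    """Per-NUMA-domain task queues: a worker pops from its own
    domain's queue first and steals from another domain only when
    its local queue is empty."""

    def __init__(self, num_domains):
        self.queues = [deque() for _ in range(num_domains)]

    def push(self, task, domain):
        # Enqueue a task in the domain where its data resides.
        self.queues[domain].append(task)

    def pop(self, worker_domain):
        # Local-first dispatch preserves data locality, limiting the
        # work time inflation caused by remote-memory access on NUMA.
        if self.queues[worker_domain]:
            return self.queues[worker_domain].popleft()
        for q in self.queues:  # fall back to work stealing
            if q:
                return q.pop()  # steal from the tail of a remote queue
        return None

sched = LocalityScheduler(num_domains=2)
sched.push("task_a", domain=0)
sched.push("task_b", domain=1)
print(sched.pop(worker_domain=1))  # prints "task_b": local task wins
```

A locality-oblivious scheduler would hand `task_a` to whichever worker asks first, potentially forcing remote-memory accesses; the local-first policy is what avoids the data-access-latency penalty the abstract attributes to work time inflation.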


2010
Author(s): Javier Bueno, Xavier Martorell, Juan José Costa, Toni Cortés, Eduard Ayguadé, ...

Author(s): William Y. Chen, Scott A. Mahlke, Wen-mei W. Hwu, Tokuzo Kiyohara, Pohua P. Chang
