Predicting Bandwidth Utilization on Network Links Using Machine Learning

Author(s):  
Maxime Labonne ◽  
Charalampos Chatzinakis ◽  
Alexis Olivereau
2020 ◽  
Author(s):  
Riya Tapwal ◽  
Nitin Gupta ◽  
Qin Xin

IoT devices (wireless sensors, actuators, computing devices) produce a large volume and variety of data, and the data they produce are transient. To overcome the limitation of the traditional IoT architecture, in which all data are sent to the cloud for processing, an emerging technology known as fog computing has recently been proposed. Fog computing brings storage, computation, and control close to the end devices; it complements the cloud and provides services to IoT devices. Data used by IoT devices should therefore be cached at the fog nodes to reduce bandwidth utilization and latency. This chapter discusses the utility of data caching at the fog nodes. Further, various machine learning techniques can reduce latency by predicting the future demands of IoT devices and caching the corresponding data nearby. The chapter therefore also discusses machine learning techniques that can be used to extract accurate data and predict the future requests of IoT devices.
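To make the caching idea concrete, here is a minimal Python sketch of demand-predictive caching at a fog node. It is illustrative only: a simple exponentially weighted request-frequency score stands in for the chapter's machine learning predictor, and all identifiers (FogCache, fetch_from_cloud, alpha) are hypothetical.

```python
from collections import defaultdict

class FogCache:
    """Sketch of demand-predictive caching at a fog node.

    A real deployment would plug in a trained ML model; here an
    exponentially weighted request frequency stands in as the
    'predicted future demand' score.
    """

    def __init__(self, capacity=100, alpha=0.3):
        self.capacity = capacity          # max items held at the fog node
        self.alpha = alpha                # smoothing factor for the demand estimate
        self.store = {}                   # item_id -> cached data
        self.demand = defaultdict(float)  # item_id -> predicted demand score

    def request(self, item_id, fetch_from_cloud):
        # Decay every known item's demand estimate, then boost the
        # item that was just requested.
        for key in self.demand:
            self.demand[key] *= (1 - self.alpha)
        self.demand[item_id] += self.alpha

        if item_id in self.store:         # cache hit: served locally, low latency
            return self.store[item_id]

        data = fetch_from_cloud(item_id)  # cache miss: costly backhaul transfer
        if len(self.store) >= self.capacity:
            # Evict the item with the lowest predicted future demand.
            victim = min(self.store, key=lambda k: self.demand[k])
            del self.store[victim]
        self.store[item_id] = data
        return data
```

Replacing the frequency score with the output of a trained model (for example, a recurrent predictor over per-item request histories) yields the ML-driven variant the chapter describes.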


2019 ◽  
Vol 29 (12) ◽  
pp. 121104 ◽  
Author(s):  
Amitava Banerjee ◽  
Jaideep Pathak ◽  
Rajarshi Roy ◽  
Juan G. Restrepo ◽  
Edward Ott

2021 ◽  
Vol 18 (4) ◽  
pp. 1-24
Author(s):  
Sriseshan Srikanth ◽  
Anirudh Jain ◽  
Thomas M. Conte ◽  
Erik P. Debenedictis ◽  
Jeanine Cook

Sparse data applications have irregular access patterns that stymie modern memory architectures. Although hyper-sparse workloads have received considerable attention in the past, the moderately sparse workloads prevalent in machine learning, graph processing, and HPC have not. Whereas the former can bypass the cache hierarchy entirely, the latter fit in the cache. This article makes the observation that intelligent, near-processor cache management can improve bandwidth utilization for data-irregular accesses, thereby accelerating moderately sparse workloads. We propose SortCache, a processor-centric approach to accelerating sparse workloads by introducing accelerators that leverage the on-chip cache subsystem, with minimal programmer intervention.
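SortCache itself is a hardware mechanism, but the access-reordering idea behind it can be illustrated in software. The NumPy sketch below (illustrative only, not the authors' implementation) groups irregular accesses by target address before accumulating; the result is identical, while memory is touched in a cache-friendly order. The bandwidth benefit manifests at the memory-system level rather than in NumPy timings.

```python
import numpy as np

# Moderately sparse gather/scatter: acc[idx[i]] += vals[i] with irregular idx.
rng = np.random.default_rng(0)
table_size = 1 << 20
idx = rng.integers(0, table_size, 1 << 18)   # irregular access pattern
vals = rng.random(idx.size)

# Unsorted accumulation touches cache lines in random order.
acc_unsorted = np.zeros(table_size)
np.add.at(acc_unsorted, idx, vals)

# Reordering the accesses by target address groups hits to the same
# cache line together, improving effective memory bandwidth.
order = np.argsort(idx, kind="stable")
acc_sorted = np.zeros(table_size)
np.add.at(acc_sorted, idx[order], vals[order])

assert np.allclose(acc_unsorted, acc_sorted)  # same result, friendlier access order
```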


2021 ◽  
Vol 13 (2) ◽  
pp. 54
Author(s):  
Yazhi Liu ◽  
Jiye Zhang ◽  
Wei Li ◽  
Qianqian Wu ◽  
Pengmiao Li

Data centers host a growing number of background services for various applications, and the data flows transmitted between nodes in data center networks (DCNs) grow accordingly. At the same time, the traffic on each link in a DCN changes dynamically over time. Flow scheduling algorithms can improve the distribution of data flows among the network links so as to balance link loads in a DCN. However, most existing load balancing approaches make flow scheduling decisions for the current links on the basis of past link conditions. This prevents them from making optimal decisions when scheduling data flows among the links of a DCN. This paper proposes a predictive link load balance routing algorithm for a DCN based on residual networks (ResNet), i.e., the link load balance route (LLBR) algorithm. The LLBR algorithm predicts the occupancy of the network links in the next duty cycle using the ResNet architecture, and then selects the optimal traffic route according to the predicted network state. The LLBR algorithm, round-robin scheduling (RRS), and weighted round-robin scheduling (WRRS) were evaluated in the same experimental environment. Experimental results show that, compared with WRRS and RRS, the LLBR algorithm reduces transmission time by approximately 50%, reduces the packet loss rate from 0.05% to 0.02%, and improves bandwidth utilization by 30%.
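As a rough illustration of the LLBR decision logic (not the paper's implementation), the Python sketch below forecasts per-link occupancy for the next duty cycle and routes along the path with the least-loaded bottleneck. An exponentially weighted average stands in for the paper's ResNet predictor, and all identifiers are hypothetical.

```python
import numpy as np

def predict_link_load(history):
    """Stand-in for the ResNet predictor: forecasts each link's occupancy
    in the next duty cycle from its recent history. A simple weighted
    average over the last three cycles is used for illustration."""
    weights = np.array([0.5, 0.3, 0.2])           # most recent cycle first
    return history[:, :3] @ weights               # one forecast per link

def choose_route(candidate_routes, history):
    """Pick the route whose most-loaded link is predicted to be least
    utilized next cycle (minimize the predicted bottleneck)."""
    forecast = predict_link_load(history)
    def bottleneck(route):
        return max(forecast[link] for link in route)
    return min(candidate_routes, key=bottleneck)

# Toy example: 4 links, 3 past cycles of utilization (most recent first).
history = np.array([
    [0.9, 0.8, 0.7],   # link 0: heavily loaded
    [0.2, 0.3, 0.2],   # link 1
    [0.4, 0.5, 0.4],   # link 2
    [0.1, 0.2, 0.1],   # link 3
])
routes = [[0, 2], [1, 3]]             # candidate paths as lists of link ids
print(choose_route(routes, history))  # -> [1, 3], avoids the congested link 0
```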


2020 ◽  
Vol 43 ◽  
Author(s):  
Myrthe Faber

Gilead et al. state that abstraction supports mental travel, and that mental travel critically relies on abstraction. I propose an important addition to this theoretical framework, namely that mental travel might also support abstraction. Specifically, I argue that spontaneous mental travel (mind wandering), much like data augmentation in machine learning, provides variability in mental content and context necessary for abstraction.


2020 ◽  
Author(s):  
Mohammed J. Zaki ◽  
Wagner Meira, Jr

2020 ◽  
Author(s):  
Marc Peter Deisenroth ◽  
A. Aldo Faisal ◽  
Cheng Soon Ong
