Two Use Cases of Machine Learning for SDN-Enabled IP/Optical Networks: Traffic Matrix Prediction and Optical Path Performance Prediction [Invited]

2018 ◽  
Vol 10 (10) ◽  
pp. D52 ◽  
Author(s):  
Gagan Choudhury ◽  
David Lynch ◽  
Gaurav Thakur ◽  
Simon Tse


2021 ◽
Vol 31 (2) ◽  
pp. 1-28
Author(s):  
Gopinath Chennupati ◽  
Nandakishore Santhi ◽  
Phill Romero ◽  
Stephan Eidenbenz

Hardware architectures become increasingly complex as compute capabilities grow toward exascale. We present the Analytical Memory Model with Pipelines (AMMP) of the Performance Prediction Toolkit (PPT). PPT-AMMP takes high-level source code and hardware architecture parameters as input and predicts the runtime of that code on the target hardware platform defined by those parameters. PPT-AMMP transforms the code into an (architecture-independent) intermediate representation, then (i) analyzes the basic block structure of the code, (ii) processes architecture-independent virtual memory access patterns, which it uses to build memory reuse distance distribution models for each basic block, and (iii) runs detailed basic-block-level simulations to determine hardware pipeline usage. PPT-AMMP uses machine learning and regression techniques to build the prediction models from small instances of the input code, then integrates them into a higher-order discrete-event simulation model of PPT running on the Simian PDES engine. We validate PPT-AMMP on four standard computational physics benchmarks and present a use case of hardware parameter sensitivity analysis to identify bottleneck hardware resources for different code inputs. We further extend PPT-AMMP to predict the performance of a scientific application code, namely the radiation transport mini-app SNAP. To this end, we analyze multi-variate regression models that accurately predict the reuse profiles and the basic block counts. We validate the predicted SNAP runtimes against actual measured times.
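The memory reuse distance distributions at the heart of PPT-AMMP can be illustrated with a short sketch. The Python snippet below is an assumption for illustration, not the PPT-AMMP implementation (production tools use tree-based algorithms rather than this naive O(n²) scan); it computes the reuse-distance histogram of a virtual memory access trace, where the reuse distance of an access is the number of distinct addresses touched since the previous access to the same address:

```python
from collections import defaultdict

def reuse_distance_histogram(trace):
    """Histogram of reuse distances for a sequence of memory addresses.

    An address seen for the first time has infinite reuse distance
    (a cold miss); otherwise the distance is the number of distinct
    addresses accessed strictly between its two consecutive uses.
    """
    last_seen = {}               # address -> index of its previous access
    histogram = defaultdict(int)
    for i, addr in enumerate(trace):
        if addr in last_seen:
            # Distinct addresses accessed between the two uses of `addr`.
            window = set(trace[last_seen[addr] + 1 : i])
            histogram[len(window)] += 1
        else:
            histogram["inf"] += 1    # cold miss: first touch
        last_seen[addr] = i
    return dict(histogram)

# Example: repeated stride pattern over a small working set
print(reuse_distance_histogram([0, 1, 2, 0, 1, 2, 3, 0]))
# -> {'inf': 4, 2: 3, 3: 1}
```

Fitting regression models to per-basic-block histograms like this one, collected on small input instances, is in spirit how runtimes can be extrapolated to larger runs.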


2021 ◽  
Vol 2 ◽  
pp. 564-574
Author(s):  
Andrea D'Amico ◽  
Stefano Straullu ◽  
Giacomo Borraccini ◽  
Elliot London ◽  
Stefano Bottacchi ◽  
...  

Electronics ◽  
2021 ◽  
Vol 10 (13) ◽  
pp. 1578
Author(s):  
Daniel Szostak ◽  
Adam Włodarczyk ◽  
Krzysztof Walkowiak

The rapid growth of network traffic drives the development of new network technologies. Artificial intelligence provides suitable tools for improving the network optimization methods in use today. In this paper, we propose a procedure for network traffic prediction. Based on the characteristics of optical networks (and other network technologies), we focus on predicting fixed bitrate levels, called traffic levels. We develop and evaluate two approaches based on different supervised machine learning (ML) methods: classification and regression. We examine four different ML models with various selected features. The tested datasets are based on real traffic patterns provided by the Seattle Internet Exchange Point (SIX). The obtained results are analyzed using a new quality metric, which allows researchers to find the best forecasting algorithm in terms of network resource usage and operational costs. Our research shows that regression provides better results than classification for all analyzed datasets. Additionally, the final choice of the most appropriate ML algorithm and model should depend on the network operator's expectations.
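To make the classification-versus-regression comparison concrete, here is a minimal scikit-learn sketch. It uses a synthetic traffic series in place of the SIX data, and the lag count, random-forest models, and 10 Gbit/s level granularity are illustrative assumptions rather than the paper's configuration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

# Synthetic stand-in for the SIX traffic series (the real dataset is not
# reproduced here); values are bitrates in Gbit/s with a daily cycle.
rng = np.random.default_rng(0)
t = np.arange(2000)
traffic = 50 + 20 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 3, t.size)

# Lagged samples as features: predict the next value from the last 12.
LAGS = 12
X = np.lib.stride_tricks.sliding_window_view(traffic, LAGS)[:-1]
y = traffic[LAGS:]

# Discretize bitrates into fixed "traffic levels" (here: 10 Gbit/s steps).
LEVEL = 10.0
y_level = np.ceil(y / LEVEL)

split = int(0.8 * len(X))
clf = RandomForestClassifier(random_state=0).fit(X[:split], y_level[:split])
reg = RandomForestRegressor(random_state=0).fit(X[:split], y[:split])

# The classifier predicts the level directly; the regressor predicts a
# bitrate that is rounded up to the nearest level afterwards.
level_from_reg = np.ceil(reg.predict(X[split:]) / LEVEL)
acc_clf = (clf.predict(X[split:]) == y_level[split:]).mean()
acc_reg = (level_from_reg == y_level[split:]).mean()
print(f"level accuracy, classifier: {acc_clf:.3f}, regressor: {acc_reg:.3f}")
```

Rounding levels up rather than to the nearest value reflects a common provisioning choice: under-predicting a traffic level risks congestion, while over-predicting only wastes some capacity.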


2021 ◽  
Vol 14 (10) ◽  
pp. 1797-1804
Author(s):  
Dimitrios Koutsoukos ◽  
Supun Nakandala ◽  
Konstantinos Karanasos ◽  
Karla Saur ◽  
Gustavo Alonso ◽  
...  

Deep Learning (DL) has created a growing demand for simpler ways to develop complex models and efficient ways to execute them. Thus, significant effort has gone into frameworks like PyTorch and TensorFlow to support a variety of DL models and to run them efficiently and seamlessly over heterogeneous and distributed hardware. Since these frameworks will continue improving given the predominance of DL workloads, it is natural to ask what else can be done with them. This is not a trivial question, since these frameworks are based on the efficient implementation of tensors, which are well adapted to DL but, in principle, to nothing else. In this paper, we explore to what extent Tensor Computation Runtimes (TCRs) can support non-ML data processing applications, so that other use cases can take advantage of the investments made in TCRs. In particular, we are interested in graph processing and relational operators: two use cases that are very different from ML, in high demand, and that complement well what TCRs can do today. Building on HUMMINGBIRD, a recent platform that converts traditional machine learning algorithms to tensor computations, we explore how to map selected graph processing and relational operator algorithms into tensor computations. Our vision is supported by the results: our code often outperforms custom-built C++ and CUDA kernels, while massively reducing the development effort and taking advantage of the cross-platform compilation capabilities of TCRs.
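The core idea of running non-ML workloads on tensor abstractions can be sketched in a few lines of PyTorch. This is not HUMMINGBIRD's generated code, just an illustration of how a graph algorithm (PageRank) and a relational selection reduce to tensor operations that a TCR can execute on any supported backend:

```python
import torch

# Toy directed graph as a dense adjacency matrix (rows = sources).
A = torch.tensor([[0, 1, 1, 0],
                  [0, 0, 1, 0],
                  [1, 0, 0, 1],
                  [0, 0, 1, 0]], dtype=torch.float32)

# PageRank as repeated tensor operations: normalize rows to get the
# transition matrix, then power-iterate on the rank vector.
out_deg = A.sum(dim=1, keepdim=True)
P = A / out_deg                        # row-stochastic transition matrix
d, n = 0.85, A.shape[0]
rank = torch.full((n,), 1.0 / n)
for _ in range(50):
    rank = (1 - d) / n + d * (P.t() @ rank)
print(rank)

# A relational selection/projection expressed the same way:
# SELECT col0 FROM table WHERE col1 > 5, with the table as a tensor.
table = torch.tensor([[1.0, 7.0], [2.0, 3.0], [4.0, 9.0]])
mask = table[:, 1] > 5                 # predicate as a boolean tensor
print(table[mask, 0])                  # filtered projection
```

Because every step is an ordinary tensor operation, the same code runs unchanged on CPU or GPU, which is the portability argument the paper makes for TCRs.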


2019 ◽  
Vol 212 ◽  
pp. 1210-1223 ◽  
Author(s):  
Wen Jiang ◽  
Xianjun Xing ◽  
Shan Li ◽  
Xianwen Zhang ◽  
Wenquan Wang
