Efficient Network Telemetry based on Traffic Awareness

2021 ◽  

Author(s):  
Cesar Gomez ◽  
Abdallah Shami ◽  
Xianbing Wang

Network Telemetry (NT) is a crucial component in today’s networks, as it provides network managers with important data about the status and behavior of network elements. NT data are then used to gain insights and rapidly take action to improve network performance or avoid its degradation. Intuitively, the more data are collected, the better for the network managers. However, gathering and transporting excessive NT data can have the opposite effect, leading to a paradox: the data that are supposed to help actually harm network performance. This motivates a novel NT framework that dynamically adjusts the rate at which NT data are transmitted. In this work, we present an NT scheme that is traffic-aware, meaning that network elements collect and send NT data based on the type of traffic they forward. The evaluation results of our Machine Learning-based mechanism show that it can reduce the network bandwidth overhead of a conventional NT scheme by over 75%.
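
As a rough illustration of the traffic-aware idea, the sketch below adjusts how often a network element exports NT data according to a predicted traffic class. The class names, thresholds, and push intervals are assumptions for illustration only; they are not the classifier or parameters used in the paper.

# Hypothetical sketch: choose the telemetry push interval per network element
# from a traffic-class prediction. Class names, intervals, and features are
# illustrative stand-ins, not the paper's actual configuration.
from dataclasses import dataclass

@dataclass
class FlowStats:
    avg_pkt_size: float   # bytes
    pkt_rate: float       # packets per second
    duration: float       # seconds

# Illustrative mapping: more volatile traffic gets finer-grained telemetry.
PUSH_INTERVAL_S = {"bulk": 30.0, "interactive": 5.0, "real_time": 1.0}

def classify_traffic(stats: FlowStats) -> str:
    """Stand-in for the paper's ML classifier (simple thresholds here)."""
    if stats.avg_pkt_size > 1000 and stats.duration > 10:
        return "bulk"            # e.g., backups, large transfers
    if stats.pkt_rate > 50:
        return "real_time"       # e.g., voice/video
    return "interactive"

def telemetry_interval(stats: FlowStats) -> float:
    """Return how often (in seconds) this element should export NT data."""
    return PUSH_INTERVAL_S[classify_traffic(stats)]

if __name__ == "__main__":
    print(telemetry_interval(FlowStats(avg_pkt_size=1400, pkt_rate=20, duration=60)))  # 30.0

A real deployment would replace the threshold rules with the trained model and feed it live per-flow statistics.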


2021 ◽  
Vol 60 (2) ◽  
pp. 403-415
Author(s):  
Mark Philp

The frequent references to the actors and events of the French Revolutionary and Napoleonic wars in the titles of the dance tunes of the period raise the question of how we should understand their significance. This article argues that the practice is one of a number of examples of music and song shaping people's lived experience and behavior in ways that were rarely fully conscious. Drawing on a range of music collections, diaries, and journals, the article argues that we need to recognize how significant aural dimensions were in shaping people's predisposition to favor the status quo in this period of heightened political controversy.


2021 ◽  
Vol 11 (15) ◽  
pp. 6787
Author(s):  
Jože M. Rožanec ◽  
Blaž Kažič ◽  
Maja Škrjanc ◽  
Blaž Fortuna ◽  
Dunja Mladenić

Demand forecasting is a crucial component of demand management, directly impacting manufacturing companies’ planning, revenues, and actors throughout the supply chain. We evaluate 21 baseline, statistical, and machine learning algorithms for forecasting smooth and erratic demand in a real-world use case. The product data were obtained from a European original equipment manufacturer targeting the global automotive industry market. Our research shows that global machine learning models achieve superior performance compared to local models. We show that forecast errors from global models can be constrained by pooling product data based on past demand magnitude. We also propose a set of metrics and criteria for a comprehensive understanding of demand forecasting models’ performance.
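
A minimal sketch of the pooling idea, assuming pools are formed by mean historical demand; the thresholds, pool names, and data below are hypothetical and not taken from the paper.

# Illustrative sketch of magnitude-based pooling: group products by the scale of
# their historical demand so that a separate global model can be trained per pool.
import numpy as np

def demand_pool(history: np.ndarray) -> str:
    """Assign a product to a pool based on its mean past demand (assumed cut-offs)."""
    mean_demand = float(np.mean(history))
    if mean_demand < 10:
        return "low"
    if mean_demand < 100:
        return "medium"
    return "high"

products = {
    "P1": np.array([2, 0, 5, 3, 1]),
    "P2": np.array([120, 150, 90, 200, 170]),
}
pools = {name: demand_pool(hist) for name, hist in products.items()}
print(pools)  # {'P1': 'low', 'P2': 'high'}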


Electronics ◽  
2021 ◽  
Vol 10 (15) ◽  
pp. 1774
Author(s):  
Ming-Chin Chuang ◽  
Chia-Cheng Yen ◽  
Chia-Jui Hung

Recently, with the increase in network bandwidth, various cloud computing applications have become popular, and such networks generate a large number of data packets. However, most existing network architectures cannot handle big data effectively, which necessitates an efficient mechanism to reduce task completion time when large amounts of data are processed in data center networks. Unfortunately, achieving the minimum task completion time in the Hadoop system is an NP-complete problem. Although many studies have proposed schemes for improving network performance, they have shortcomings that limit their effectiveness. For this reason, in this study, we propose a centralized solution, called the bandwidth-aware rescheduling (BARE) mechanism, for software-defined network (SDN)-based data center networks. BARE improves network performance by employing a prefetching mechanism and a centralized network monitor that collects global information, sorting out data-local processing, splitting tasks, and executing a rescheduling mechanism with a scheduler to reduce task completion time. Finally, we used simulations to demonstrate the effectiveness of our scheme. Simulation results show that our scheme outperforms existing schemes in terms of task completion time and the ratio of data locality.
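
The following is a hypothetical sketch of a bandwidth-aware scheduling decision in the spirit of BARE: prefer nodes that already hold a task's input data, otherwise fall back to the node with the most spare bandwidth reported by a central monitor. The data structures and tie-breaking rule are assumptions, not the paper's actual algorithm.

# Hypothetical scheduling decision: data locality first, then spare bandwidth.
def pick_node(task_block: str, replicas: dict[str, set[str]], bw_free: dict[str, float]) -> str:
    """replicas: node -> set of block ids held locally; bw_free: node -> Mbps free."""
    local_nodes = [n for n, blocks in replicas.items() if task_block in blocks]
    candidates = local_nodes if local_nodes else list(bw_free)
    # Among the candidates, choose the one with the most spare bandwidth.
    return max(candidates, key=lambda n: bw_free[n])

replicas = {"n1": {"b1", "b2"}, "n2": {"b3"}, "n3": set()}
bw_free = {"n1": 200.0, "n2": 800.0, "n3": 950.0}
print(pick_node("b1", replicas, bw_free))  # 'n1' (data-local)
print(pick_node("b9", replicas, bw_free))  # 'n3' (most free bandwidth)

In a full BARE-like system, this decision would feed a rescheduling loop that moves queued tasks whenever the monitor reports changed bandwidth conditions.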


2021 ◽  
Vol 13 (3) ◽  
pp. 63
Author(s):  
Maghsoud Morshedi ◽  
Josef Noll

Video conferencing services based on the web real-time communication (WebRTC) protocol are growing in popularity among Internet users as multi-platform solutions that enable interactive communication from anywhere, especially during this pandemic era. Meanwhile, Internet service providers (ISPs) have deployed fiber links and customer premises equipment that operate according to recent 802.11ac/ax standards and promise users the ability to establish uninterrupted video conferencing calls with ultra-high-definition video and audio quality. However, the best-effort nature of 802.11 networks and the high variability of wireless medium conditions hinder users from experiencing uninterrupted, high-quality video conferencing. This paper presents a novel approach to estimating the perceived quality of service (PQoS) of video conferencing using only 802.11-specific network performance parameters collected from Wi-Fi access points (APs) on customer premises. This study produced datasets comprising 802.11-specific network performance parameters collected from off-the-shelf Wi-Fi APs operating according to the 802.11g/n/ac/ax standards on both the 2.4 and 5 GHz frequency bands to train machine learning algorithms. In this way, we achieved classification accuracies of 92–98% in estimating the level of PQoS of video conferencing services on various Wi-Fi networks. To efficiently troubleshoot wireless issues, we further analyzed the machine learning model to correlate features in the model with the root cause of quality degradation. Thus, ISPs can utilize the approach presented in this study to provide predictable and measurable wireless quality by implementing a non-intrusive quality monitoring approach in the form of edge computing that preserves customers’ privacy while reducing the operational costs of monitoring and data analytics.
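
A minimal sketch of the general approach, assuming a random-forest classifier and a handful of synthetic 802.11 features; the real feature set, labels, and models used in the study are not reproduced here.

# Sketch: map 802.11 AP statistics to a PQoS class with a standard classifier.
# Feature names, class labels, and the synthetic data are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Assumed features: retry rate, PHY rate (Mbps), RSSI (dBm), channel utilization.
X = np.column_stack([
    rng.uniform(0, 0.5, n),      # retry_rate
    rng.uniform(20, 866, n),     # phy_rate
    rng.uniform(-85, -35, n),    # rssi
    rng.uniform(0, 1, n),        # channel_util
])
# Toy labels: quality degrades with retries and channel utilization.
y = np.where(X[:, 0] + X[:, 3] > 0.9, "poor", "good")

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("holdout accuracy:", clf.score(X_te, y_te))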


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 621
Author(s):  
Maghsoud Morshedi ◽  
Josef Noll

Video on demand (VoD) services such as YouTube have generated considerable volumes of Internet traffic in homes and buildings in recent years. While Internet service providers deploy fiber and recent wireless technologies such as 802.11ax to support high bandwidth requirements, the best-effort nature of 802.11 networks and variable wireless medium conditions hinder users from experiencing maximum quality during video streaming. Hence, Internet service providers (ISPs) have an interest in monitoring the perceived quality of service (PQoS) on customer premises in order to avoid customer dissatisfaction and churn. Since existing approaches for estimating PQoS or quality of experience (QoE) require external measurement of generic network performance parameters, this paper presents a novel approach to estimating the PQoS of video streaming using only 802.11-specific network performance parameters collected from wireless access points. This study produced datasets comprising 802.11n/ac/ax-specific network performance parameters labelled with PQoS in the form of mean opinion scores (MOS) to train machine learning algorithms. As a result, we achieved classification accuracies of 93–99% in estimating PQoS by monitoring only 802.11 parameters on off-the-shelf Wi-Fi access points. Furthermore, the 802.11 parameters used in the machine learning model were analyzed to identify the cause of quality degradation detected on the Wi-Fi networks. Finally, ISPs can utilize the results of this study to provide predictable and measurable wireless quality by implementing non-intrusive monitoring of customers’ perceived quality. In addition, this approach reduces customers’ privacy concerns while reducing the operational cost of analytics for ISPs.
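
As a hedged illustration of the root-cause analysis step, the sketch below inspects the feature importances of a trained classifier to see which 802.11 parameter dominates the quality decision. The features and data are synthetic stand-ins, not the study's dataset.

# Sketch: rank 802.11 features by importance as a hint toward the cause of degradation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
features = ["retry_rate", "phy_rate", "rssi", "channel_util"]
X = rng.uniform(size=(400, len(features)))
y = (X[:, 0] > 0.6).astype(int)   # toy rule: retries dominate quality in this synthetic data

clf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X, y)
ranked = sorted(zip(features, clf.feature_importances_), key=lambda kv: -kv[1])
for name, importance in ranked:
    print(f"{name}: {importance:.2f}")   # the top-ranked feature suggests the likely cause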


Author(s):  
Konstantinos Poularakis ◽  
Leandros Tassiulas

A significant portion of today's network traffic is due to recurring downloads of a few popular contents. It has been observed that replicating these contents in caches installed at network edges, close to users, can drastically reduce network bandwidth usage and improve content access delay. Such caching architectures have gained increasing interest in recent years as a way of dealing with explosive traffic growth, fuelled further by the steadily falling price of storage. In this work, we provide an overview of caching with a particular emphasis on emerging network architectures that enable caching at the radio access network. In this context, novel challenges arise due to the broadcast nature of the wireless medium, which allows multiple users tuned into a multicast stream to be served simultaneously, and the mobility of users, who may be frequently handed off from one cell tower to another. Existing results indicate that caching at the wireless edge has great potential to remove bottlenecks in the wired backbone networks. Taking the schedule of multicast service and mobility profiles into consideration is crucial to extract the maximum benefit in network performance.
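
A toy experiment, not taken from the article, can illustrate why edge caching pays off under skewed popularity: with a Zipf-like request distribution, caching a small fraction of the catalog absorbs a large share of requests. Catalog size, cache size, and the Zipf exponent below are arbitrary assumptions.

# Toy experiment: hit ratio of an ideal edge cache under Zipf-like popularity.
import numpy as np

rng = np.random.default_rng(42)
catalog, cache_size, n_requests = 10_000, 100, 100_000

# Zipf-like popularity: probability proportional to 1 / rank^0.8.
ranks = np.arange(1, catalog + 1)
p = 1.0 / ranks**0.8
p /= p.sum()
requests = rng.choice(catalog, size=n_requests, p=p)

# Ideal cache that stores the cache_size most popular items (ids 0..cache_size-1).
cached = set(range(cache_size))
hits = sum(1 for r in requests if r in cached)
print(f"hit ratio with {cache_size}/{catalog} items cached: {hits / n_requests:.2%}")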


2019 ◽  
Vol 214 ◽  
pp. 08009 ◽  
Author(s):  
Matthias J. Schnepf ◽  
R. Florian von Cube ◽  
Max Fischer ◽  
Manuel Giffels ◽  
Christoph Heidecker ◽  
...  

Demand for computing resources in high energy physics (HEP) shows highly dynamic behavior, while the resources provided by the Worldwide LHC Computing Grid (WLCG) remain static. It has become evident that opportunistic resources such as High Performance Computing (HPC) centers and commercial clouds are well suited to cover peak loads. However, the utilization of these resources gives rise to new levels of complexity; e.g., resources need to be managed highly dynamically, and HEP applications require a very specific software environment that is usually not provided at opportunistic resources. Furthermore, limitations in network bandwidth must be considered, as they can cause I/O-intensive workflows to run inefficiently. The key component for dynamically running HEP applications on opportunistic resources is the utilization of modern container and virtualization technologies. Based on these technologies, the Karlsruhe Institute of Technology (KIT) has developed ROCED, a resource manager that dynamically integrates and manages a variety of opportunistic resources. In combination with ROCED, the HTCondor batch system acts as a powerful single entry point to all available computing resources, leading to a seamless and transparent integration of opportunistic resources into HEP computing. KIT is currently improving resource management and job scheduling by focusing on the I/O requirements of individual workflows, the available network bandwidth, and scalability. For these reasons, we are currently developing a new resource manager, called TARDIS. In this paper, we give an overview of the utilized technologies, the dynamic management and integration of resources, and the status of the I/O-based resource and job scheduling.
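
A highly simplified sketch of the demand-driven scaling idea behind a ROCED/TARDIS-style manager is shown below: compare queued demand with available slots and decide how many opportunistic workers to request or drain. All names, thresholds, and the interface are assumptions; this is not the actual ROCED or TARDIS code.

# Hypothetical scaling decision for opportunistic workers based on queue pressure.
from dataclasses import dataclass

@dataclass
class PoolState:
    queued_jobs: int
    running_jobs: int
    opportunistic_workers: int
    slots_per_worker: int = 8

def scaling_decision(state: PoolState) -> int:
    """Return the change in opportunistic workers (positive = boot, negative = drain)."""
    free_slots = state.opportunistic_workers * state.slots_per_worker - state.running_jobs
    if state.queued_jobs > free_slots:
        # Not enough capacity: request workers to cover the backlog (ceiling division).
        return -(-(state.queued_jobs - free_slots) // state.slots_per_worker)
    if state.queued_jobs == 0 and free_slots >= state.slots_per_worker:
        return -1   # drain one idle worker
    return 0

print(scaling_decision(PoolState(queued_jobs=50, running_jobs=10, opportunistic_workers=2)))  # 6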


VLSI Design ◽  
2007 ◽  
Vol 2007 ◽  
pp. 1-11 ◽  
Author(s):  
Srinivasan Murali ◽  
David Atienza ◽  
Luca Benini ◽  
Giovanni De Micheli

Networks on Chips (NoCs) are required to tackle the increasing delay and poor scalability of bus-based communication architectures. Many of today's NoC designs are based on single-path routing. By utilizing multiple paths for routing, congestion in the network is reduced significantly, which translates into improved network performance or reduced network bandwidth requirements and power consumption. Multiple paths can also be utilized to achieve spatial redundancy, which helps in achieving tolerance against faults or errors in the NoC. A major problem with multipath routing is that packets can reach the destination out of order, while many applications require in-order packet delivery. In this work, we present a multipath routing strategy that guarantees in-order packet delivery for NoCs. It is based on the idea of routing packets on partially nonintersecting paths and rebuilding the packet order at path-reconvergent nodes. We present a design methodology that uses the routing strategy to optimally spread traffic in the NoC in order to minimize network bandwidth needs and power consumption. We also integrate support for tolerance against transient and permanent failures of the NoC links into the methodology by utilizing spatial and temporal redundancy for transporting packets. Our experimental studies show a large reduction in network bandwidth requirements (36.86% on average) and power consumption (30.51% on average) compared to single-path systems. The area overhead of the proposed scheme is small (a modest 5% increase in network area); hence, it is practical for use in the on-chip domain.
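
To make the in-order delivery mechanism concrete, the sketch below shows a reorder buffer of the kind a path-reconvergent node could use: packets arriving out of order from partially disjoint paths are buffered and released strictly in sequence. The buffer structure is an assumption used only for illustration, not the paper's hardware design.

# Conceptual reorder buffer: release packets strictly in sequence-number order.
import heapq

class ReorderBuffer:
    def __init__(self):
        self.expected = 0          # next sequence number to release
        self.pending = []          # min-heap of (seq, payload)

    def receive(self, seq: int, payload: str) -> list[str]:
        """Accept one packet; return the payloads that can now be delivered in order."""
        heapq.heappush(self.pending, (seq, payload))
        delivered = []
        while self.pending and self.pending[0][0] == self.expected:
            _, p = heapq.heappop(self.pending)
            delivered.append(p)
            self.expected += 1
        return delivered

buf = ReorderBuffer()
for seq, data in [(1, "B"), (0, "A"), (3, "D"), (2, "C")]:  # out-of-order arrivals
    print(buf.receive(seq, data))   # [], ['A', 'B'], [], ['C', 'D']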

