congested networks
Recently Published Documents

TOTAL DOCUMENTS: 114 (five years: 22)
H-INDEX: 21 (five years: 1)

Author(s): Robert Aboolian, Oded Berman, Majid Karimi

This paper focuses on designing a facility network, taking into account that the system may be congested. The objective is to minimize the overall fixed and service capacity costs, subject to the constraint that for each demand point the disutility from travel and waiting times (measured as the weighted sum of the travel time from the demand point to the facility serving it and the average waiting time at that facility) cannot exceed a predefined maximum allowed level (measured in units of time). We develop an analytical framework for the problem that determines the optimal set of facilities and assigns each facility a service rate (service capacity). In our setting, consumers maximize their utility (minimize their disutility) when choosing which facility to patronize. The eventual choice of facilities is therefore a user-equilibrium problem: at equilibrium, consumers have no incentive to change their choices. The problem is formulated as a nonlinear mixed-integer program. We show how to linearize the nonlinear constraints and solve a mixed-integer linear program instead, which can be done efficiently.
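The service-level constraint at the heart of this model can be sketched numerically. The snippet below is a minimal illustration, assuming M/M/1 queues at each facility (a common choice in congestion-aware facility location; the paper's exact queueing model may differ), where a demand point's disutility is the weighted sum of its travel time and the mean time in system at its assigned facility:

```python
def disutility(travel_time, arrival_rate, service_rate, w_travel=1.0, w_wait=1.0):
    """Weighted sum of travel time and mean M/M/1 time in system."""
    if arrival_rate >= service_rate:
        return float("inf")  # unstable queue: expected wait is unbounded
    wait = 1.0 / (service_rate - arrival_rate)  # M/M/1 mean time in system
    return w_travel * travel_time + w_wait * wait

def feasible(travel_time, arrival_rate, service_rate, max_disutility):
    """Check the service-level constraint for one demand/facility pair."""
    return disutility(travel_time, arrival_rate, service_rate) <= max_disutility
```

Raising a facility's service rate relaxes the constraint but raises capacity cost, which is exactly the trade-off the optimization balances.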


2021 · Vol 7 · pp. e754
Author(s): Arif Husen, Muhammad Hasanain Chaudary, Farooq Ahmad, Muhammad Imtiaz Alam, Abid Sohail, ...

With the continuously rising use of information and communication technologies across diverse sectors of life, networks are challenged to meet stringent performance requirements. Increasing bandwidth is one of the most common ways to ensure that sufficient resources are available to meet performance objectives such as sustained high data rates, minimal delays, and bounded delay variation. Guaranteed throughput, minimal latency, and a low packet-loss probability can ensure quality of service over the networks. However, the traffic volumes that networks must handle are not fixed; they change with time, origin, and other factors. Traffic distributions generally exhibit peak intervals, while most of the time traffic remains at moderate levels. Dimensioning network capacity for peak-interval demand requires far more capacity than the moderate intervals need; this approach increases the cost of the network infrastructure and leaves the network underutilized during moderate intervals. Methods that raise network utilization in both peak and moderate intervals can help operators contain the cost of network infrastructure. This article proposes a novel technique to improve network utilization and quality of service by exploiting packet scheduling based on the Erlang distribution of traffic across different serving areas. The experimental results show that, compared with traditional packet-scheduling approaches, the proposed approach achieves significant improvement in congested networks during peak intervals, both in utilization and in quality of service. Extensive experiments study the effects of Erlang-based packet scheduling on packet loss, end-to-end latency, delay variance, and network utilization.
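The Erlang connection can be made concrete. As a hedged illustration (the article's scheduling technique itself is more elaborate), the classic Erlang-B recursion gives the blocking probability of a serving area offered `E` erlangs of traffic over `m` channels, and can be used to dimension capacity per serving area:

```python
def erlang_b(traffic_erlangs: float, servers: int) -> float:
    """Blocking probability via the standard Erlang-B recursion:
    B(E, 0) = 1;  B(E, m) = E*B(E, m-1) / (m + E*B(E, m-1))."""
    b = 1.0
    for m in range(1, servers + 1):
        b = traffic_erlangs * b / (m + traffic_erlangs * b)
    return b

def min_servers(traffic_erlangs: float, target_blocking: float) -> int:
    """Smallest channel count keeping blocking at or below the target."""
    m = 1
    while erlang_b(traffic_erlangs, m) > target_blocking:
        m += 1
    return m
```

For example, a serving area offering 2 erlangs needs 5 channels to keep blocking below 5%; areas with lighter moderate-interval load can be dimensioned with fewer.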


2021
Author(s): Bruce Spang, Veronica Hannan, Shravya Kunamalla, Te-Yuan Huang, Nick McKeown, ...

Entropy · 2021 · Vol 23 (8) · pp. 948
Author(s): Carlos Eduardo Maffini Santos, Carlos Alexandre Gouvea da Silva, Carlos Marcelo Pedroso

Quality of service (QoS) requirements are stricter for live streaming than for video-on-demand (VoD), since live streams are more sensitive to variations in delay, jitter, and packet loss. Dynamic Adaptive Streaming over HTTP (DASH) is the most popular technology for live streaming and VoD, and it has been massively deployed on the Internet. DASH is an over-the-top application that uses unmanaged networks to distribute content with the best possible quality. It typically relies on large reception buffers to keep playback seamless in VoD applications. However, large buffers cannot be used in live streaming services because of the delay they induce. Hence, congestion building up in a router's queue can degrade the user-perceived video quality. Active Queue Management (AQM) is an alternative for controlling congestion in a router's queue: it presses the TCP traffic sources to reduce their transmission rate when incipient congestion is detected. As a consequence, the DASH client tends to decrease the quality of the streamed video. In this article, we evaluate the performance of recent AQM strategies for real-time adaptive video streaming and propose a new AQM algorithm that uses Long Short-Term Memory (LSTM) neural networks to improve the user-perceived video quality. The LSTM forecasts the trend of the queue delay, allowing earlier packet discard and thereby avoiding network congestion. The results show that the proposed method outperforms the competing AQM algorithms, mainly in congested networks.
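The prediction-driven early-drop idea can be sketched roughly as follows. This is not the paper's actual model: a one-step trend extrapolation over an EWMA stands in for the LSTM, and the linear drop-probability ramp is an assumption of this sketch. The point is the mechanism: drop based on where the queue delay is *heading*, not where it is.

```python
class ForecastAQM:
    """Drop-decision sketch: discard an arriving packet with a probability
    that grows as the *predicted* queue delay exceeds the target, so TCP
    sources back off before the queue overflows."""

    def __init__(self, target_delay=0.02, alpha=0.3):
        self.target = target_delay  # target queue delay in seconds
        self.alpha = alpha          # EWMA smoothing factor
        self.ewma = 0.0
        self.prev = 0.0

    def predict(self, measured_delay):
        # Smooth the delay and extrapolate its trend one step ahead.
        trend = measured_delay - self.prev
        self.prev = measured_delay
        self.ewma = (1 - self.alpha) * self.ewma + self.alpha * measured_delay
        return self.ewma + trend

    def drop_probability(self, measured_delay):
        predicted = self.predict(measured_delay)
        return min(1.0, max(0.0, (predicted - self.target) / self.target))
```

With a steady low delay the drop probability stays at zero; a sharp upward trend pushes it toward one before the buffer actually fills.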


Electronics · 2021 · Vol 10 (6) · pp. 639
Author(s): Arūnas Statkus, Šarūnas Paulikas, Audrius Krukonis

This paper investigates Transmission Control Protocol (TCP) acknowledgment (ACK) optimization in low-power or embedded devices to improve their performance on high-speed links by limiting the ACK rate. Today TCP is the dominant protocol for interconnecting network devices, and it strongly influences overall network operation whenever a device's processing power is exhausted by handling the TCP stack. On high-speed, uncongested networks the bottleneck is therefore no longer the network link but the low-processing-power network devices. A new ACK optimization algorithm has been developed and implemented in the Linux kernel. The proposed TCP stack modification removes unneeded overhead from a TCP flow by reducing the number of ACKs. The results of the performed experiments show that limiting the TCP ACK rate noticeably decreases CPU utilization on low-power devices and increases TCP session throughput, without affecting other TCP QoS parameters such as session stability, flow control, connection management, and congestion control, and without compromising link security. More resources of the low-power network devices can therefore be allocated to high-speed data transfer.
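A minimal sketch of the ACK-thinning idea (illustrative only; the paper's Linux-kernel algorithm is more involved): acknowledge only every k-th in-order data segment and let TCP's cumulative ACK cover the skipped ones, cutting the per-ACK processing load on a low-power receiver.

```python
class AckLimiter:
    """Emit an ACK for at most every k-th in-order data segment; the
    cumulative ACK acknowledges all segments received so far, so the
    skipped segments are still covered."""

    def __init__(self, ack_every=4):
        self.ack_every = ack_every
        self.pending = 0     # in-order segments since the last ACK
        self.sent_acks = 0

    def on_segment(self) -> bool:
        """Return True when an ACK should be emitted for this segment."""
        self.pending += 1
        if self.pending >= self.ack_every:
            self.pending = 0
            self.sent_acks += 1
            return True
        return False
```

With `ack_every=4`, a 12-segment burst triggers only 3 ACKs instead of 12 (or 6 with standard delayed ACKs), which is where the CPU savings come from; a real implementation must also cap the ACK delay so the sender's retransmission and congestion-control timers are not disturbed.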


2020 · Vol 21 (4) · pp. 245-254
Author(s): Guido Cantelmo, Francesco Viti

Abstract: The origin-destination (OD) demand estimation problem is a classical problem in transport planning and management. Traditionally, it has been solved using traffic counts, speeds, or travel times extracted from location-based sensor data. With the advent of new sensing technologies on vehicles (GPS) and nomadic devices (mobile phones and smartphones), new opportunities have emerged to improve estimation accuracy and reliability and, more importantly, to better capture the dynamics of daily mobility patterns. In this paper we integrate these new data sources into a comprehensive framework that estimates origin-destination flows in two steps: the first step estimates the total generated demand for each traffic zone, while the second step adjusts the spatial and temporal distribution over the different OD pairs. We show how mobile data can be used to obtain OD matrices that reflect the aggregated movements of individuals in complex and large-scale instances, while speed information from floating car data can be used in the second step. We showcase the added value of big data on a realistic network comprising Luxembourg's capital city and its surroundings. We simulate traffic with a commercial simulation package, PTV Visum, and leverage real mobile phone data from the largest telecom operator in the country and real speed data from a floating car data service provider. Results show that OD estimation improves both in solution reliability and in convergence speed.
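The balancing step, adjusting an OD matrix so it matches estimated zone totals, can be illustrated with iterative proportional fitting (a standard method for this kind of adjustment; the paper's actual procedure may differ). Rows are origins, columns are destinations, and a seed matrix encodes the prior spatial distribution:

```python
def ipf(seed, row_totals, col_totals, iters=50):
    """Iterative proportional fitting: rescale a seed OD matrix until its
    row sums (trips generated per origin zone) and column sums (trips
    attracted per destination zone) match the estimated totals."""
    m = [row[:] for row in seed]
    for _ in range(iters):
        for i, target in enumerate(row_totals):          # balance origins
            s = sum(m[i])
            if s > 0:
                m[i] = [v * target / s for v in m[i]]
        for j, target in enumerate(col_totals):          # balance destinations
            s = sum(row[j] for row in m)
            if s > 0:
                for row in m:
                    row[j] *= target / s
    return m
```

Given consistent totals (both summing to the same grand total), the procedure converges to a matrix that honors both margins while staying proportional to the seed.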


2020 · Vol 2 (3)
Author(s): Timoteo Carletti, Malbor Asllani, Duccio Fanelli, Vito Latora

Author(s): Mada’ Abdel Jawad, Saeed Salah, Raid Zaghal

Mobile Ad-Hoc Networks (MANETs) are decentralized networks: the mobile nodes route and forward data based on their own routing information, without dedicated routing devices. In this type of network, nodes move in an unstructured environment where some nodes remain fixed, others move at a constant velocity, and others move at diverse velocities; special protocols are therefore needed to keep track of network changes and velocity changes among the nodes. The Destination-Sequenced Distance-Vector (DSDV) routing protocol is one of the most popular proactive routing protocols for wireless networks. It performs well in general, but with high-speed nodes and congested networks its performance degrades quickly.

In this paper we propose an extension to DSDV (we call it Diverse-Velocity DSDV) to address this problem. The main idea is to modify the protocol to take node speed into account when determining update intervals and the duration of the settling time. To evaluate the performance of the new protocol, we carried out a number of simulation scenarios using the Network Simulator tool (NS-3) and measured relevant parameters such as packet delivery ratio, throughput, end-to-end delay, and routing overhead. We compared our results with the original DSDV and some of its newer variants. The new protocol demonstrated a noticeable performance improvement in all scenarios and outperformed the others on every measured metric except average delay, where its performance was modest.
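The speed-aware update idea can be sketched as follows. The scaling rule below is hypothetical, chosen only to illustrate the mechanism (the actual Diverse-Velocity DSDV rules are not reproduced here): fast-moving nodes advertise routes more often, because the topology around them changes quickly, while static nodes fall back to the base DSDV interval.

```python
def update_interval(node_speed, base_interval=15.0, min_interval=1.0,
                    ref_speed=20.0):
    """Hypothetical velocity-aware periodic-update interval (seconds).

    node_speed   -- node velocity in m/s
    base_interval -- DSDV-style interval used by a static node
    ref_speed    -- speed at which the interval is halved (assumed constant)
    """
    if node_speed <= 0:
        return base_interval  # static node: plain DSDV behaviour
    scaled = base_interval * ref_speed / (ref_speed + node_speed)
    return max(min_interval, scaled)  # never flood faster than min_interval
```

The same shrinking rule could be applied to the settling time, so that routes learned from fast nodes are both refreshed and advertised sooner.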


2020 · Vol 12 (2) · pp. 28
Author(s): Beniamino Di Martino, Salvatore Venticinque, Antonio Esposito, Salvatore D’Angelo

The Internet of Things (IoT) is becoming a widespread reality, as interconnected smart devices and sensors have overtaken the IT market and entered every aspect of human life. This development, while long foreseen by IT experts, puts additional stress on already congested networks and may require further investment in computational power when centralized, Cloud-based solutions are considered. A common trend is therefore to rely on local resources, provided by the smart devices themselves or by aggregators, to handle part of the required computation: this is the basic idea behind Fog Computing, which is increasingly adopted as a distributed computation solution. In this paper a methodology, initially developed within the TOREADOR European project for distributing Big Data computations over Cloud platforms, is described and applied to an algorithm that predicts energy consumption from home-sensor data, previously employed within the CoSSMic European Project. The objective is to demonstrate that, by applying such a methodology, it is possible to improve computation performance and reduce communication with centralized resources.

