network utilization
Recently Published Documents

TOTAL DOCUMENTS: 198 (five years: 56)
H-INDEX: 15 (five years: 2)

Author(s):  
Aymen Hasan Alawadi ◽  
Sándor Molnár

Abstract Data center networks (DCNs) act as critical infrastructure for emerging technologies. In general, a DCN is a multi-rooted tree offering multiple equal-length shortest paths from end to end. The DCN fabric must be maintained and monitored to guarantee high availability and better QoS. Traditional traffic engineering (TE) methods frequently reroute large flows onto the shortest and least-congested paths to maintain high service availability. This procedure results in weak link utilization and frequent packet reordering. Moreover, DCN link failures are a common problem. State-of-the-art approaches address such challenges by modifying the network components (switches or hosts) to discover and avoid broken connections. This study proposes Oddlab (Odds labels), a novel, deployable TE method to guarantee the QoS of multi-rooted data center (DC) traffic in symmetric and asymmetric modes. Oddlab builds a heuristic model for efficient flow scheduling and faulty-link detection exclusively from statistics gathered in the DCN data plane, such as residual bandwidth and the number of installed elephant flows. Moreover, the proposed method is implemented in an SDN-based DCN without altering the network components. Our findings indicate that Oddlab minimizes flow completion time, maximizes bisection bandwidth, improves network utilization, and recognizes faulty links with sufficient accuracy to improve DC productivity.
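The abstract does not reproduce Oddlab's heuristic in detail; as a hedged sketch of the general idea of scheduling elephant flows purely from data-plane statistics such as residual bandwidth and installed elephant flows, a path-scoring routine might look as follows. The scoring formula, threshold, and names are illustrative assumptions, not the published algorithm.

```python
# Illustrative sketch only: a path-scoring heuristic in the spirit of Oddlab,
# using nothing but data-plane statistics (residual bandwidth, installed
# elephant flows). The weights and threshold are assumptions, not the
# published algorithm.
from dataclasses import dataclass

@dataclass
class PathStats:
    path_id: int
    residual_bw_mbps: float   # spare capacity reported by the data plane
    elephant_flows: int       # elephant flows already installed on this path

def score(p: PathStats) -> float:
    # Prefer paths with spare bandwidth; penalise paths already loaded
    # with elephant flows (weighting is illustrative).
    return p.residual_bw_mbps / (1.0 + p.elephant_flows)

def pick_path(paths: list[PathStats], min_residual_mbps: float = 50.0) -> PathStats:
    # Paths whose residual bandwidth stays persistently below the threshold
    # could additionally be flagged as suspect (possible faulty link).
    healthy = [p for p in paths if p.residual_bw_mbps >= min_residual_mbps] or paths
    return max(healthy, key=score)

if __name__ == "__main__":
    candidates = [
        PathStats(0, residual_bw_mbps=400.0, elephant_flows=3),
        PathStats(1, residual_bw_mbps=250.0, elephant_flows=0),
        PathStats(2, residual_bw_mbps=30.0, elephant_flows=1),   # likely congested or faulty
    ]
    best = pick_path(candidates)
    print(f"schedule elephant flow on path {best.path_id}")
```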


2021 ◽  
Author(s):  
Tariq Ahanger

Abstract The technological revolution brought by the Internet of Things (IoT) has mostly relied on cloud computing. However, to satisfy the demands of time-sensitive services in the medical industry, fog computing, a computational platform that extends cloud resources to the network's edge, has proven to be a useful tool. This paper examines the role of the fog paradigm in healthcare decision-making, focusing on its primary advantages in terms of latency, network utilization, and power consumption. A fog-computing-based health assessment framework is developed, and its performance is evaluated against relevant performance parameters. The results show that the presented strategy can reduce congestion of the communication network by analyzing information at the local node. Moreover, greater security of health information can be maintained at the local fog node, and stronger protection of data from unauthorized access can be achieved. Fog computing offers greater insight into patients' health conditions with enhanced accuracy, precision, reliability, and stability.
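The abstract does not specify the decision logic at the fog node; the sketch below illustrates, under assumed thresholds and field names, how analysing a reading locally and forwarding only anomalous readings to the cloud reduces upstream traffic and latency.

```python
# Minimal sketch (not the paper's implementation): a fog node analyses a
# vital-signs reading locally and only forwards anomalous readings to the
# cloud, which is how local analysis reduces network congestion and latency.
# Thresholds and field names are illustrative assumptions.

def assess_locally(reading: dict) -> str:
    hr = reading["heart_rate_bpm"]
    spo2 = reading["spo2_percent"]
    if hr > 120 or hr < 45 or spo2 < 92:
        return "escalate"      # send to cloud for deeper analysis / alerting
    return "normal"            # aggregate locally, nothing sent upstream

def handle(reading: dict, send_to_cloud) -> None:
    if assess_locally(reading) == "escalate":
        send_to_cloud(reading)          # only anomalies traverse the network

if __name__ == "__main__":
    sent = []
    handle({"heart_rate_bpm": 72, "spo2_percent": 98}, sent.append)
    handle({"heart_rate_bpm": 140, "spo2_percent": 90}, sent.append)
    print(f"readings forwarded to cloud: {len(sent)}")   # -> 1
```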


2021 ◽  
Author(s):  
Kyle MacMillan ◽  
Tarun Mangla ◽  
James Saxon ◽  
Nick Feamster

2021 ◽  
Vol 7 ◽  
pp. e754
Author(s):  
Arif Husen ◽  
Muhammad Hasanain Chaudary ◽  
Farooq Ahmad ◽  
Muhammad Imtiaz Alam ◽  
Abid Sohail ◽  
...  

With the continuously rising use of information and communication technologies across diverse sectors of life, networks are challenged to meet stringent performance requirements. Increasing bandwidth is one of the most common ways to ensure that sufficient resources are available to meet performance objectives such as sustained high data rates, minimal delays, and restricted delay variation. Guaranteed throughput, minimal latency, and the lowest probability of packet loss can ensure quality of service over the networks. However, the traffic volume a network must handle is not fixed; it changes with time, origin, and other factors. Traffic distributions generally show peak intervals, while most of the time traffic remains at moderate levels. Capacity dimensioned for peak-interval demand is considerably higher than what is required during moderate intervals, which increases the cost of the network infrastructure and leaves the network underutilized during moderate intervals. Methods that improve network utilization in both peak and moderate intervals can therefore help operators contain the cost of network infrastructure. This article proposes a novel technique to improve network utilization and quality of service by scheduling packets according to the Erlang distribution of traffic in different serving areas. The experimental results show that, during peak intervals in congested networks, the proposed approach achieves significant improvement in both utilization and quality of service compared with traditional packet-scheduling approaches. Extensive experiments study the effects of Erlang-based packet scheduling in terms of packet loss, end-to-end latency, delay variance, and network utilization.
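The article's scheduler is not given here; one hedged interpretation is to weight each serving area's share of scheduling opportunities by its offered traffic intensity (in Erlangs) and the resulting Erlang-B blocking probability, so that peak areas get proportionally more service. The Erlang-B recursion below is standard; deriving scheduling weights from it is an illustrative assumption.

```python
# Sketch only: weighting packet-scheduling shares by per-area Erlang traffic
# intensity. The Erlang-B recursion is standard; using it to derive scheduling
# weights is an illustrative assumption, not the article's exact algorithm.

def erlang_b(servers: int, offered_erlangs: float) -> float:
    """Blocking probability for `servers` channels and offered load in Erlangs."""
    b = 1.0
    for m in range(1, servers + 1):
        b = (offered_erlangs * b) / (m + offered_erlangs * b)
    return b

def scheduling_weights(areas: dict[str, float], servers: int = 8) -> dict[str, float]:
    # Give each serving area a share proportional to the traffic it would
    # otherwise lose (offered load x blocking probability).
    demand = {a: load * erlang_b(servers, load) for a, load in areas.items()}
    total = sum(demand.values()) or 1.0
    return {a: d / total for a, d in demand.items()}

if __name__ == "__main__":
    offered = {"area_peak": 12.0, "area_moderate": 4.0, "area_quiet": 1.0}  # Erlangs
    for area, w in scheduling_weights(offered).items():
        print(f"{area}: {w:.2%} of scheduling slots")
```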


2021 ◽  
Vol 11 (19) ◽  
pp. 9163
Author(s):  
Mateusz Żotkiewicz ◽  
Wiktor Szałyga ◽  
Jaroslaw Domaszewicz ◽  
Andrzej Bąk ◽  
Zbigniew Kopertowski ◽  
...  

The new generation of programmable networks allows mechanisms to be deployed for efficient control of dynamic bandwidth allocation and ensures Quality of Service (QoS), in terms of Key Performance Indicators (KPIs), for delay- or loss-sensitive Internet of Things (IoT) services. To achieve flexible, dynamic, and automated network resource management in Software-Defined Networking (SDN), Artificial Intelligence (AI) algorithms can provide an effective solution. In this paper, we propose a solution for network resource allocation in which an AI algorithm controls intent-based routing in SDN. The paper focuses on the problem of optimally switching intents between two designated paths using a Deep Q-Learning approach based on an artificial neural network; the proposed algorithm is the main novelty of this paper. The developed Networked Application Emulation System (NAPES) allows the AI solution to be tested with different traffic patterns to evaluate its performance. The AI algorithm was trained to maximize total network throughput and effective network utilization. The results confirm the validity of the applied AI approach for improving network performance in next-generation networks, and the usefulness of the NAPES traffic generator for economically and technically efficient evaluation of IoT networking systems.
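The paper's Deep Q-Network is not reproduced here; the toy sketch below keeps only the core idea of learning which of two designated paths to place an intent on from observed utilisation, using tabular Q-learning instead of a neural-network approximator for brevity. The reward model and constants are illustrative assumptions.

```python
# Toy sketch of the intent-switching idea: an agent learns which of two
# designated paths to place an intent on, from observed path utilisation.
# The paper trains a Deep Q-Network; this sketch uses tabular Q-learning over
# coarsely discretised utilisation, and the reward model (throughput roughly
# proportional to spare capacity) is an illustrative assumption.
import random

BINS = 5                      # utilisation discretised into 5 levels per path
ACTIONS = (0, 1)              # which of the two designated paths to use
q = {}                        # Q-table: (state, action) -> value

def discretise(util_a: float, util_b: float) -> tuple[int, int]:
    return (min(int(util_a * BINS), BINS - 1), min(int(util_b * BINS), BINS - 1))

def choose(state, eps=0.1):
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q.get((state, a), 0.0))

def step(utils):
    """One training step: place the intent, observe reward, update Q."""
    state = discretise(*utils)
    action = choose(state)
    reward = 1.0 - utils[action]                     # spare capacity ~ achieved throughput
    utils[action] = min(1.0, utils[action] + 0.2)    # the intent loads the chosen path
    next_state = discretise(*utils)
    best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + 0.1 * (reward + 0.9 * best_next - old)   # Q-learning update

if __name__ == "__main__":
    for episode in range(2000):
        utils = [random.random(), random.random()]   # current utilisation of path A / B
        for _ in range(5):
            step(utils)
    # After training, the agent tends to route new intents onto the less utilised path.
    print(choose(discretise(0.9, 0.2), eps=0.0))     # -> usually 1
```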


2021 ◽  
Author(s):  
Min Guk I. Chi

The premise that Active Queue Management (AQM) is effective, both quantitatively and qualitatively, in residential and enterprise networks has been established repeatedly in academic journals and private studies addressing bufferbloat, i.e., excessive latency caused by heavy network utilization. However, awareness and understanding of bufferbloat mitigation are largely absent in the Philippine Internet of Things space, except among enthusiasts willing to take the time to examine the concept and its benefits. Hence, this paper examines possible reasons why AQM is not widely adopted by Philippine consumers and industries to increase productivity during the COVID-19 pandemic: a lack of basic understanding of bufferbloat and its implications, the complexity of the concept, the high level of know-how required to implement it, and the lack of perceived benefit among the country's existing telecommunications players.


2021 ◽  
Vol 2021 (1) ◽  
pp. 13353
Author(s):  
Raina A. Brands ◽  
Tiziana Casciaro ◽  
Jasmien Khattab ◽  
Eric Quintane

Author(s):  
Junfan Yu ◽  
Saskia De Klerk ◽  
Michael Hess

Abstract This research focuses on how entrepreneurs utilize cronyism to acquire resources. A case-study method allowed us to explore three firms in the private property development industry in China. These firms cultivated cronyism in distinct ways and achieved distinctly different outcomes. Our findings highlight that Chinese entrepreneurs employ cronyism both in start-up ventures and in later-stage enterprises. The underlying rationales for using cronyism include both common and firm-specific motivations, and the same pattern of similarities and differences applies to the impact of cronyism. We also identify two contingent working mechanisms for cronyism: entrepreneurial characteristics and a staged model of cronyism. As the firm grows, cronyism remains important, but firms with more community involvement outperform others. This research contributes to the theory of strategic network utilization for resource acquisition across entrepreneurial development stages. We investigate how entrepreneurial strategies can help firms adapt to the "rules of the game" while utilizing resources within contextual constraints.


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
Yao Xu ◽  
Renren Wang ◽  
Hongqian Lu ◽  
Xingxing Song ◽  
Yahan Deng ◽  
...  

This paper discusses the adaptive event-triggered synchronization problem for a class of neural networks (NNs) with time-varying delay and actuator saturation. First, in view of the limited communication-channel capacity of the network system and the unnecessary data transmission in networked control systems (NCSs), an adaptive event-triggered scheme (AETS) is introduced to reduce the network load and improve network utilization. Second, under the AETS, the synchronization error model of the delayed master-slave synchronization system is constructed with actuator saturation. Third, based on a Lyapunov–Krasovskii functional (LKF), a new sufficient criterion guaranteeing asymptotic stability of the synchronization error system is derived. Moreover, by solving the stability criterion, expressed as a set of linear matrix inequalities (LMIs), the necessary parameters of the system are obtained. Finally, two examples are presented to demonstrate the feasibility of the method.
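The paper's exact triggering condition and adaptation law are not given in the abstract; the sketch below illustrates a generic adaptive event-triggered transmitter that releases a sample only when the deviation from the last transmitted state exceeds an adaptive, state-dependent threshold, which is how such schemes reduce network load. The constants and the adaptation rule are assumptions.

```python
# Generic illustration (not the paper's exact scheme): transmit a new sample
# only when the error relative to the last transmitted state exceeds an
# adaptive, state-dependent threshold. Constants and the adaptation law are
# illustrative assumptions.
import numpy as np

class AdaptiveEventTrigger:
    def __init__(self, sigma=0.2, sigma_min=0.05, mu=0.01):
        self.sigma, self.sigma_min, self.mu = sigma, sigma_min, mu
        self.last_sent = None

    def should_transmit(self, x: np.ndarray) -> bool:
        if self.last_sent is None:
            self.last_sent = x.copy()
            return True
        err = x - self.last_sent
        triggered = err @ err > self.sigma * (x @ x)    # event-triggering condition
        # Adaptive law: relax the threshold while quiet, tighten it after an event.
        self.sigma = max(self.sigma_min,
                         self.sigma - self.mu if triggered else self.sigma + self.mu)
        if triggered:
            self.last_sent = x.copy()
        return triggered

if __name__ == "__main__":
    trig = AdaptiveEventTrigger()
    rng = np.random.default_rng(0)
    x, sent = np.ones(3), 0
    for _ in range(500):
        x = 0.98 * x + 0.01 * rng.standard_normal(3)    # slowly evolving state
        sent += trig.should_transmit(x)
    print(f"transmissions: {sent} / 500 samples")        # far fewer than 500
```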


2021 ◽  
Vol 8 (3) ◽  
pp. 525
Author(s):  
Hawwin Purnama Akbar ◽  
Achmad Basuki ◽  
Eko Setiawan

Abstract A Distributed Storage System (DSS) has a number of storage servers connected to multiple switches to increase network utilization. A DSS must keep the load on the storage servers and the data traffic on all connected links balanced; otherwise, a network bottleneck can arise that reduces network utilization. Combining server load balancing with link load balancing is an appropriate solution for balancing the storage-server load and the data traffic. This study proposes a combination of the least-connection algorithm as the server load balancing method and the global first fit algorithm as the link load balancing method. Global first fit is one of Hedera's load balancing algorithms and aims to balance large flows (at least 10% of link bandwidth), thereby avoiding network bottlenecks. Least connection is a server load balancing algorithm that uses the total number of connections per server to determine server priority. The evaluation of the combined method shows an increase in average throughput of 77.9% and an increase in average bandwidth usage of 65.2% compared with the Equal Cost Multi Path (ECMP) and Round Robin (RR) methods. CPU and memory usage on the servers also decreased, by 34.29% and 9.8%, respectively, compared with ECMP and RR. From the evaluation results, the combination of server and link load balancing succeeded in increasing network utilization while also reducing server load.
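The authors' SDN implementation is not shown in the abstract; the sketch below combines the two building blocks they name, least-connection server selection and a global-first-fit style placement of elephant flows (at least 10% of link bandwidth) onto the first path with enough headroom, under assumed data structures and thresholds.

```python
# Illustrative sketch of the two building blocks combined in the paper:
# least-connection server selection and a global-first-fit style placement of
# elephant flows (>= 10% of link bandwidth) onto the first path with spare
# capacity. Data structures and thresholds are assumptions, not the authors'
# SDN implementation.

LINK_BW_MBPS = 1000.0
ELEPHANT_THRESHOLD = 0.10 * LINK_BW_MBPS       # "large" flow: >= 10% of bandwidth

def least_connection(servers: dict[str, int]) -> str:
    """Pick the storage server currently holding the fewest connections."""
    return min(servers, key=servers.get)

def global_first_fit(paths: list[dict], demand_mbps: float):
    """Place an elephant flow on the first path that can still fit it."""
    for path in paths:
        if path["reserved_mbps"] + demand_mbps <= LINK_BW_MBPS:
            path["reserved_mbps"] += demand_mbps
            return path["id"]
    return None                                  # no path fits; fall back to hashing

def dispatch(flow_mbps: float, servers: dict[str, int], paths: list[dict]):
    server = least_connection(servers)
    servers[server] += 1
    path = global_first_fit(paths, flow_mbps) if flow_mbps >= ELEPHANT_THRESHOLD else "ecmp"
    return server, path

if __name__ == "__main__":
    servers = {"s1": 12, "s2": 7, "s3": 9}
    paths = [{"id": "p0", "reserved_mbps": 950.0}, {"id": "p1", "reserved_mbps": 300.0}]
    print(dispatch(flow_mbps=200.0, servers=servers, paths=paths))   # ('s2', 'p1')
```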

