cpu utilization
Recently Published Documents


TOTAL DOCUMENTS

107
(FIVE YEARS 34)

H-INDEX

8
(FIVE YEARS 2)

Author(s):  
Prerana Shenoy S. P. ◽  
Sai Vishnu Soudri ◽  
Ramakanth Kumar P. ◽  
Sahana Bailuguttu

Observability is the ability to monitor the state of a system by tracking standard metrics such as central processing unit (CPU) utilization, memory usage, and network bandwidth. The better we understand the state of the system, the better we can improve its performance by recognizing unwanted behavior and improving its stability and reliability. To achieve this, it is essential to build an automated monitoring system that is easy to use and efficient. To that end, we have built a Kubernetes operator that automates the deployment and monitoring of applications and reports unwanted behavior in real time. It also enables visualization of the metrics generated by each application and allows these visualization dashboards to be standardized for each application type. The operator thus improves productivity and saves considerable time and resources in deploying monitored applications, upgrading Kubernetes resources for each deployed application, and migrating applications.
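The alerting side of such an operator reduces to comparing scraped metrics against thresholds. A minimal sketch of that logic, with illustrative metric names and threshold values that are assumptions of this example, not the authors' implementation:

```python
# Hypothetical thresholds, expressed as fractions of the pod's allocation.
CPU_THRESHOLD = 0.80
MEM_THRESHOLD = 0.90

def check_pod(metrics: dict) -> list:
    """Return alert messages for any metric exceeding its threshold."""
    alerts = []
    if metrics["cpu"] > CPU_THRESHOLD:
        alerts.append(f"{metrics['pod']}: CPU at {metrics['cpu']:.0%}")
    if metrics["memory"] > MEM_THRESHOLD:
        alerts.append(f"{metrics['pod']}: memory at {metrics['memory']:.0%}")
    return alerts

print(check_pod({"pod": "web-1", "cpu": 0.91, "memory": 0.42}))
# → ['web-1: CPU at 91%']
```

In a real operator this check would run inside the reconcile loop against metrics pulled from the cluster's metrics API, with alerts forwarded to a notification channel.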


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Chunmao Jiang ◽  
Peng Wu

The container scaling mechanism, or elastic scaling, allows a cluster to be adjusted dynamically based on the workload. As a typical autoscaling component in cloud computing, the Horizontal Pod Autoscaler (HPA) automatically adjusts the number of pods in a replication controller, deployment, replica set, or stateful set based on observed CPU utilization. Current HPA technology raises several concerns. First, it can easily lead to untimely and insufficient scaling under burst traffic. Second, HPA's antijitter mechanism may cause a one-time scale-out to add too few pods, leaving subsequent service requests unsatisfied. Third, the fixed data sampling time means the data-reporting interval is the same under average and high loads, again leading to untimely and insufficient scaling at high load. In this study, we propose a Double Threshold Horizontal Pod Autoscaler (DHPA) algorithm, which divides scaling events at a fine granularity into three categories: scale-out, no scale, and scale-in. For scaling strength, we then employ two thresholds that further subdivide each case into no scaling (antijitter), regular scaling, and fast scaling. The DHPA algorithm determines the scaling strategy from the average growth rate of CPU utilization, and different scheduling policies are adopted accordingly. We compare DHPA with the HPA algorithm under low, medium, and high loads. The experiments show that DHPA has better antijitter and load-adaptation characteristics when adding and removing containers, while ensuring service and cluster security.
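The double-threshold decision described above can be sketched as follows. The threshold values and function name are illustrative assumptions for the example, not the values used in the paper:

```python
# Growth-rate thresholds (fraction of CPU utilization per interval),
# chosen arbitrarily for illustration.
LOW, HIGH = 0.05, 0.20

def dhpa_decision(rates: list) -> str:
    """Classify a scaling event from recent CPU-utilization growth rates."""
    avg = sum(rates) / len(rates)
    if abs(avg) < LOW:
        return "no-scale"                 # antijitter band
    direction = "scale-out" if avg > 0 else "scale-in"
    strength = "fast" if abs(avg) >= HIGH else "regular"
    return f"{direction} ({strength})"

print(dhpa_decision([0.25, 0.30, 0.28]))  # bursty load → scale-out (fast)
```

The two thresholds give the three strength bands per direction that the paper describes: a dead zone to suppress jitter, a regular band, and a fast band for burst traffic.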


2021 ◽  
Vol 10 (5) ◽  
pp. 2742-2750
Author(s):  
Hoger K. Omar ◽  
Kamal H. Jihad ◽  
Shalau F. Hussein

CPU scheduling algorithms play a significant role in multiprogramming operating systems. When CPU scheduling is effective, a high rate of computation can be performed correctly and the system remains stable. CPU scheduling algorithms are also the main operating-system service for achieving maximum CPU utilization. This paper compares the characteristics of CPU scheduling algorithms to determine which is best for achieving higher CPU utilization. Ten scheduling algorithms are compared across different parameters, such as performance, algorithmic complexity, known problems, average waiting time, advantages and disadvantages, and allocation strategy. The main purpose of the article is to analyze the CPU scheduler in a way that suits the scheduling goals, and to show, through its full set of properties, which algorithm type is most suitable for a particular situation.
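Average waiting time, one of the comparison parameters above, is easy to compute for the non-preemptive algorithms. A small sketch contrasting FCFS and SJF on a classic textbook workload (all jobs assumed to arrive at t=0):

```python
def avg_waiting_time(burst_times: list) -> float:
    """Average waiting time when jobs run to completion in the given order."""
    wait, elapsed = 0, 0
    for burst in burst_times:
        wait += elapsed      # this job waits for everything before it
        elapsed += burst
    return wait / len(burst_times)

bursts = [24, 3, 3]                      # CPU bursts in time units
fcfs = avg_waiting_time(bursts)          # run in arrival order
sjf = avg_waiting_time(sorted(bursts))   # shortest job first

print(fcfs, sjf)  # → 17.0 3.0
```

The gap (17.0 vs 3.0 time units) illustrates why SJF is provably optimal for average waiting time, while FCFS suffers from the convoy effect when a long job arrives first.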


2021 ◽  
Author(s):  
Harry Larkins ◽  
Nicholas Caldwell

2021 ◽  
Vol 13 (17) ◽  
pp. 9587
Author(s):  
Himanshi Babbar ◽  
Shalli Rani ◽  
Divya Gupta ◽  
Hani Moaiteq Aljahdali ◽  
Aman Singh ◽  
...  

As the Internet of Things (IoT) in smart cities becomes increasingly popular among consumers and the business community, network traffic management is a crucial issue for optimizing IoT performance in smart cities. Deploying multiple controllers at large scale in Software Defined Networks (SDN) integrated with the IoT is an emerging paradigm that enhances the scalability, security, privacy, and flexibility of the centralized control plane for smart city applications. However, the distributed multiple-controller deployment model in SDN-IoT cannot keep up with dramatic changes in network traffic, which results in load disparity between controllers and leads to higher packet drop rates, high response times, and other forms of network performance deterioration. This paper lays the foundation for a multiple distributed controller load balancing (MDCLB) algorithm for large-scale SDN-IoT in smart cities. A smart city is a residential area that uses information and communication technology (ICT) and the IoT to improve its citizens' quality of life. We then propose an algorithm that balances the load across multiple controllers based on CPU utilization in the centralized control plane. The experimental analysis is performed on the Mininet emulator and validated on the Ryu controller: dynamic load balancing based on Nash bargaining, an efficient switch-migration load balancing algorithm, an efficiency-aware load balancing algorithm, and the proposed MDCLB algorithm are executed and compared with respect to CPU utilization, showing that CPU utilization with load balancing is 20% better than without load balancing.
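The core idea of CPU-based controller load balancing, migrating a switch away from an overloaded controller toward the least-loaded one, can be sketched briefly. Controller names, switch names, and the overload threshold below are illustrative assumptions, not the paper's parameters:

```python
OVERLOAD = 0.80  # hypothetical CPU-utilization threshold

def rebalance(load: dict, switches: dict):
    """Return a (switch, target controller) migration, or None if balanced.

    `load` maps controller -> CPU utilization; `switches` maps
    switch -> its current controller.
    """
    src = max(load, key=load.get)         # most loaded controller
    if load[src] <= OVERLOAD:
        return None                       # no controller is overloaded
    dst = min(load, key=load.get)         # least loaded controller
    for sw, ctrl in switches.items():     # pick any switch on the hot one
        if ctrl == src:
            return (sw, dst)
    return None

print(rebalance({"c1": 0.92, "c2": 0.35}, {"s1": "c1", "s2": "c2"}))
# → ('s1', 'c2')
```

A production scheme would also weigh migration cost and pick the switch whose traffic best evens out the load, rather than the first one found.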


2021 ◽  
pp. 1063293X2110326
Author(s):  
K Valarmathi ◽  
S Kanaga Suba Raja

Predicting future resource usage in a cloud datacenter is a challenging task due to dynamic and business-critical workloads. Accurate prediction of cloud resource utilization from historical observations facilitates effectively aligning tasks with resources, estimating the capacity of a cloud server, applying intensive auto-scaling, and controlling resource usage, since imprecise prediction leads to either under- or over-provisioning of cloud resources. This paper focuses on solving this problem in a more proactive way. Most existing prediction models are based on a single workload pattern and are not suitable for handling peculiar workloads. We address this problem with a contemporary model that dynamically analyzes CPU utilization so as to precisely estimate datacenter CPU utilization. The proposed design uses an ensemble of Random Forest and Long Short-Term Memory based deep architectural models for resource estimation; it preprocesses and trains on data from historical observations. The approach is evaluated on a real cloud dataset. The empirical results show that the proposed design outperforms previous approaches, with 30%–60% higher accuracy in resource utilization prediction.
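The RF-LSTM ensemble itself is beyond a short sketch, but the preprocessing step both models share, turning a historical CPU-utilization series into supervised (window, next-value) pairs, can be shown with the standard library alone. The window size and sample series here are illustrative assumptions:

```python
def make_windows(series: list, size: int):
    """Slice a time series into (input window, next value) training pairs."""
    return [(series[i:i + size], series[i + size])
            for i in range(len(series) - size)]

# Hypothetical CPU-utilization trace sampled at fixed intervals.
cpu = [0.31, 0.35, 0.40, 0.55, 0.62, 0.58]
pairs = make_windows(cpu, size=3)
print(pairs[0])  # → ([0.31, 0.35, 0.4], 0.55)
```

Each pair would feed both ensemble members: the window as features for the Random Forest, and as a sequence for the LSTM, with their predictions combined for the final estimate.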


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Ittai B. Muller ◽  
Stijn Meijers ◽  
Peter Kampstra ◽  
Steven van Dijk ◽  
Michel van Elswijk ◽  
...  

Abstract Background: Computational tools analyzing RNA-sequencing data have boosted alternative splicing research by identifying and assessing differentially spliced genes. However, common alternative splicing analysis tools differ substantially in their statistical analyses and general performance. This report compares the computational performance (CPU utilization and RAM usage) of three event-level splicing tools: rMATS, MISO, and SUPPA2. Additionally, concordance between tool outputs was investigated. Results: Log-linear relations were found between job times and dataset size in all splicing tools and all virtual machine (VM) configurations. MISO had the highest job times for all analyses, irrespective of VM size, and MISO analyses also reached maximum CPU utilization on all VM sizes. rMATS and SUPPA2 load averages were relatively low in both size and replicate comparisons, not nearing maximum CPU utilization even in the VM simulating the lowest computational power (D2 VM). RAM usage in rMATS and SUPPA2 did not exceed 20% of maximum RAM in both size and replicate comparisons, while MISO reached maximum RAM usage in D2 VM analyses for input size. Correlation coefficients of differential splicing analyses showed high correlation (β > 80%) between different tool outputs, with the exception of comparisons of retained intron (RI) events between rMATS/MISO and rMATS/SUPPA2 (β < 60%). Conclusions: Prior to RNA-seq analyses, users should consider job time, number of replicates, and the splice event type of interest to determine the optimal alternative splicing tool. In general, rMATS is superior to both MISO and SUPPA2 in computational performance. Analysis outputs show high concordance between tools, with the exception of RI events.
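Benchmarking CPU time and peak RAM per job, as done for these tools, can be sketched on a Unix system with the standard library; here a stand-in workload replaces the actual splicing analysis, and the dictionary keys are illustrative:

```python
import resource
import time

def profile(job):
    """Run `job` and report wall time, user CPU time, and peak RSS."""
    wall0 = time.perf_counter()
    cpu0 = resource.getrusage(resource.RUSAGE_SELF).ru_utime
    job()
    usage = resource.getrusage(resource.RUSAGE_SELF)
    return {
        "wall_s": time.perf_counter() - wall0,
        "cpu_s": usage.ru_utime - cpu0,
        "peak_rss_kb": usage.ru_maxrss,  # kilobytes on Linux
    }

stats = profile(lambda: sum(i * i for i in range(10**6)))
print(sorted(stats))
```

For external tools such as rMATS or MISO, the same idea applies with the job run as a child process and `RUSAGE_CHILDREN` sampled instead, which is how job-level comparisons across VM sizes are typically gathered.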


2021 ◽  
Vol 6 (1) ◽  
pp. 103
Author(s):  
Hardiyan Kesuma Ramadhan ◽  
Sukma Wardhana

In the digital era, and with the outbreak of the COVID-19 pandemic, most activities have moved online. If the number of users accessing a server exceeds what the IT infrastructure can handle, the server goes down, so a load balancer is required to distribute the request load. This study compares four algorithms on a Citrix ADC VPX load balancer using GNS3: round-robin, least connection, least response time, and least packets. Measurements of response time and throughput show that the least connection algorithm is superior, with a 33% reduction in response time and a 53% increase in throughput. On the service-hits parameter, the round-robin algorithm has the most even traffic distribution, while least packets is superior in CPU utilization, with a 76% reduction. Thus the algorithm with the best response time and throughput is least connection, and the algorithm with the best service hits is round-robin. For large-scale deployments, the least connection algorithm is recommended where response time and throughput matter most; where the most even distribution is the priority, round-robin should be used.
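The two selection rules that come out on top above are simple to state in code. Server names and connection counts here are illustrative:

```python
from itertools import cycle

# Hypothetical backend pool: server -> current active connections.
servers = {"srv1": 5, "srv2": 2, "srv3": 7}

def least_connection(conns: dict) -> str:
    """Send the next request to the server with the fewest connections."""
    return min(conns, key=conns.get)

rr = cycle(servers)  # round-robin simply rotates through the pool

print(least_connection(servers), next(rr), next(rr))  # → srv2 srv1 srv2
```

Round-robin ignores server state, which is why it evens out *hit counts*, while least connection adapts to uneven request durations, which is why it wins on response time and throughput in the study.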


2021 ◽  
Vol 4 (4) ◽  
pp. 526-546
Author(s):  
Sunday Samuel Olofintuyi ◽  
Temidayo Oluwatosin Omotehinwa ◽  
Joshua Segun Owotogbe

Quite a number of scheduling algorithms have been implemented in the past, including First Come First Served (FCFS), Shortest Job First (SJF), Priority, and Round Robin (RR). RR often performs better than the others because of the fairness afforded by its quantum time. Despite this, choosing the quantum time remains a major challenge: when the quantum is too large, RR degenerates into FCFS, and when it is too short, the number of context switches between processes increases. This paper therefore provides a descriptive review of algorithms implemented over the past 10 years with various quantum times to optimize CPU utilization. The review opens further research areas, serves as a reference source articulating the algorithms used in previous years, and can guide future work. It further suggests novel hybridization and ensembles of two or more techniques to improve CPU performance by decreasing the number of context switches, turnaround time, waiting time, and response time, thereby increasing overall throughput and CPU utilization.
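The quantum trade-off described above can be made concrete with a small RR simulation that reports both average waiting time and dispatch count (an upper bound on context switches). The workload and quantum are illustrative; all processes are assumed to arrive at t=0:

```python
from collections import deque

def round_robin(bursts: list, quantum: int):
    """Simulate RR; return (average waiting time, number of dispatches)."""
    remaining = bursts[:]
    finish = [0] * len(bursts)
    queue = deque(range(len(bursts)))
    t = dispatches = 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])   # run one quantum (or less)
        t += run
        remaining[i] -= run
        if remaining[i]:
            queue.append(i)                # not done: back of the queue
        else:
            finish[i] = t
        dispatches += 1
    waits = [finish[i] - bursts[i] for i in range(len(bursts))]
    return sum(waits) / len(waits), dispatches

print(round_robin([24, 3, 3], quantum=4))
```

Rerunning with a quantum of 24 reproduces FCFS behavior (3 dispatches, but the short jobs wait behind the long one), while a quantum of 1 inflates the dispatch count to 30, which is exactly the tension the reviewed work tries to optimize.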


Energies ◽  
2021 ◽  
Vol 14 (7) ◽  
pp. 1881
Author(s):  
Alexandre Lucas ◽  
Dimitrios Geneiatakis ◽  
Yannis Soupionis ◽  
Igor Nai-Fovino ◽  
Evangelos Kotsakis

Demand response (DR) services have the potential to enable large penetration of renewable energy by adjusting load consumption, thus providing balancing support to the grid. The success of such load flexibility provided by industry, communities, or prosumers, and its integration in electricity markets, will depend on a redesign and adaptation of the current interactions between participants. New challenges are, however, bound to appear with the large-scale contribution of smaller assets to flexibility, including, among others, dispatch coordination, validation of delivery of the DR provision, and the corresponding settlement of contracts, while assuring secure data access among interested parties. In this study we applied distributed ledger (DLT)/blockchain technology to securely track DR provision, focusing on the validation aspect and assuring data integrity, origin, fast registry, and sharing within a permissioned system between all relevant parties (including transmission system operators (TSOs), aggregators, distribution system operators (DSOs), balance responsible parties (BRPs), and prosumers). We propose a framework for a DR registry and implemented it as a proof of concept on Hyperledger Fabric, using real assets in a laboratory environment, in order to study its feasibility and performance. The lab setup includes a 450 kW energy storage system scheduled to provide DR services upon a system operator request; the corresponding validations and verifications are performed, followed by publication on the blockchain. Results show the end-to-end execution time remained below 1 s at rates below 32 requests/s. Smart contract memory utilization did not surpass 1% for both active and passive nodes, peer CPU utilization remained below 5% in all cases simulated (3, 10, and 28 nodes), and smart contract CPU utilization remained stable, below 1% in all cases.
The performance of the implementation showed scalable results, which enables real-world adoption of DLT in supporting the development of flexibility markets, with the advantages of blockchain technology.
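The tamper-evident, append-only property a ledger lends to a DR registry can be illustrated conceptually: each record embeds the hash of its predecessor, so altering history breaks the chain. This is a toy sketch with made-up field names, not Hyperledger Fabric:

```python
import hashlib
import json

def append_record(chain: list, payload: dict) -> list:
    """Append a DR validation record linked to the previous one by hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64  # genesis sentinel
    body = json.dumps({"prev": prev, **payload}, sort_keys=True)
    chain.append({
        "prev": prev,
        "payload": payload,
        "hash": hashlib.sha256(body.encode()).hexdigest(),
    })
    return chain

chain = []
append_record(chain, {"asset": "ess-450kW", "dr_kw": 120})
append_record(chain, {"asset": "ess-450kW", "dr_kw": 80})
print(chain[1]["prev"] == chain[0]["hash"])  # → True
```

In the paper's setting, the equivalent linkage and the endorsement of records by multiple peers are what let TSOs, DSOs, aggregators, and prosumers share validated DR delivery data without trusting a single registry operator.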

