Mean Waiting Time in Large-Scale and Critically Loaded Power of d Load Balancing Systems

Author(s):  
Tim Hellemans ◽  
Benny Van Houdt

Mean field models are a popular tool for analysing load balancing policies. In some exceptional cases the waiting time distribution of the mean field limit has an explicit form; in other cases it can be computed as the solution of a set of differential equations. In this paper we study the limit of the mean waiting time E[W_λ] as the arrival rate λ approaches 1 for a number of load balancing policies in a large-scale system of homogeneous servers that process work at a constant rate equal to one, with exponential job sizes of mean 1 (i.e. when the system gets close to instability). As E[W_λ] diverges to infinity, we scale with -log(1-λ) and present a method to compute the limit lim_{λ→1^-} E[W_λ]/(-log(1-λ)). We show that this limit has a surprisingly simple form for the load balancing algorithms considered. More specifically, we present a general result that holds for any policy whose associated differential equation satisfies a list of assumptions. For the well-known LL(d) policy, which assigns an incoming job to the server with the least work left among d randomly selected servers, these assumptions are trivially verified, and we prove that the limit equals 1/(d-1). We further show that the LL(d,K) policy, which assigns batches of K jobs to the K least loaded servers among d randomly selected servers, satisfies the assumptions and that its limit equals K/(d-K). For a policy which applies LL(d_i) with probability p_i, we show that the limit is 1/(∑_i p_i d_i - 1). We further indicate that our main result can also be used for load balancers with redundancy or memory. In addition, we propose an alternate scaling -log(p_λ) instead of -log(1-λ), where p_λ is adapted to the policy at hand, such that lim_{λ→1^-} E[W_λ]/(-log(1-λ)) = lim_{λ→1^-} E[W_λ]/(-log(p_λ)), while the limit lim_{λ→0^+} E[W_λ]/(-log(p_λ)) is well defined and non-zero (contrary to lim_{λ→0^+} E[W_λ]/(-log(1-λ))). This allows one to obtain relatively flat curves for E[W_λ]/(-log(p_λ)) for λ ∈ [0,1], which indicates that the low and high load limits can be used as approximations when λ is close to zero or one. Our results rely on the earlier proven ansatz which asserts that, for certain load balancing policies, the workloads of any finite set of queues become independent of one another as the number of servers tends to infinity.
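The paper gives no code; the following Python sketch only illustrates the LL(d) policy described above and compares the simulated ratio E[W_λ]/(-log(1-λ)) against the predicted limit 1/(d-1). The server count, job count, and the simple event-driven drain step are illustrative assumptions, not part of the paper.

```python
import math
import numpy as np

def simulate_lld(n_servers=500, d=2, lam=0.95, n_jobs=200_000, seed=0):
    """Crude simulation of LL(d): each arrival samples d servers uniformly at
    random and joins the one with the least work left. Servers drain work at
    rate 1; job sizes are Exp(1) with mean 1."""
    rng = np.random.default_rng(seed)
    workload = np.zeros(n_servers)        # remaining work at each server
    total_wait = 0.0
    for _ in range(n_jobs):
        # arrivals form a Poisson process with total rate n_servers * lam
        dt = rng.exponential(1.0 / (n_servers * lam))
        np.maximum(workload - dt, 0.0, out=workload)   # every server drains dt work
        choices = rng.choice(n_servers, size=d, replace=False)
        target = choices[np.argmin(workload[choices])]
        total_wait += workload[target]                 # waiting time = work ahead of the job
        workload[target] += rng.exponential(1.0)       # add the job's Exp(1) size
    return total_wait / n_jobs

if __name__ == "__main__":
    d = 2
    for lam in (0.90, 0.95, 0.99):
        ratio = simulate_lld(d=d, lam=lam) / -math.log(1.0 - lam)
        print(f"lambda={lam}: E[W]/(-log(1-lam)) ~ {ratio:.3f}  "
              f"(predicted limit 1/(d-1) = {1 / (d - 1):.3f})")
```

For a finite number of servers the simulated ratio only approximates the mean field limit, so some deviation from 1/(d-1) is expected, especially for λ very close to 1.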


Author(s):  
Shailendra Raghuvanshi ◽  
Priyanka Dubey

Load balancing of non-preemptive independent tasks on virtual machines (VMs) is an important aspect of task scheduling in clouds. Whenever certain VMs are overloaded and the remaining VMs are underloaded with tasks to process, the load has to be balanced to achieve optimal machine utilization. In this paper, we propose an algorithm named honey bee behavior inspired load balancing, which aims to achieve a well-balanced load across virtual machines in order to maximize throughput. The proposed algorithm also balances the priorities of tasks on the machines in such a way that the waiting time of tasks in the queue is minimal. We have compared the proposed algorithm with existing load balancing and scheduling algorithms. The experimental results, obtained with the WorkflowSim simulator in Java, show that the algorithm is effective compared with existing algorithms, with a significant improvement in average execution time and a reduction in the waiting time of tasks in the queue.
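The abstract does not specify the algorithm's details; the Python sketch below only illustrates the general honey-bee idea it describes, namely migrating tasks from overloaded to underloaded VMs while keeping task priorities spread out. The VM class, threshold, and tie-breaking rule are hypothetical choices made for illustration, not the authors' algorithm.

```python
from dataclasses import dataclass, field

@dataclass
class VM:
    capacity: float                              # processing capacity (arbitrary units)
    tasks: list = field(default_factory=list)    # list of (length, priority) pairs

    def load(self) -> float:
        return sum(length for length, _ in self.tasks) / self.capacity

def honey_bee_balance(vms, threshold=0.2):
    """Move tasks from overloaded to underloaded VMs, honey-bee style: a removed
    task acts like a scout bee and prefers a destination that is lightly loaded
    and has few tasks of equal or higher priority, so priorities stay spread out."""
    mean_load = sum(vm.load() for vm in vms) / len(vms)
    overloaded = [vm for vm in vms if vm.load() > mean_load + threshold]
    underloaded = [vm for vm in vms if vm.load() < mean_load - threshold]
    for src in overloaded:
        while src.load() > mean_load and src.tasks and underloaded:
            src.tasks.sort(key=lambda t: t[1])   # migrate the lowest-priority task first
            task = src.tasks.pop(0)
            dst = min(underloaded,
                      key=lambda vm: (vm.load(),
                                      sum(1 for _, p in vm.tasks if p >= task[1])))
            dst.tasks.append(task)
    return vms
```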


2021 ◽  
Vol 108 (Supplement_2) ◽  
Author(s):  
Z Hayat ◽  
E Kinene ◽  
S Molloy

Abstract Introduction: Reduction of waiting times is key to delivering high-quality, efficient health care. Delays experienced by patients requiring radiographs in orthopaedic outpatient clinics are well recognised. Method: To establish current patient and staff satisfaction, questionnaires were circulated over a two-week period. Waiting time data were collected retrospectively, including appointment time, arrival time and the time at which radiographs were taken. Results: 84% (n = 16) of radiographers believed patients would be dissatisfied. However, of the 296 patients questioned, 56% (n = 165) were satisfied. Most patients (89%) felt the waiting time should be under 30 minutes, yet only 36% were seen within this time frame. There was a moderate negative correlation (R = -0.5): longer waiting times were associated with greater dissatisfaction. The mean waiting time was 37 minutes and the maximum 2 hours 48 minutes. Key contributing factors included the volume of patients, staff shortages (73.7%), equipment shortages (57.9%) and incorrectly filled request forms. Eight (42.1%) of the radiographers had felt unwell from work-related stress. Conclusions: A concerted effort is needed to improve staff and patient opinion. There is scope for change post COVID. Additional training and exploring ways to avoid overburdening the department would be beneficial. Numerous patients were open to different days or alternative sites. Funding requirements make updating equipment, expanding the department and recruiting more staff challenging.


2002 ◽  
Vol 18 (3) ◽  
pp. 611-618
Author(s):  
Markus Torkki ◽  
Miika Linna ◽  
Seppo Seitsalo ◽  
Pekka Paavolainen

Objectives: Potential problems in waiting list management are often monitored using mean waiting times based on empirical samples. However, the appropriateness of mean waiting time as an indicator of access can be questioned if a waiting list is not managed well, e.g., if the queue discipline is violated. This study was performed to examine the queue discipline in waiting lists for elective surgery and to reveal potential discrepancies in waiting list management. Methods: There were 1,774 patients on waiting lists for hallux valgus surgery, varicose vein surgery, or sterilization. The waiting time distributions of patients receiving surgery and of patients still waiting for an operation are presented in column charts. The charts are compared with two model charts: one represents a high queue discipline (first in, first out) and the other a poor queue discipline (random order). Results: There were significant differences in waiting list management across hospitals and patient categories. Examples of poor queue discipline were found in the queues for hallux valgus and varicose vein operations. Conclusions: Routine waiting list reporting should be used to guarantee the quality of waiting list management and to pinpoint potential problems in access. It is important to monitor not only the number of patients on the waiting list but also the queue discipline and the balance between demand and supply of surgical services. The purpose of this type of reporting is to ensure that the priority setting made at the health policy level also works in practice.
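As a toy illustration of the two model charts contrasted above (first in, first out versus random selection from the list), the Python sketch below uses made-up arrival and operation rates. It shows how a random queue discipline can leave some patients waiting far longer even when the mean waiting time of operated patients looks similar, which is the paper's point about the mean being a weak indicator on its own.

```python
import random

def simulate_waiting_list(discipline="fifo", weeks=200, arrivals_per_week=10,
                          operations_per_week=9, seed=1):
    """Toy waiting-list model: each week a fixed number of patients join and a
    slightly smaller number are operated on, chosen either strictly in arrival
    order ('fifo') or uniformly at random ('random'). Returns the waiting times
    of operated patients and of those still waiting at the end."""
    rng = random.Random(seed)
    waiting = []                 # arrival week of each patient still on the list
    operated_waits = []
    for week in range(weeks):
        waiting.extend([week] * arrivals_per_week)
        for _ in range(min(operations_per_week, len(waiting))):
            idx = 0 if discipline == "fifo" else rng.randrange(len(waiting))
            operated_waits.append(week - waiting.pop(idx))
    still_waiting = [weeks - arrival for arrival in waiting]
    return operated_waits, still_waiting

if __name__ == "__main__":
    for disc in ("fifo", "random"):
        done, pending = simulate_waiting_list(discipline=disc)
        print(f"{disc:6s}: mean wait of operated = {sum(done) / len(done):5.1f} weeks, "
              f"longest wait still on list = {max(pending):3d} weeks")
```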


Author(s):  
Hitomi Tamura ◽  
Masato Uchida ◽  
Masato Tsuru ◽  
Jun'ichi Shimada ◽  
Takeshi Ikenaga ◽  
...  

2017 ◽  
Vol 2017 (2) ◽  
pp. 74-94 ◽  
Author(s):  
Aaron Johnson ◽  
Rob Jansen ◽  
Nicholas Hopper ◽  
Aaron Segal ◽  
Paul Syverson

Abstract We present PeerFlow, a system to securely load balance client traffic in Tor. Security in Tor requires that no adversary handle too much traffic. However, Tor relays are run by volunteers who cannot be trusted to report the relay bandwidths, which Tor clients use for load balancing. We show that existing methods to determine the bandwidths of Tor relays allow an adversary with little bandwidth to attack large amounts of client traffic. These methods include Tor’s current bandwidth-scanning system, TorFlow, and the peer-measurement system EigenSpeed. We present an improved design called PeerFlow that uses a peer-measurement process both to limit an adversary’s ability to increase his measured bandwidth and to improve accuracy. We show our system to be secure, fast, and efficient. We implement PeerFlow in Tor and demonstrate its speed and accuracy in large-scale network simulations.
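PeerFlow's actual protocol is considerably more involved than can be shown here; the snippet below is only a hedged illustration of the underlying idea the abstract describes, namely bounding a relay's self-reported bandwidth by what measuring peers observed. The function name, median aggregation, and inflation cap are illustrative assumptions, not PeerFlow's design.

```python
from statistics import median

def capped_weight(self_reported: float, peer_measurements: list[float],
                  inflation_cap: float = 1.25) -> float:
    """Illustrative only (not PeerFlow's protocol): a relay's load-balancing
    weight is its self-reported bandwidth, capped at a fixed multiple of the
    median bandwidth that measuring peers observed. This bounds how much a
    lying relay with little real bandwidth can inflate its weight."""
    if not peer_measurements:
        return 0.0
    peer_estimate = median(peer_measurements)
    return min(self_reported, inflation_cap * peer_estimate)

# Example: a relay claims 100 MB/s, but peers only saw about 8 MB/s of traffic.
print(capped_weight(100.0, [7.5, 8.0, 9.0]))   # -> 10.0, not 100.0
```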


1981 ◽  
Vol 11 (1) ◽  
pp. 99-104 ◽  
Author(s):  
C. H. Meng

The purpose of this study is to develop analytical formulae for special queuing situations which occur during the operation of the felling and processing devices of a tree harvester, and of the pickup and processing devices of a tree processor. The analytical formulae are used to estimate mean waiting time and mean idle time in two cases: in case 1, both "input" times and processing times are normally distributed; in case 2, "input" times are normally distributed and processing times are Poisson distributed. "Input" time is a term used for convenience to denote the time required to fell a tree with a harvester or to pick up a tree with a processor. Methods for choosing distributions to represent "input" times and processing times are provided. In addition, two examples using historical data demonstrate the application of the analytical formulae.
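The paper derives closed-form formulae; as a rough cross-check, the same quantities can also be estimated by Monte Carlo. The Python sketch below assumes a simplified coupling in which the felling (or pickup) device and the processing device synchronize at each hand-off, which is not necessarily the paper's exact model, and all parameter values are illustrative.

```python
import random

def simulate_two_device_cycle(n_cycles=100_000, input_mean=0.8, input_sd=0.2,
                              proc_mean=1.0, proc_sd=0.3, seed=2):
    """Monte Carlo sketch under an assumed, simplified coupling: in each cycle
    the felling device fells the next tree while the processing device handles
    the previous one, and the two synchronize at the hand-off. The felled tree
    waits if processing is slower; the processor idles if felling is slower.
    Both times are normally distributed (case 1), truncated at zero."""
    rng = random.Random(seed)
    total_wait = total_idle = 0.0
    for _ in range(n_cycles):
        input_time = max(0.0, rng.gauss(input_mean, input_sd))   # time to fell / pick up a tree
        proc_time = max(0.0, rng.gauss(proc_mean, proc_sd))      # time to process a tree
        total_wait += max(0.0, proc_time - input_time)           # tree waits for the processor
        total_idle += max(0.0, input_time - proc_time)           # processor waits for the next tree
    return total_wait / n_cycles, total_idle / n_cycles

if __name__ == "__main__":
    mean_wait, mean_idle = simulate_two_device_cycle()
    print(f"mean waiting time per cycle ~ {mean_wait:.3f}, mean idle time per cycle ~ {mean_idle:.3f}")
```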


Author(s):  
Gengbin Zheng ◽  
Abhinav Bhatelé ◽  
Esteban Meneses ◽  
Laxmikant V. Kalé

Large parallel machines with hundreds of thousands of processors are becoming more prevalent. Ensuring good load balance is critical for scaling certain classes of parallel applications on even thousands of processors. Centralized load balancing algorithms suffer from scalability problems, especially on machines with a relatively small amount of memory. Fully distributed load balancing algorithms, on the other hand, tend to take longer to arrive at good solutions. In this paper, we present an automatic dynamic hierarchical load balancing method that overcomes the scalability challenges of centralized schemes and longer running times of traditional distributed schemes. Our solution overcomes these issues by creating multiple levels of load balancing domains which form a tree. This hierarchical method is demonstrated within a measurement-based load balancing framework in Charm++. We discuss techniques to deal with scalability challenges of load balancing at very large scale. We present performance data of the hierarchical load balancing method on up to 16,384 cores of Ranger (at the Texas Advanced Computing Center) and 65,536 cores of Intrepid (the Blue Gene/P at Argonne National Laboratory) for a synthetic benchmark. We also demonstrate the successful deployment of the method in a scientific application, NAMD, with results on Intrepid.
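The following Python sketch is not the Charm++ implementation; it only illustrates the hierarchical idea described above: balance load within small groups first, then let the root move work between groups using only per-group aggregates, so no single level has to look at every processor. The group size and the idealized equalization step are illustrative assumptions.

```python
import statistics

def hierarchical_balance(loads, group_size=4):
    """Two-level sketch of hierarchical load balancing: processors are split
    into groups forming the leaves of a tree. Each group first equalizes load
    internally; only one aggregate number per group travels up to the root,
    which then shifts work between groups."""
    groups = [loads[i:i + group_size] for i in range(0, len(loads), group_size)]

    # Level 0: balance within each group, without looking outside the group.
    balanced = [[sum(g) / len(g)] * len(g) for g in groups]

    # Level 1: the root sees only per-group totals and shifts work between groups.
    totals = [sum(g) for g in balanced]
    target = sum(totals) / len(totals)
    for gi, g in enumerate(balanced):
        shift_per_proc = (target - totals[gi]) / len(g)
        balanced[gi] = [x + shift_per_proc for x in g]

    return [x for g in balanced for x in g]

if __name__ == "__main__":
    loads = [9, 1, 4, 2, 8, 8, 7, 3, 1, 1, 2, 2]
    out = hierarchical_balance(loads)
    print("before:", loads)
    print("after :", [round(x, 2) for x in out], " stdev:", round(statistics.stdev(out), 3))
```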

