THROUGHPUT AND BOTTLENECK ANALYSIS OF TANDEM QUEUES WITH NESTED SESSIONS

2017 ◽  
Vol 32 (3) ◽  
pp. 396-408
Author(s):  
A. Hristov ◽  
J.W. Bosman ◽  
R.D. van der Mei ◽  
S. Bhulai

Various types of systems across a broad range of disciplines contain tandem queues with nested sessions. The strong dependence between the servers makes such networks complicated and difficult to study, and exact analysis is intractable in most cases. Moreover, even when performance metrics such as the saturation throughput and the utilization rates of the servers are known, determining the limiting factor of such a network can be far from trivial. In our work, we present a simple, tractable and nevertheless relatively accurate method for approximating these performance measures for any server in a given network. In addition, we propose an extension of the intuitive “slowest server rule” for identifying the bottleneck, and show through extensive numerical experiments that this method works very well.
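
As a point of reference for the classical rule that the paper extends, a minimal sketch is given below; the service rates and visit ratios are hypothetical, and the calculation deliberately ignores the nested-session dependence that makes the actual problem hard.

```python
# Minimal illustration of a "slowest server rule" style bottleneck check.
# Rates and visit ratios below are hypothetical, not taken from the paper.

service_rates = {"web": 120.0, "app": 80.0, "db": 95.0}   # jobs/sec each server can process
visit_ratios  = {"web": 1.0,  "app": 1.0,  "db": 2.0}     # visits per end-to-end session

# Offered work per session at each server; the largest demand limits throughput.
demand = {s: visit_ratios[s] / service_rates[s] for s in service_rates}
bottleneck = max(demand, key=demand.get)
saturation_throughput = 1.0 / demand[bottleneck]          # sessions/sec upper bound

print(f"bottleneck: {bottleneck}, saturation throughput <= {saturation_throughput:.1f} sessions/s")
```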

Author(s):  
Nadja Yang Meng ◽  
Karthikeyan K

Performance benchmarking and performance measurement are fundamental principles of performance improvement in the business sector. For businesses to improve their performance in today's competitive environment, it is essential to know how to measure performance levels, which also includes predicting how the business will perform after a change has been made. Once an improvement has been introduced, the affected processes have to be re-evaluated. Performance measurements are also fundamental for comparing performance levels between corporations: businesses benchmark themselves against the best practices in their industry using suitable performance measures, so it is valuable if similar businesses apply the same collection of performance metrics. In this paper, the NETIAS performance measurement framework is applied to evaluate business performance by producing a generic collection of performance metrics that businesses can use to compare and measure their organizational activities.


Sensors ◽  
2019 ◽  
Vol 19 (22) ◽  
pp. 5019 ◽  
Author(s):  
Jiang ◽  
Chalich ◽  
Deen

Positron emission tomography (PET) imaging is an essential tool in clinical applications for the diagnosis of diseases due to its ability to acquire functional images to help differentiate between metabolic and biological activities at the molecular level. One key limiting factor in the development of efficient and accurate PET systems is the sensor technology in the PET detector. There are generally four types of sensor technologies employed: photomultiplier tubes (PMTs), avalanche photodiodes (APDs), silicon photomultipliers (SiPMs), and cadmium zinc telluride (CZT) detectors. PMTs were widely used for PET applications in the early days due to their excellent performance metrics of high gain, low noise, and fast timing. However, the fragility and bulkiness of the PMT glass tubes, high operating voltage, and sensitivity to magnetic fields ultimately limit this technology for future cost-effective and multi-modal systems. As a result, solid-state photodetectors like the APD, SiPM, and CZT detectors, and their applications for PET systems, have attracted lots of research interest, especially owing to the continual advancements in the semiconductor fabrication process. In this review, we study and discuss the operating principles, key performance parameters, and PET applications for each type of sensor technology with an emphasis on SiPM and CZT detectors—the two most promising types of sensors for future PET systems. We also present the sensor technologies used in commercially available state-of-the-art PET systems. Finally, the strengths and weaknesses of these four types of sensors are compared and the research challenges of SiPM and CZT detectors are discussed and summarized.


2019 ◽  
Vol 2 (2) ◽  
pp. 44-56 ◽  
Author(s):  
Amjad Hudaib ◽  
Layla Albdour

Due to the centralized nature of cloud computing, among other reasons, it cannot support the high mobility and low latency required by applications such as the Internet of Things (IoT), which need real-time responses and mobility support. Fog computing is a good solution for satisfying such requirements: the edges of the network are used for service provisioning instead of distant datacenters located in clouds. Low-latency response is the most attractive property of fog computing, and it is well suited to the multi-billion IoT devices, sensors and actuators that generate huge amounts of data requiring processing and analysis for smart decision making. The main objective of this article is to show the superiority of fog computing over cloud-only computing. The authors present a patient monitoring system as a simulation case study and evaluate the performance of the system using latency, network usage, power consumption, cost of execution and simulation execution time as performance metrics. The results show that fog computing is superior to the cloud-only paradigm in all performance measurements.
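
As a back-of-the-envelope illustration of why edge placement lowers response time, a toy latency model is sketched below; the hop delays and processing times are illustrative assumptions, not results from the case study.

```python
# Rough latency comparison for processing a sensor reading at a nearby fog
# node versus a remote cloud datacenter. All figures are illustrative
# assumptions, not numbers from the patient monitoring simulation.

def round_trip_latency_ms(hop_delays_ms, processing_ms):
    """Two-way network delay over the listed hops plus processing time."""
    return 2 * sum(hop_delays_ms) + processing_ms

fog_latency   = round_trip_latency_ms([2.0],       processing_ms=5.0)  # sensor -> gateway/fog node
cloud_latency = round_trip_latency_ms([2.0, 60.0], processing_ms=3.0)  # sensor -> gateway -> WAN -> cloud

print(f"fog:   {fog_latency:.1f} ms")
print(f"cloud: {cloud_latency:.1f} ms")
```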


2021 ◽  
Vol 349 ◽  
pp. 01008
Author(s):  
Nikolaos A. Fountas ◽  
Ioannis Papantoniou ◽  
John D. Kechagias ◽  
Dimitrios E. Manolakos ◽  
Nikolaos M. Vaxevanidis

The properties of fused deposition modeling (FDM) products exhibit a strong dependence on process parameters and may be improved by setting suitable levels for those parameters. The anisotropic and brittle nature of 3D-printed components makes it essential to investigate the effect of FDM control parameters on strength-related performance metrics in order to improve the strength of functional parts. In this work, the flexural strength of polyethylene terephthalate glycol (PET-G) is examined by altering the levels of several 3D-printing parameters: layer height, infill density, deposition angle, printing speed and printing temperature. A response surface experiment with 27 runs was established to obtain the flexural strength results (MPa) and to investigate the effect of each control parameter on the response through statistical analysis. The experiments were conducted according to the ASTM D790 standard. The regression model generated for flexural strength adequately explains the effect of the FDM control parameters on flexural strength and can therefore be used to find optimal parameter settings with either an intelligent algorithm or a neural network.
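
For illustration only, a minimal sketch of fitting a second-order response-surface model by ordinary least squares is given below; the two factors, their ranges and the 27 synthetic observations are placeholders, not the paper's experimental data.

```python
# Hypothetical sketch of fitting a quadratic response-surface model
#   strength ~ b0 + b1*h + b2*d + b11*h^2 + b22*d^2 + b12*h*d
# for two FDM factors (layer height h, infill density d) over 27 runs.
import numpy as np

rng = np.random.default_rng(0)
h = rng.uniform(0.1, 0.3, size=27)        # layer height (mm), synthetic
d = rng.uniform(20.0, 100.0, size=27)     # infill density (%), synthetic
strength = 40 + 30 * d / 100 - 50 * (h - 0.2) ** 2 + rng.normal(0, 1.0, 27)  # fake response (MPa)

# Design matrix for the quadratic model, fitted by ordinary least squares.
X = np.column_stack([np.ones_like(h), h, d, h**2, d**2, h * d])
coef, *_ = np.linalg.lstsq(X, strength, rcond=None)

# The fitted surface can then be searched for optimal parameter settings.
print("fitted coefficients:", np.round(coef, 3))
```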


2014 ◽  
Vol 7 (5) ◽  
pp. 6549-6627
Author(s):  
M. Righi ◽  
V. Eyring ◽  
K.-D. Gottschaldt ◽  
C. Klinger ◽  
F. Frank ◽  
...  

Abstract. Four simulations with the ECHAM/MESSy Atmospheric Chemistry (EMAC) model have been evaluated with the Earth System Model Validation Tool (ESMValTool) to identify differences in simulated ozone and selected climate parameters that resulted from (i) different setups of the EMAC model (nudged vs. free-running) and (ii) different boundary conditions (emissions, sea surface temperatures (SSTs) and sea-ice concentrations (SICs)). To assess the relative performance of the simulations, quantitative performance metrics are calculated consistently for the climate parameters and ozone. This is important for the interpretation of the evaluation results since biases in climate can affect biases in chemistry and vice versa. The observational datasets used for the evaluation include ozonesonde and aircraft data, meteorological reanalyses and satellite measurements. The results from a previous EMAC evaluation of a model simulation with weak nudging towards realistic meteorology in the troposphere have been compared to new simulations with different model setups and updated emission datasets in free-running timeslice and nudged Quasi Chemistry-Transport Model (QCTM) mode. The latter two configurations are particularly important for chemistry-climate projections and for the quantification of individual sources (e.g. the transport sector) that lead to small chemical perturbations of the climate system, respectively. With the exception of some specific features which are detailed in this study, no large differences that could be related to the different setups of the EMAC simulations (nudged vs. free-running) were found, which offers the possibility to evaluate and improve the overall model with the help of shorter nudged simulations. The main difference between the two setups is a better representation of the tropospheric and stratospheric temperature in the nudged simulations, which also better reproduce stratospheric water vapour concentrations, due to the improved simulation of the temperature in the tropical tropopause layer. Ozone and ozone precursor concentrations, on the other hand, are very similar in the different model setups if similar boundary conditions are used. Different boundary conditions, however, lead to relevant differences in the four simulations. SSTs and SICs, which are prescribed in all simulations, play a key role in the representation of the ozone hole, which is significantly underestimated in some experiments. A bias that is present in all simulations is an overestimation of tropospheric column ozone, which is significantly reduced when lower lightning emissions of nitrogen oxides are used. To further investigate other possible reasons for this bias, two sensitivity simulations were performed, one with an updated scavenging routine and one with the addition of a newly proposed HNO3-forming channel of the HO2 + NO reaction. The update in the scavenging routine resulted in a slightly better representation of ozone compared to the reference simulation. The introduction of the new HNO3-forming channel significantly reduced this bias. Therefore, including the new reaction rate could potentially be important for a realistic simulation of tropospheric ozone, although laboratory experiments and other model studies need to confirm this hypothesis, and some modifications to the rate, which has a strong dependence on water vapour, might still be needed.
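
For readers unfamiliar with such diagnostics, a toy example of two simple performance metrics (mean bias and RMSE) between a simulated and an observed ozone profile is sketched below; the values are placeholders, and ESMValTool's actual metrics are computed differently and at scale.

```python
# Illustrative calculation of mean bias and RMSE between a simulated and an
# observed ozone profile. The numbers are placeholders, not EMAC or
# observational data, and this is not the ESMValTool implementation.
import numpy as np

pressure_hpa = np.array([1000, 850, 700, 500, 300, 200, 100])
o3_simulated = np.array([ 32.,  38.,  45.,  60.,  90., 150., 800.])   # ppbv, fake
o3_observed  = np.array([ 30.,  40.,  48.,  58.,  95., 160., 780.])   # ppbv, fake

bias = np.mean(o3_simulated - o3_observed)
rmse = np.sqrt(np.mean((o3_simulated - o3_observed) ** 2))
print(f"mean bias = {bias:.1f} ppbv, RMSE = {rmse:.1f} ppbv")
```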


Author(s):  
Edson Pindza ◽  
Jules Clement Mba ◽  
Eben Maré ◽  
Désirée Moubandjo

Abstract: Evolution equations containing fractional derivatives can provide suitable mathematical models for describing important physical phenomena. In this paper, we propose an accurate method for numerical solutions of multi-dimensional time-fractional heat equations. The proposed method is based on a fractional exponential integrator scheme in time and the Lagrange regularized kernel method in space. Numerical experiments show the effectiveness of the proposed approach.
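
For context, a minimal reference solver for a 1D time-fractional heat equation is sketched below using the standard explicit L1 discretization of the Caputo derivative; this is deliberately not the fractional exponential integrator or Lagrange regularized kernel method proposed in the paper, only a simpler textbook scheme to make the problem setting concrete.

```python
# Minimal, illustrative solver for the 1D time-fractional heat equation
#   D_t^alpha u = kappa * u_xx,  0 < alpha < 1 (Caputo derivative),
# using the explicit L1 time discretization and central differences in space.
import numpy as np
from math import gamma

alpha, kappa = 0.7, 1.0
nx, dx, dt, nsteps = 21, 0.05, 5e-5, 200

x = np.linspace(0.0, 1.0, nx)
u0 = np.sin(np.pi * x)                   # initial condition, zero Dirichlet boundaries
history = [u0.copy()]                    # the L1 scheme needs the full solution history

c = gamma(2.0 - alpha) * dt**alpha * kappa / dx**2
b = [(j + 1) ** (1 - alpha) - j ** (1 - alpha) for j in range(nsteps + 1)]

for n in range(nsteps):
    lap = np.zeros(nx)
    lap[1:-1] = history[n][2:] - 2 * history[n][1:-1] + history[n][:-2]
    # memory term of the Caputo derivative (weighted sum over past increments)
    mem = np.zeros(nx)
    for j in range(1, n + 1):
        mem += b[j] * (history[n + 1 - j] - history[n - j])
    u_new = history[n] - mem + c * lap
    u_new[0] = u_new[-1] = 0.0           # Dirichlet boundary conditions
    history.append(u_new)

print("max |u| after", nsteps, "steps:", float(np.abs(history[-1]).max()))
```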


1997 ◽  
Vol 34 (1) ◽  
pp. 248-257 ◽  
Author(s):  
Ushio Sumita ◽  
Yasushi Masuda

A system of GI^X/G/∞ queues in tandem is considered where the service times of a customer are correlated but the service time vectors for customers are independently and identically distributed. It is shown that the binomial moments of the joint occupancy distribution can be generated by a sequence of renewal equations. The distribution of the joint occupancy level is then expressed in terms of the binomial moments. Numerical experiments for a two-station tandem queueing system demonstrate a somewhat counterintuitive result that the transient covariance of the joint occupancy level decreases as the covariance of the service times increases. It is also shown that the analysis is valid for a network of GI^X/SM/∞ queues.
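
A hedged Monte Carlo sketch of the kind of transient quantity studied here is given below, assuming Poisson (non-batch) arrivals and bivariate lognormal service times; both are simplifying assumptions made only for illustration, whereas the paper treats the GI^X/G/∞ setting analytically via binomial moments.

```python
# Monte Carlo estimate of the transient covariance Cov(N1(t), N2(t)) of the
# joint occupancy of a two-station tandem network of infinite-server queues.
# Poisson arrivals and bivariate lognormal service times are illustrative
# assumptions, not the paper's GI^X/G/infinity model.
import numpy as np

rng = np.random.default_rng(1)

def joint_occupancy_cov(t, lam, rho, n_reps=10000):
    """Return an estimate of Cov(N1(t), N2(t)) for service-time correlation rho."""
    cov_mat = np.array([[0.25, 0.25 * rho], [0.25 * rho, 0.25]])
    n1n2 = np.empty((n_reps, 2))
    for r in range(n_reps):
        k = rng.poisson(lam * t)                      # arrivals in (0, t]
        arrivals = rng.uniform(0.0, t, size=k)        # Poisson arrivals are uniform given k
        z = rng.multivariate_normal([0.0, 0.0], cov_mat, size=k)
        s1, s2 = np.exp(z[:, 0]), np.exp(z[:, 1])     # correlated lognormal service times
        at_station1 = t < arrivals + s1
        at_station2 = (arrivals + s1 <= t) & (t < arrivals + s1 + s2)
        n1n2[r] = at_station1.sum(), at_station2.sum()
    return np.cov(n1n2.T)[0, 1]

for rho in (0.0, 0.5, 0.9):
    print(f"rho = {rho:.1f}:  Cov(N1, N2) ~ {joint_occupancy_cov(t=2.0, lam=3.0, rho=rho):.3f}")
```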


Electronics ◽  
2021 ◽  
Vol 10 (11) ◽  
pp. 1331
Author(s):  
George K. Adam ◽  
Nikos Petrellis ◽  
Lambros T. Doulos

This work investigates the real-time performance of Linux kernels and distributions with the PREEMPT_RT real-time patch on ARM-based embedded devices. Experimental measurements, which are mainly based on heuristic methods, provide novel insights into Linux real-time performance on ARM-based embedded devices (e.g., BeagleBoard and RaspberryPi). Evaluations of the Linux real-time performance are based on specific real-time software measurement modules, developed for this purpose, and on the use of a standard benchmark tool, cyclictest. The software modules were designed around a newly introduced response task model, an innovative aspect of this work. Measurements include the latency of response tasks in user and kernel space, the response to the execution of periodic tasks, the maximum sustained frequency, and general latency performance metrics. The results show that in such systems the PREEMPT_RT patch provides better real-time performance than the default Linux kernels. Latencies, and particularly worst-case latencies, are reduced with real-time support, making devices running Linux with PREEMPT_RT more appropriate for use in time-sensitive embedded control systems and applications. Furthermore, the proposed performance measurement approach and evaluation methodology could be applied and deployed on other Linux-based real-time platforms.
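
For intuition, a rough user-space sketch of the wake-up latency that cyclictest-style tools measure is shown below; real measurements use a C tool running under SCHED_FIFO on a PREEMPT_RT kernel, which this plain Python loop does not do.

```python
# Sketch of a cyclictest-style measurement: a periodic task requests a wake-up
# every `period_us` microseconds and records how late it actually woke up.
# This only illustrates the idea of wake-up latency; it is not cyclictest.
import time

period_us = 1000            # 1 ms period
iterations = 2000
latencies_us = []

next_wakeup = time.monotonic_ns() + period_us * 1000
for _ in range(iterations):
    delay = next_wakeup - time.monotonic_ns()
    if delay > 0:
        time.sleep(delay / 1e9)            # sleep until the next scheduled activation
    now = time.monotonic_ns()
    latencies_us.append((now - next_wakeup) / 1000.0)   # how late the task woke up
    next_wakeup += period_us * 1000

latencies_us.sort()
print(f"min/avg/max latency: {latencies_us[0]:.1f} / "
      f"{sum(latencies_us) / len(latencies_us):.1f} / {latencies_us[-1]:.1f} us")
```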


Information ◽  
2021 ◽  
Vol 12 (8) ◽  
pp. 319
Author(s):  
Denys Klochkov ◽  
Jan Mulawka

The evolution of web development and web applications has resulted in the creation of numerous tools and frameworks that facilitate the development process. Even though those frameworks make web development faster and more efficient, there are certain downsides to using them. A decrease in application performance when using an “off the shelf” framework can be a crucial disadvantage, especially given the vital role web application response time plays in user experience. This contribution focuses on a particular framework: Ruby on Rails. Once the most popular framework, it has now lost its leading position, partially due to slow performance and response times, especially in larger applications. Improving and expanding upon previous work in this field, an attempt is made to improve the response time of a specially developed benchmark application. This is achieved by performing optimizations that can be roughly divided into two groups. The first group concerns frontend improvements, which include adopting client-side rendering, JavaScript Document Object Model (DOM) manipulation and asynchronous requests. The second group can be described as backend improvements, which include implementing intelligent, granular caching, disabling redundant modules, as well as profiling and optimizing database requests and reducing database access inefficiencies. These improvements decreased page loading times by up to 74% overall, with perceived application performance improving beyond this mark due to the adoption of a client-side rendering strategy. Using different metrics of application performance, each of the improvement steps is evaluated with regard to its effect on different aspects of overall performance. In conclusion, this work presents a way to significantly decrease the response time of a particular Ruby on Rails application and simultaneously provide a better user experience. Even though the majority of this process is specific to Rails, similar steps can be taken to improve applications implemented with other similar frameworks. As a result of this work, groundwork is also laid for the development of a tool that could assist developers in improving their applications.
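
As a language-agnostic illustration of the granular-caching idea (sketched in Python rather than Ruby), the snippet below memoizes an "expensive" fragment-rendering step so that repeated requests skip the costly work; the function names and timings are hypothetical and not taken from the benchmark application.

```python
# Toy demonstration of how caching a costly rendering step cuts response time.
# The sleep stands in for template rendering plus database queries.
import time
from functools import lru_cache

def render_fragment_uncached(item_id: int) -> str:
    time.sleep(0.02)                       # stand-in for rendering + DB work
    return f"<li>item {item_id}</li>"

@lru_cache(maxsize=1024)
def render_fragment_cached(item_id: int) -> str:
    return render_fragment_uncached(item_id)

def timed(render, n_requests=50):
    start = time.perf_counter()
    for _ in range(n_requests):
        render(42)                         # the same fragment requested repeatedly
    return (time.perf_counter() - start) * 1000

print(f"uncached: {timed(render_fragment_uncached):.0f} ms")
print(f"cached:   {timed(render_fragment_cached):.0f} ms")
```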

