ANALYTICAL EVALUATION OF PROCESSING TIME OF A QUERY IN AN INFORMATION SYSTEM

Author(s):  
M. M. Butaev ◽  
A. A. Tarasov

The normal distribution of a random variable is usually used in studies of the probabilistic characteristics of information systems. However, approximating distributions defined on a bounded interval by the normal distribution distorts the physical meaning of the model and the numerical results, so it can only serve as an initial approximation. The aim of the work is to improve methods for calculating the probabilistic characteristics of information systems. The object of the study is an analytical method for calculating the processing time of a query in the system. The subject of the study is the formulas for calculating the duration of sequential processing of a query by elements of the system with uniformly distributed random processing times. In deriving formulas for the probabilistic characteristics of a sum of independent uniformly distributed random variables, methods of probability theory and statistics are applied. For random variables defined only on the positive coordinate axis, it is proposed to use finite-interval distribution laws, for example the beta distribution. Formulas for the probability density and distribution functions of sums of two, three, and four independent uniformly distributed random variables are derived.
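To illustrate the kind of closed-form result described (these are the standard Irwin–Hall densities on the unit interval, not the paper's own formulas for the query-processing model), the density of the sum S_n of n independent U(0, 1) variables for n = 2 and n = 3 is:

```latex
% Irwin–Hall density of S_n = X_1 + ... + X_n with X_i ~ U(0,1) i.i.d.
% General form: f_n(x) = \frac{1}{(n-1)!}\sum_{k=0}^{\lfloor x\rfloor}(-1)^k\binom{n}{k}(x-k)^{n-1}, \quad 0 \le x \le n
f_2(x) =
\begin{cases}
x,     & 0 \le x \le 1,\\
2 - x, & 1 \le x \le 2,
\end{cases}
\qquad
f_3(x) =
\begin{cases}
\tfrac12 x^2,             & 0 \le x \le 1,\\
\tfrac12(-2x^2 + 6x - 3), & 1 \le x \le 2,\\
\tfrac12(3 - x)^2,        & 2 \le x \le 3.
\end{cases}
```

For processing times uniform on an arbitrary interval the same shapes appear after an affine rescaling, which is what makes a direct comparison against the normal approximation possible.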

Author(s):  
M. M. Butaev

The normal distribution of a random variable is usually used in studies of the probabilistic characteristics of information systems. However, using it to approximate distributions defined on a bounded interval distorts the physical meaning of the model and the numerical results, so it can only serve as an initial approximation. The purpose of the work is to improve methods for calculating the probabilistic characteristics of information systems. The object of the research is an analytical method for calculating the processing time of a query in the system; the subject is the formulas for calculating the duration of sequential processing of a query by system elements with uniformly distributed random processing times. In deriving formulas for the probabilistic characteristics of a sum of independent uniformly distributed random variables, methods of probability theory are used. For random variables defined only on the positive coordinate axis, it is proposed to use finite-interval distribution laws, for example the beta distribution. Formulas for the probability density function and cumulative distribution function of sums of two, three, and four independent uniformly distributed random variables are derived.
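A minimal numerical sketch of the point made above (the interval, stage count, and sample size are illustrative assumptions, not the paper's data): a normal approximation fitted to the sum of a few uniformly distributed stage times necessarily places probability outside the feasible interval.

```python
# Sketch: compare the sum of 3 i.i.d. uniform processing times with a
# matched normal approximation (parameters are illustrative only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
a, b, n_stages = 0.0, 2.0, 3                    # each stage takes U(a, b) time
samples = rng.uniform(a, b, size=(100_000, n_stages)).sum(axis=1)

mu = n_stages * (a + b) / 2                     # exact mean of the sum
sigma = np.sqrt(n_stages * (b - a) ** 2 / 12)   # exact std of the sum
normal = stats.norm(mu, sigma)

# Probability mass the normal fit puts outside the feasible interval [n*a, n*b]
outside = normal.cdf(n_stages * a) + normal.sf(n_stages * b)
print(f"normal mass outside [{n_stages*a}, {n_stages*b}]: {outside:.4f}")
print("empirical 99th percentile:", np.quantile(samples, 0.99).round(3),
      " normal 99th percentile:", normal.ppf(0.99).round(3))
```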


Author(s):  
M.Yu. Babich ◽  
M.M. Butaev ◽  
A.A. Tarasov ◽  
A.I. Ivanov ◽  
...  

The normal distribution of a random variable is usually used in studies of the probabilistic properties of information systems. However, using the normal distribution to approximate distributions defined on a bounded interval distorts the physical meaning of the model and the numerical results, so it can only be used as an initial approximation. The purpose of the work is to improve methods for calculating the probabilistic properties of infocommunication systems. The object of study is an analytical method for calculating the request processing time in the system; the subject is the formulas for calculating the duration of sequential processing of a request by elements of the system with uniformly distributed independent random processing times. For positive random variables, it is proposed to use finite-interval distribution laws, for example the beta distribution. Formulas for the density and distribution functions of the sums of two, three, and four independent uniformly distributed random variables are given.
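As a sketch of the finite-interval modelling proposed here (the interval and the synthetic data below are illustrative assumptions, not taken from the paper), a beta distribution rescaled to the feasible interval keeps all probability mass where the processing time can actually lie:

```python
# Sketch: model a bounded processing time with a beta distribution on [lo, hi]
# instead of a normal; scipy's fit handles the location and scale for us.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
lo, hi = 0.5, 4.0                               # feasible processing-time interval (illustrative)
data = rng.triangular(lo, 1.5, hi, size=5_000)  # stand-in for measured processing times

a, b, loc, scale = stats.beta.fit(data, floc=lo, fscale=hi - lo)
fitted = stats.beta(a, b, loc=loc, scale=scale)

print(f"fitted beta shape parameters: a={a:.2f}, b={b:.2f}")
print("mass outside [lo, hi] under the beta fit:", fitted.cdf(lo) + fitted.sf(hi))  # exactly 0
```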


Author(s):  
Cepi Ramdani ◽  
Indah Soesanti ◽  
Sunu Wibirama

The Fuzzy C-Means (FCM) algorithm is one of many clustering algorithms with good accuracy for segmentation problems, and it is applied in almost every aspect of life and in many disciplines of science. However, the algorithm has some shortcomings, one of which is its large processing-time consumption. This research was conducted mainly to analyze the effect of the segmentation parameters on processing time in sequential and parallel execution; the other goal was to reduce the processing time of segmentation using a parallel approach. Parallel processing was performed on an Nvidia GeForce GT540M GPU using the CUDA v8.0 framework. The experiments were conducted on natural RGB color images sized 256x256 and 512x512. The segmentation parameter values were set as follows: weight in the range 2-3, number of iterations 50-150, number of clusters 2-8, and error tolerance (epsilon) from 0.1 to 1e-06. The results are as follows: parallel processing is 4.5 times faster than sequential processing, with a 100% similarity level between the segmentations produced by the two processing types. The influence of the segmentation parameter values on sequential and parallel processing times can be summarized as follows: the larger the weight parameter, the shorter the sequential processing time, while it has no effect on the parallel processing time; the larger the iteration and cluster parameters, the longer both the sequential and parallel processing times; and the epsilon parameter has no effect, or an unpredictable tendency, on either processing time.
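For reference, a compact NumPy sketch of the standard fuzzy C-means iteration that such a study parallelises; this is the textbook algorithm with the weight m, cluster count c, iteration cap, and epsilon tolerance as explicit parameters, not the authors' CUDA implementation:

```python
# Sketch of fuzzy C-means: membership and centroid updates with the fuzziness
# weight m, cluster count c, iteration cap, and epsilon tolerance that the
# paper varies. Textbook algorithm, not the authors' CUDA kernel.
import numpy as np

def fcm(X, c=4, m=2.0, max_iter=100, eps=1e-5, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)                    # random initial memberships
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]   # weighted centroids
        dist = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U_new = 1.0 / (dist ** (2 / (m - 1)))            # inverse-distance memberships
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < eps:                # epsilon stopping rule
            return centers, U_new
        U = U_new
    return centers, U

# e.g. to segment pixel colours: X = image.reshape(-1, 3).astype(float)
```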


2005 ◽  
Vol 2005 (5) ◽  
pp. 717-728 ◽  
Author(s):  
K. Neammanee

Let X_1, X_2, …, X_n be independent Bernoulli random variables with P(X_j = 1) = 1 − P(X_j = 0) = p_j and let S_n := X_1 + X_2 + ⋯ + X_n. S_n is called a Poisson binomial random variable and it is well known that the distribution of a Poisson binomial random variable can be approximated by the standard normal distribution. In this paper, we use Taylor's formula to improve the approximation by adding some correction terms. Our result is better than before and is of order 1/n in the case p_1 = p_2 = ⋯ = p_n.
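A small sketch of the baseline being refined (the paper's Taylor correction terms are not reproduced; the probabilities below are randomly generated for illustration): exact Poisson binomial probabilities via convolution against the plain normal approximation.

```python
# Sketch: exact Poisson binomial PMF by convolution vs. the plain normal
# approximation with continuity correction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
p = rng.uniform(0.05, 0.6, size=40)           # heterogeneous success probabilities p_j

pmf = np.array([1.0])                         # PMF of S_0
for pj in p:                                  # convolve in one Bernoulli at a time
    pmf = np.convolve(pmf, [1 - pj, pj])

mu, sigma = p.sum(), np.sqrt((p * (1 - p)).sum())
k = np.arange(len(pmf))
approx = stats.norm.cdf((k + 0.5 - mu) / sigma) - stats.norm.cdf((k - 0.5 - mu) / sigma)
print("max |exact - normal approx| over k:", np.abs(pmf - approx).max())
```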


2007 ◽  
Vol 21 (4) ◽  
pp. 579-595 ◽  
Author(s):  
Michael Pinedo

Consider a single machine that can process multiple jobs in batch mode. We have n jobs and the processing time of job j is a random variable X_j with distribution F_j. Up to b jobs can be processed simultaneously by the machine. The jobs in a batch all have to start at the same time and the batch is completed when all jobs have finished their processing (i.e., at the maximum of the processing times of the jobs in that batch). We are interested in two objective functions, namely the minimization of the expected makespan and the minimization of the total expected completion time. We first show that under certain fairly general conditions, the minimization of the expected makespan is equivalent to specific deterministic combinatorial problems, namely the Weighted Matching problem and the Set Partitioning problem. We then consider the case when all jobs have the same mean processing time but different variances. We show that for certain special classes of processing time distributions the Smallest Variance First rule minimizes the expected makespan as well as the total expected completion time. In our conclusions we present various general rules that are suitable for the minimization of the expected makespan and the total expected completion time in batch scheduling.
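A simulation sketch of the Smallest Variance First idea under the stated setting of equal means and different variances (the uniform widths, batch size, and comparison ordering are illustrative assumptions, not the paper's proof): the machine processes batches one after another, and each batch finishes at the maximum of its jobs.

```python
# Sketch: Smallest Variance First batching on a single batch machine.
# Jobs share the same mean but differ in variance (uniform half-widths are illustrative).
import numpy as np

rng = np.random.default_rng(3)
mean, widths = 10.0, np.array([1, 2, 4, 6, 8, 9])    # half-widths give different variances
b = 2                                                # batch capacity

def expected_makespan(order, reps=200_000):
    total = 0.0
    for start in range(0, len(order), b):            # batch = b consecutive jobs in the order
        idx = order[start:start + b]
        samples = rng.uniform(mean - widths[idx], mean + widths[idx], size=(reps, len(idx)))
        total += samples.max(axis=1).mean()          # a batch finishes at the max of its jobs
    return total

svf = np.argsort(widths)                             # Smallest Variance First grouping
mixed = np.array([0, 5, 1, 4, 2, 3])                 # pairs smallest with largest variance
print("SVF   estimated expected makespan ~", round(expected_makespan(svf), 3))
print("mixed estimated expected makespan ~", round(expected_makespan(mixed), 3))
```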


1985 ◽  
Vol 22 (1) ◽  
pp. 240-246 ◽  
Author(s):  
E. Frostig ◽  
I. Adiri

This paper deals with special cases of stochastic no-wait flowshop scheduling. n jobs have to be processed by m machines. The processing time of job J_i on machine M_j is an independent random variable T_i, and at time 0 the realizations of the random variables T_i are known. For m (m ≥ 2) machines it is proved that a special SEPT–LEPT sequence minimizes the expected schedule length; for two (m = 2) machines it is proved that the SEPT sequence minimizes the expected sum of completion times.
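An illustrative sketch (not from the paper) of evaluating a sequence in this model: with machine-independent realized times, completion times in a no-wait permutation flowshop follow from the standard minimum-delay recursion, and sorting the realized times in ascending order gives the SEPT sequence.

```python
# Sketch: completion times in a no-wait permutation flowshop, then SEPT vs LEPT
# on two machines with machine-independent realized processing times.
import numpy as np

def nowait_completion_times(P, order):
    """P[i, j] = processing time of job i on machine j; `order` is a job sequence."""
    n, m = P.shape
    def delta(i, j):
        # minimum gap between the machine-1 start times of consecutive jobs i -> j
        return max(P[i, :r].sum() - P[j, :r - 1].sum() for r in range(1, m + 1))
    start, prev, completions = 0.0, None, []
    for job in order:
        if prev is not None:
            start += delta(prev, job)
        completions.append(start + P[job].sum())     # job flows through all machines without waiting
        prev = job
    return np.array(completions)

rng = np.random.default_rng(0)
T = rng.uniform(1.0, 5.0, size=6)          # realized times, equal on both machines
P = np.tile(T[:, None], (1, 2))            # two machines (m = 2)
sept = np.argsort(T)                       # shortest realized processing time first
print("SEPT sum of completion times:", nowait_completion_times(P, sept).sum().round(3))
print("LEPT sum of completion times:", nowait_completion_times(P, sept[::-1]).sum().round(3))
```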


1985 ◽  
Vol 22 (3) ◽  
pp. 739-744 ◽  
Author(s):  
Michael Pinedo ◽  
Zvi Schechner

Consider n jobs and m machines. The m machines are identical and set up in parallel. All n jobs are available at t = 0 and each job has to be processed on one of the machines; any one can do. The processing time of job j is Xj, a random variable with distribution Fj. The sequence in which the jobs start with their processing is predetermined and preemptions are not allowed. We investigate the effect of the variability of the processing times on the expected makespan and the expected time to first idleness. Bounds are presented for these quantities in case the distributions of the processing times of the jobs are new better (worse) than used.
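A small simulation sketch of the model (the two processing-time distributions below are illustrative and are not the new-better-than-used or new-worse-than-used classes used for the bounds): jobs are taken in a predetermined sequence and each starts on the earliest available of m identical machines, so the effect of processing-time variability on the expected makespan can be estimated directly.

```python
# Sketch: fixed-sequence list scheduling on m identical parallel machines;
# estimate the expected makespan for low- vs high-variability processing
# times that share the same mean.
import heapq
import numpy as np

rng = np.random.default_rng(11)

def makespan(times, m=3):
    free = [0.0] * m                       # machine free times
    heapq.heapify(free)
    for t in times:                        # next job in the predetermined sequence
        start = heapq.heappop(free)        # goes on the earliest available machine
        heapq.heappush(free, start + t)
    return max(free)

n, reps = 12, 20_000
low = rng.uniform(0.9, 1.1, size=(reps, n))     # small variance, mean 1
high = rng.exponential(1.0, size=(reps, n))     # large variance, mean 1
print("E[makespan], low variability :", np.mean([makespan(row) for row in low]).round(3))
print("E[makespan], high variability:", np.mean([makespan(row) for row in high]).round(3))
```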


2021 ◽  
pp. 111-122
Author(s):  
Степан Алексеевич Рогонов ◽  
Илья Сергеевич Солдатенко

The analysis of the behavior of random variables after various transformations can be applied to the practical solution of many non-trivial problems. In particular, solutions that cannot be expressed purely analytically are, from the point of view of practical applicability, able to give results with accuracy sufficient for real calculations, pushing the inexpressible discrepancy of the analytical solution far beyond the required error. In this paper, the behavior of the modulus of a normally distributed random variable is investigated, and it is determined under what conditions the operation of taking the absolute value can be neglected and the modulus of the random variable approximated by a similar probability distribution.
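A quick numerical illustration of the condition at stake (the ratios below are illustrative, not the paper's criterion): the absolute value operation only matters through the mass the normal distribution places below zero, P(X < 0) = Φ(−μ/σ), which becomes negligible once μ exceeds σ by a few multiples.

```python
# Sketch: how much probability mass the absolute value actually folds over,
# as a function of mu/sigma.
from scipy import stats

for ratio in (1, 2, 3, 4, 5):
    p_neg = stats.norm.cdf(-ratio)         # P(X < 0) for X ~ N(mu, sigma^2) with mu/sigma = ratio
    print(f"mu/sigma = {ratio}:  P(X < 0) = {p_neg:.2e}")
```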


Psychology ◽  
2021 ◽  
Author(s):  
Zhiyong Zhang ◽  
Wen Qu

In statistics, kurtosis is a measure of the probability distribution of a random variable or a vector of random variables. As mean measures the centrality and variance measures the spreadness of a probability distribution, kurtosis measures the tailedness of the distribution. Kurtosis for a univariate distribution was first introduced by Karl Pearson in 1905. Kurtosis, together with skewness, is widely used to quantify the non-normality—the deviation from a normal distribution—of a distribution. In psychology, kurtosis has often been studied in the field of quantitative psychology to evaluate its effects on psychometric models.
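A brief example of the two common conventions (the samples are synthetic): Pearson's kurtosis of a normal distribution equals 3, while excess (Fisher) kurtosis subtracts 3 so that the normal reference value is 0.

```python
# Sketch: Pearson vs. excess kurtosis for a normal and a heavy-tailed sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
normal_sample = rng.normal(size=100_000)
heavy_sample = rng.standard_t(df=5, size=100_000)     # heavier tails than normal

for name, x in [("normal", normal_sample), ("t(5)", heavy_sample)]:
    print(name,
          " Pearson kurtosis:", round(stats.kurtosis(x, fisher=False), 2),
          " excess kurtosis:", round(stats.kurtosis(x, fisher=True), 2))
```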

