Scheduling and stability aspects of a general class of parallel processing systems

1993, Vol 25 (1), pp. 176-202
Author(s): Nicholas Bambos, Jean Walrand

In this paper we study the following general class of concurrent processing systems. There are several different classes of processors (servers) and many identical processors within each class. There is also a continuous random flow of jobs arriving at the system for processing. Each job needs to engage several processors from various classes concurrently in order to be processed. After acquiring the needed processors the job begins to be executed. Processing is non-preemptive, lasts for a random amount of time, and then all the processors are released simultaneously. Each job is specified by its arrival time, its processing time, and the list of processors that it needs to access simultaneously. The random flow (sequence) of jobs has a stationary and ergodic structure. There are several possible policies for scheduling the jobs on the processors for execution; it is up to the system designer to choose the scheduling policy to achieve certain objectives. We focus on the effect that the choice of scheduling policy has on the asymptotic behavior of the system at large times, and especially on its stability, under general stationary and ergodic input flows.
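
The model above translates directly into a small discrete-event simulation. The sketch below is not from the paper: the policy shown (strict first-come first-served) and all names such as `simulate_fcfs` are illustrative assumptions. Its demo makes the central point concrete: under FCFS, a job that only needs an idle class-B processor can be blocked behind a head-of-line job waiting for class-A processors, so the choice of policy shapes throughput and stability.

```python
import heapq
import itertools
from dataclasses import dataclass

@dataclass
class Job:
    arrival: float   # arrival time
    duration: float  # non-preemptive processing time
    needs: dict      # processor class -> servers held simultaneously

def simulate_fcfs(jobs, capacity):
    """Discrete-event simulation of the concurrent processing model under
    strict FCFS: jobs start in arrival order, and a job starts only when
    every processor it needs is free (assumes needs fit within capacity)."""
    jobs = sorted(jobs, key=lambda j: j.arrival)
    free = dict(capacity)                    # free servers in each class
    waiting, running, starts = [], [], []    # running: (finish, seq, job)
    seq = itertools.count()                  # tie-breaker for the heap
    t, i = 0.0, 0
    while i < len(jobs) or waiting or running:
        next_arrival = jobs[i].arrival if i < len(jobs) else float("inf")
        next_finish = running[0][0] if running else float("inf")
        t = min(next_arrival, next_finish)   # jump to the next event
        while i < len(jobs) and jobs[i].arrival <= t:
            waiting.append(jobs[i]); i += 1
        while running and running[0][0] <= t:
            _, _, j = heapq.heappop(running)
            for c, k in j.needs.items():     # all servers released at once
                free[c] += k
        # start head-of-line jobs whose full processor set is free
        while waiting and all(free[c] >= k for c, k in waiting[0].needs.items()):
            j = waiting.pop(0)
            for c, k in j.needs.items():
                free[c] -= k
            heapq.heappush(running, (t + j.duration, next(seq), j))
            starts.append((j.arrival, t))    # (arrival time, start time)
    return starts

# Two processor classes with two identical servers each.
demo = [Job(0.0, 5.0, {"A": 1, "B": 1}),
        Job(1.0, 2.0, {"A": 2}),   # must wait for both A servers
        Job(1.5, 1.0, {"B": 1})]   # blocked behind the A job under FCFS
print(simulate_fcfs(demo, {"A": 2, "B": 2}))
```

In the demo the third job waits until t = 5 even though a class-B server is idle the whole time; a policy that allowed overtaking would start it immediately.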


Author(s): Ravi Mahadevan, Neelamegam Anbazhagan

Nowadays, enterprises and individuals are moving their workloads to cloud service providers (CSPs), and this trend grows daily. A large number of CSPs offer virtualized, dynamic resources on a pay-per-use basis. However, most CSPs fail to maintain quality of service (QoS) together with minimal resource usage. Some existing approaches focus heavily on the scheduling policy but do not consider reliable service with optimized QoS. To address this problem, the framework proposes an Enhanced Minimal Resource Optimization based Scheduling Algorithm to minimize resource usage while maintaining QoS. The method avoids delay in the request-response model of the cloud environment. To avoid overload during resource allocation, the proposed design uses an optimized scheduling policy, and an optimized service-brokering policy reduces response delay. The framework also helps cloud users select the best CSP based on its prior services. The method uses a resource-based structure to reduce placement churn substantially, and an efficient scheduling policy to route data requests to the CSP with minimal data processing time. The overall aim is to improve the QoS of cloud service providers across multi-dimensional resources. Experimental evaluations show that, for the given input parameters, the proposed technique improves computation processing time (CPT) by 301.72 milliseconds, bandwidth utilization (BU) by 20 Mbps, CPU utilization (CPUU) by 5%, and memory resource utilization (MRU) by 3% compared with the existing methodology.
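
The abstract does not spell out the broker's mechanics, so the following is only a hedged reading of "route data requests to the CSP with minimal data processing time": a greedy broker that sends each request to the provider with the smallest estimated completion time. Every name and the estimation formula here are assumptions for illustration, not the paper's algorithm.

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    capacity: float        # requests the CSP can serve per second
    pending: float = 0.0   # work (in service-seconds) already queued here

    def estimate(self, size: float) -> float:
        """Estimated processing time: queued work plus this request's
        demand, scaled by capacity (a deliberately simple guess)."""
        return (self.pending + size) / self.capacity

def route(providers, size):
    """Greedy service brokering: pick the CSP with minimal estimated time."""
    best = min(providers, key=lambda p: p.estimate(size))
    best.pending += size
    return best

csps = [Provider("csp-a", 4.0), Provider("csp-b", 2.0)]
for req in [1.0, 1.0, 2.0, 0.5]:
    chosen = route(csps, req)
    print(req, "->", chosen.name, f"eta={chosen.estimate(0):.2f}s")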


Author(s): Cepi Ramdani, Indah Soesanti, Sunu Wibirama

The Fuzzy C-Means (FCM) algorithm is one of many clustering algorithms, with good accuracy on problems related to segmentation; it is applied in almost every aspect of life and in many disciplines of science. However, the algorithm has some shortcomings, one of which is its large processing time. This research mainly analyzes the effect of segmentation parameters on processing time in sequential and parallel implementations; the other goal is to reduce the processing time of the segmentation process using a parallel approach. Parallel processing was implemented on an Nvidia GeForce GT540M GPU using the CUDA v8.0 framework. The experiments used natural RGB color images sized 256x256 and 512x512. The segmentation parameters were set as follows: weight in the range 2-3, number of iterations 50-150, number of clusters 2-8, and error tolerance (epsilon) from 0.1 to 1e-06. The results are as follows: parallel processing is 4.5 times faster than sequential processing, and the segmentations produced by the two implementations are 100% identical. Regarding the influence of parameter values on processing time: the greater the weight parameter, the shorter the sequential processing time, while it has no effect on parallel processing time. For the iteration and cluster parameters, larger values increase both sequential and parallel processing times. The epsilon parameter has no effect, or an unpredictable tendency, on either processing time.
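
For readers unfamiliar with FCM, the sketch below implements the standard sequential update rules (textbook FCM, not the paper's CUDA kernels), with the same four parameters the experiments vary: fuzziness weight m, iteration count, cluster count, and epsilon.

```python
import numpy as np

def fcm(X, c=4, m=2.0, max_iter=150, eps=1e-6, seed=0):
    """Textbook Fuzzy C-Means on data X of shape (n_pixels, n_channels).
    m is the fuzziness weight, eps the error tolerance on memberships."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, X.shape[0]))
    U /= U.sum(axis=0)                          # memberships sum to 1
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2)
        d = np.fmax(d, 1e-12)                   # guard against zero distance
        U_new = d ** (-2.0 / (m - 1.0))         # u_ik proportional to d_ik^(-2/(m-1))
        U_new /= U_new.sum(axis=0)
        if np.abs(U_new - U).max() < eps:       # epsilon stopping rule
            U = U_new
            break
        U = U_new
    return U, centers

# Segment a synthetic 64x64 'RGB image' into 4 clusters.
img = np.random.default_rng(1).random((64, 64, 3))
U, _ = fcm(img.reshape(-1, 3))
labels = U.argmax(axis=0).reshape(64, 64)       # hard segmentation map
```

Each iteration is dominated by the distance computation over all pixel-cluster pairs, which is exactly the part that maps well onto a GPU grid, consistent with the speedups reported above.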


1999, Vol 22 (4), pp. 691-692
Author(s): Robert M. McPeek, Edward L. Keller, Ken Nakayama

We summarize several experiments indicating that the saccadic system is capable of simultaneously programming two movements toward different goals. This concurrent processing of saccades can lead to the execution of two saccades separated by an extremely short intersaccadic interval. This supports the idea of target competition proposed in Findlay & Walker's article, but suggests a greater degree of parallel processing. We provide evidence that concurrent processing of two saccades is not limited to higher-level planning subsystems; rather, it extends to regions close enough to the motor output that it can systematically affect saccade trajectory.


2004, Vol 14 (2), pp. 255-270
Author(s): Jemal H. Abawajy

Cluster computing has come to prominence as a cost-effective parallel processing tool for solving many complex computational problems. In this paper, we propose a new timesharing opportunistic scheduling policy to support remote batch job executions over networked clusters, to be used in conjunction with the Condor Up-Down scheduling algorithm. We show that timesharing approaches can be used in an opportunistic setting to improve both mean job slowdown and mean response time with little or no throughput reduction. We also show that the proposed algorithm achieves significant improvement in job response time and slowdown compared with existing approaches, including some recently proposed ones.
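
The claim that timesharing improves slowdown with little throughput loss can be illustrated on a single node. This is a toy comparison, not the paper's Condor-based algorithm; slowdown here means response time divided by service time, and all names are illustrative.

```python
from collections import deque

def fcfs(jobs):
    """jobs: list of (arrival, service). Per-job slowdown under
    run-to-completion FCFS."""
    t, slow = 0.0, []
    for a, s in sorted(jobs):
        t = max(t, a) + s
        slow.append((t - a) / s)
    return slow

def round_robin(jobs, quantum=1.0):
    """Same workload under timesharing: each ready job runs for at most
    `quantum` seconds in turn, so short jobs escape long ones."""
    jobs = sorted(jobs)
    t, i, ready, rem, slow = 0.0, 0, deque(), {}, []
    while i < len(jobs) or ready:
        if not ready:                             # idle: jump to next arrival
            t = max(t, jobs[i][0])
        while i < len(jobs) and jobs[i][0] <= t:
            rem[i] = jobs[i][1]; ready.append(i); i += 1
        j = ready.popleft()
        run = min(quantum, rem[j])
        t += run; rem[j] -= run
        while i < len(jobs) and jobs[i][0] <= t:  # arrivals during the slice
            rem[i] = jobs[i][1]; ready.append(i); i += 1
        if rem[j] > 0:
            ready.append(j)                       # unfinished: back of queue
        else:
            a, s = jobs[j]
            slow.append((t - a) / s)
    return slow

work = [(0.0, 10.0), (0.1, 1.0), (0.2, 1.0)]      # one long job, two short
for name, s in [("FCFS", fcfs(work)), ("RR", round_robin(work))]:
    print(name, "mean slowdown = %.2f" % (sum(s) / len(s)))
```

Both schedules finish all work at t = 12, so throughput is unchanged, but mean slowdown drops from about 7.9 to about 2.0 because the short jobs no longer queue behind the long one.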


Transforming large amounts of data takes substantial processing time, so optimization techniques are required. One such technique is multithreading. Multi-core processors, which support parallel processing, are now commonplace. Before the emergence of Web Workers, JavaScript was a poor language for parallel programming; Web Workers allow JavaScript to do a much better job of it. ForkJoinPool is a method that implements the divide-and-conquer strategy, which makes it well suited to multithreading. This data transformation library was created by implementing the ForkJoinPool method using Web Workers technology in JavaScript; the program is written in JavaScript and HTML. Based on the testing phase, the ForkJoinPool method can indeed be implemented with Web Workers in JavaScript as a data transformation library. In addition, the library's effect on transformation speed depends on the complexity of the transformation: the higher the complexity, the greater the benefit of using the library.
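
The library itself is JavaScript plus Web Workers; to keep this document's examples in a single language, the sketch below shows the same fork-join, divide-and-conquer pattern in Python, with a process pool standing in for the workers. The threshold value and function names are illustrative assumptions.

```python
from concurrent.futures import ProcessPoolExecutor

THRESHOLD = 10_000   # below this size, sequential transformation is cheaper

def transform(chunk):
    """Per-element 'data transformation'; stands in for whatever mapping
    the library applies. Top-level so it can be shipped to workers."""
    return [x * x + 1 for x in chunk]

def fork_join_transform(data, workers=4):
    """Fork: split the input into one chunk per worker once it exceeds
    THRESHOLD; conquer: transform chunks in parallel; join: concatenate."""
    if len(data) <= THRESHOLD:
        return transform(data)                  # sequential base case
    step = -(-len(data) // workers)             # ceiling division
    chunks = [data[i:i + step] for i in range(0, len(data), step)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        parts = list(pool.map(transform, chunks))   # fork + conquer
    return [y for part in parts for y in part]      # join

if __name__ == "__main__":                      # required for process pools
    out = fork_join_transform(list(range(100_000)))
    print(len(out), out[:3])
```

As in the findings above, the parallel path only pays off when the per-chunk work outweighs the cost of shipping data to the workers, which is why the base case falls back to a sequential transform.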


Author(s): Midriem Mirdanies

Multi-object recognition software for a Remote Controlled Weapon Station (RCWS) was implemented in a previous paper using the Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF) methods, but the processing time for one cycle was quite slow, so it needed to be optimized using parallel processing. In this paper, parallel processing is applied to the multi-object recognition software on a multicore processor, using the OpenMP Application Programming Interface (API), the C programming language, and the Visual Studio Integrated Development Environment (IDE). Parallelism was applied to the for loop of the matching process between objects captured from the camera and the database, under two conditions: the original for-loop syntax and an optimized for-loop syntax. Experiments were run on a Core i7-4790 @ 3.60 GHz with 8 GB DDR3 memory and Windows 8.1, using two, four, six, and eight cores to recognize one, two, three, and four objects at once with the SIFT and SURF methods. The experiments show that parallel processing is faster than sequential processing, with the fastest times obtained after optimizing the loop syntax. With SIFT, recognizing one to four objects took 927.13 ms (8 cores), 1019.31 ms (6 cores), 1190.72 ms (8 cores), and 1283.05 ms (4 cores), against sequential times of 1067.35 ms, 1164.78 ms, 1352.93 ms, and 1497.35 ms. With SURF, recognizing one to four objects took 1157.13 ms (8 cores), 1517.83 ms (6 cores), 1572.14 ms (4 cores), and 1472.64 ms (6 cores), against sequential times of 5635.99 ms, 6268.47 ms, 3256.63 ms, and 3883.78 ms.
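
The paper's parallelization is a C for loop annotated with OpenMP; keeping this document's examples in one language, the sketch below reproduces the pattern in Python: the iterations of the matching loop are independent, so they can be divided among cores. The descriptor format and the names `match_score` and `recognize` are hypothetical stand-ins, not the paper's code.

```python
from multiprocessing import Pool

def match_score(args):
    """Toy stand-in for the SIFT/SURF descriptor-matching step: compare
    the captured object's descriptor with one database entry."""
    captured, entry = args
    return sum(min(a, b) for a, b in zip(captured, entry))

def recognize(captured, database, cores=4):
    """Parallel version of the matching loop: the iterations are
    independent, so they can be split across cores, which is what
    OpenMP's parallel-for does to the original C loop."""
    with Pool(processes=cores) as pool:
        scores = pool.map(match_score, [(captured, e) for e in database])
    best = max(range(len(scores)), key=scores.__getitem__)
    return best, scores[best]

if __name__ == "__main__":
    db = [[i, i + 1.0, i + 2.0] for i in range(8)]  # fake descriptor database
    print(recognize([5.0, 6.0, 7.0], db, cores=4))
```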


2012, Vol 13 (1), pp. 1250011
Author(s): Mathieu Faure, Gregory Roth

A successful method to describe the asymptotic behavior of various deterministic and stochastic processes such as asymptotically autonomous differential equations or stochastic approximation processes is to relate them to an appropriately chosen limit semiflow. Benaïm and Schreiber (2000) define a general class of such stochastic processes, which they call weak asymptotic pseudotrajectories and study their ergodic behavior. In particular, they prove that the weak* limit points of the empirical measures associated to such processes are almost surely invariant for the associated deterministic semiflow. Continuing a program started by Benaïm, Hofbauer and Sorin (2005), we generalize the ergodic results mentioned above to weak asymptotic pseudotrajectories relative to set-valued dynamical systems.
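
For context, recalled from Benaïm's framework rather than from the abstract itself: a process X is an asymptotic pseudotrajectory of a semiflow when it tracks the flow with vanishing error over any fixed horizon, and the "weak" notion studied here relaxes this to convergence in probability.

```latex
% Asymptotic pseudotrajectory: X shadows the semiflow over any horizon T.
\[
  \lim_{t \to \infty} \; \sup_{0 \le s \le T}
    d\bigl(X(t+s),\, \Phi_s(X(t))\bigr) = 0
  \qquad \text{for every } T > 0 .
\]
% Weak asymptotic pseudotrajectory: the same statement, in probability.
\[
  \lim_{t \to \infty} \;
    \mathbb{P}\Bigl\{ \sup_{0 \le s \le T}
    d\bigl(X(t+s),\, \Phi_s(X(t))\bigr) \ge \varepsilon \Bigr\} = 0
  \qquad \text{for every } T > 0,\ \varepsilon > 0 .
\]
```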


Perception, 1989, Vol 18 (2), pp. 191-200
Author(s): Ehud Zohary, Shaul Hochstein

Visual search for an element defined by the conjunction of its colour and orientation has previously been shown to be a serial processing task, since reaction times increase linearly with the number of distractor elements in the display. Evidence is presented that this serial search has parallel processing constituents. Processing time depended on the ratio of the numbers of the two distractor types used, suggesting that only one type was scanned. Which type was scanned also depended on the distractor ratio, indicating that this decision was made after stimulus presentation and was based on a parallel figure-ground separation of the stimulus elements. Furthermore, in accordance with this serial scanning model, processing speed (elements scanned per second) increased with the number of elements to be scanned. This increased efficiency suggests that clumps of elements were processed synchronously. Under the stimulation conditions used, clumps contained six to sixteen elements and each clump was processed in 50-150 ms.
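
The serial-scanning logic behind these claims follows the standard search model, recalled here for clarity; the symbols are the usual textbook ones, not the paper's notation. Reaction time grows linearly in the number N of scanned elements, so the slope gives the per-element scan time.

```latex
% Serial self-terminating search: on average half of the N elements are
% scanned before the target is found, and all N when it is absent.
\[
  \mathrm{RT}_{\text{present}} \approx \mathrm{RT}_0 + \beta \,\frac{N}{2},
  \qquad
  \mathrm{RT}_{\text{absent}} \approx \mathrm{RT}_0 + \beta \, N ,
\]
% so the scanning speed reported above is 1/beta elements per second,
% and only the scanned distractor type contributes to N.
```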

