On Klimov's model with two job classes and exponential processing times

1993 ◽  
Vol 30 (3) ◽  
pp. 716-724 ◽  
Author(s):  
Xiuli Chao

We consider the Klimov model for an open network of two types of jobs. Jobs of type i arrive at station i, have processing times that are exponentially distributed with parameter µi, and, when processed, either go on to station j with probability pij or depart the network with probability pi0. Costs are charged at a rate that depends on the number of jobs of the two types in the system. It is shown that, for arbitrary arrival processes, the policy that gives priority to the job type for which the rate of change of the cost function is greatest minimizes the expected cost rate at every time t. This result is stronger than Klimov's result in two ways: the arrival processes are arbitrary, and the minimization holds at each time t. However, the result holds only for two job types.
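As a rough illustration of the priority rule described in this abstract (not code from the paper), the Python sketch below computes, for each job type, the expected rate at which serving that type reduces the cost rate and serves the type with the largest index. The cost-rate function c(n1, n2), service rates mu, and routing probabilities p are hypothetical inputs chosen for the example.

```python
# Illustrative sketch of the priority rule (not code from the paper).
# Assumed inputs: a cost-rate function c(n1, n2), service rates mu[i],
# and routing probabilities p[i][j], where j = 0 means departure.

def priority_index(n, i, mu, p, c):
    """Expected rate at which serving a type-i job reduces the cost rate."""
    n1, n2 = n
    # state after the served type-i job leaves its queue
    after = {1: (n1 - 1, n2), 2: (n1, n2 - 1)}[i]
    # average the post-completion cost rate over the routing outcomes
    expected_after = p[i][0] * c(*after)
    for j in (1, 2):
        routed = (after[0] + (j == 1), after[1] + (j == 2))
        expected_after += p[i][j] * c(*routed)
    return mu[i] * (c(n1, n2) - expected_after)


if __name__ == "__main__":
    # hypothetical example: linear holding costs c(n1, n2) = 3*n1 + n2
    c = lambda n1, n2: 3 * n1 + n2
    mu = {1: 2.0, 2: 1.5}
    p = {1: {0: 0.7, 1: 0.0, 2: 0.3}, 2: {0: 1.0, 1: 0.0, 2: 0.0}}
    n = (4, 2)
    serve = max((i for i in (1, 2) if n[i - 1] > 0),
                key=lambda i: priority_index(n, i, mu, p, c))
    print("serve type", serve)
```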


1991 ◽  
Vol 23 (4) ◽  
pp. 909-924 ◽  
Author(s):  
Rhonda Righter ◽  
Susan H. Xu

We consider the problem of scheduling n jobs non-preemptively on m parallel, non-identical processors to minimize a weighted expected cost function of the job completion times, where the weights are associated with the jobs. The cost function is assumed to be increasing and concave but otherwise arbitrary. Processing times are IFR (increasing failure rate), with different distributions for different processors. Jobs may be processed on any processor and there are no precedence constraints. We show that the optimal policy orders the jobs in decreasing order of their weights and then uses the individually optimal policy for each job. In other words, processors are offered to jobs in order, and each job considers its own expected cost function for its completion time to decide whether to accept or reject a processor. Therefore, the optimal policy does not depend on the weights of the jobs except through their order. Special cases of our objective function are weighted expected flowtime, weighted discounted expected flowtime, and the weighted expected number of tardy jobs.
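A structural sketch of the policy described above, not the authors' code: processors are offered to jobs in decreasing order of weight, and each job applies its own accept/reject rule. The `accepts` callback is a stand-in for the individually optimal rule, whose derivation is the substance of the paper and is not reproduced here.

```python
# Structural sketch of the policy described above (not the authors' code).
# Jobs are offered a free processor in decreasing order of weight; each job
# applies its own accept/reject rule, which depends only on that job's
# expected completion-time cost, never on its weight.

from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class Job:
    name: str
    weight: float


def assign_free_processor(processor: str,
                          waiting: List[Job],
                          now: float,
                          accepts: Callable[[Job, str, float], bool]
                          ) -> Optional[Job]:
    """Offer a newly free processor to waiting jobs in weight order.

    Returns the job that accepted the processor, or None if every waiting
    job's individually optimal rule rejected it.
    """
    for job in sorted(waiting, key=lambda j: j.weight, reverse=True):
        if accepts(job, processor, now):
            waiting.remove(job)
            return job
    return None


if __name__ == "__main__":
    # toy accept rule that accepts anything; it stands in for the real rule
    jobs = [Job("j1", 3.0), Job("j2", 5.0), Job("j3", 1.0)]
    chosen = assign_free_processor("fast-cpu", jobs, now=0.0,
                                   accepts=lambda job, proc, t: True)
    print(chosen)  # Job(name='j2', weight=5.0): the highest weight goes first
```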


Processes ◽  
2021 ◽  
Vol 9 (11) ◽  
pp. 2000
Author(s):  
Jin-Hwan Lee ◽  
Woo-Jung Kim ◽  
Sang-Yong Jung

This paper proposes a robust optimization algorithm customized for the optimal design of electric machines. The proposed algorithm, termed “robust explorative particle swarm optimization” (RePSO), is a hybrid algorithm that affords high accuracy and a high search speed when determining robust optimal solutions. To ensure the robustness of the determined optimal solution, RePSO employs the rate of change of the cost function. When this rate is high, the cost function appears as a steep curve, indicating low robustness; in contrast, when the rate is low, the cost function takes the form of a gradual curve, indicating high robustness. For verification, the performance of the proposed algorithm was compared with those of the conventional methods of robust particle swarm optimization and explorative particle swarm optimization with a Gaussian basis test function. The target performance of the traction motor for the optimal design was derived using a simulation of vehicle driving performance. Based on the simulation results, the target performance of the traction motor requires a maximum torque and power of 294 Nm and 88 kW, respectively. The base model, an 8-pole 72-slot permanent magnet synchronous machine, was designed considering the target performance. Accordingly, an optimal design was realized using the proposed algorithm. The cost function for this optimal design was selected such that the torque ripple, total harmonic distortion of back-electromotive force, and cogging torque were minimized. Finally, experiments were performed on the manufactured optimal model. The robustness and effectiveness of the proposed algorithm were validated by comparing the analytical and experimental results.
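The robustness criterion described above, where a steep local cost surface signals low robustness, can be sketched generically as a sensitivity penalty added to the fitness used by a standard particle swarm update. This is only an illustrative sketch of the idea, not the authors' RePSO algorithm; the step size delta, penalty weight, PSO coefficients, and test function are assumed values.

```python
# Generic sketch of a robustness-aware particle swarm step, not the authors'
# RePSO: candidate fitness adds a penalty for the local rate of change of the
# cost function (estimated by central differences), so flat regions of the
# cost surface, i.e. robust designs, are preferred. All constants are assumed.

import numpy as np


def robust_fitness(cost, x, delta=1e-2, penalty=1.0):
    """cost(x) plus a penalty for the local sensitivity of the cost."""
    x = np.asarray(x, dtype=float)
    sens = np.array([
        abs(cost(x + delta * e) - cost(x - delta * e)) / (2 * delta)
        for e in np.eye(len(x))
    ])
    return cost(x) + penalty * sens.max()


def pso_step(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5,
             rng=np.random.default_rng(0)):
    """One standard PSO velocity/position update."""
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    return pos + vel, vel


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    cost = lambda x: float(np.sum((x - 1.0) ** 2))  # toy cost surface
    pos = rng.uniform(-3.0, 3.0, size=(20, 2))
    vel = np.zeros_like(pos)
    pbest, fit = pos.copy(), np.array([robust_fitness(cost, x) for x in pos])
    gbest = pbest[int(np.argmin(fit))]
    for _ in range(50):
        pos, vel = pso_step(pos, vel, pbest, gbest)
        f = np.array([robust_fitness(cost, x) for x in pos])
        better = f < fit
        pbest[better], fit[better] = pos[better], f[better]
        gbest = pbest[int(np.argmin(fit))]
    print("best design found:", gbest)
```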


1996 ◽  
Vol 33 (2) ◽  
pp. 557-572 ◽  
Author(s):  
Shey-Huei Sheu

This paper considers a modified block replacement policy with two variables and a general random minimal repair cost. Under such a policy, an operating system is preventively replaced by a new one at times kT (k = 1, 2, ···) independently of its failure history. If the system fails in [(k − 1)T, (k − 1)T + T0) it is either replaced by a new one or minimally repaired, and if it fails in [(k − 1)T + T0, kT) it is either minimally repaired or remains inactive until the next planned replacement. The choice between these two possible actions is based on a random mechanism that is age-dependent. The cost of the ith minimal repair of the system at age y depends on the random part C(y) and the deterministic part ci(y). The expected cost rate is obtained using results from renewal reward theory. The model with two variables is transformed into a model with one variable, and the optimal policy is discussed.
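For intuition about the renewal-reward calculation, the sketch below evaluates the expected cost rate in a much simpler special case than the policy above: every failure in a cycle of length T is minimally repaired at an age-dependent cost h(y), and the unit is replaced at cost c_R at the end of the cycle, giving the standard cost rate (c_R + ∫_0^T h(y) r(y) dy) / T with r the failure rate. The Weibull failure rate and the cost figures are hypothetical; the paper's policy with the second variable T0 and the random repair/replace choice is more general.

```python
# Numerical sketch of the renewal-reward cost rate in a much simpler special
# case than the policy above: every failure in (0, T) is minimally repaired at
# age-dependent cost h(y) and the unit is replaced at cost c_R at time T, so
# the expected cost rate is (c_R + integral_0^T h(y) r(y) dy) / T with r the
# failure (hazard) rate. All numbers below are hypothetical.

import numpy as np


def expected_cost_rate(T, c_R, h, hazard, n_grid=2000):
    """Trapezoid-rule evaluation of (c_R + int_0^T h(y) hazard(y) dy) / T."""
    y = np.linspace(0.0, T, n_grid)
    f = h(y) * hazard(y)
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(y))
    return (c_R + integral) / T


if __name__ == "__main__":
    hazard = lambda y: 2.0 * y          # Weibull shape 2, scale 1
    h = lambda y: 1.0 + 0.5 * y         # repair cost grows with age
    Ts = np.linspace(0.2, 3.0, 200)
    rates = [expected_cost_rate(T, c_R=5.0, h=h, hazard=hazard) for T in Ts]
    print("approximately optimal T:", Ts[int(np.argmin(rates))])
```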


1994 ◽  
Vol 31 (3) ◽  
pp. 788-796 ◽  
Author(s):  
Cheng-Shang Chang ◽  
Rhonda Righter

We consider preemptive scheduling on parallel machines where the number of available machines may be an arbitrary, possibly random, function of time. Processing times of jobs are from a family of DLR (decreasing likelihood ratio) distributions, and jobs may arrive at random agreeable times. We give a constructive coupling proof to show that LEPT (longest expected processing time first) stochastically minimizes the makespan and that it minimizes the expected cost when the cost function satisfies certain agreeability conditions.
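A minimal sketch of the LEPT rule itself (the paper's contribution is the coupling proof, not the rule): at each decision epoch, the machines currently available preemptively serve the jobs with the longest expected remaining processing times. The job names and times are made up for the example.

```python
# Minimal sketch of the LEPT rule itself (the paper's contribution is the
# coupling proof, not the rule): at each decision epoch, the machines that are
# currently available preemptively serve the jobs with the longest expected
# remaining processing times.

def lept_assignment(expected_remaining, machines_available):
    """Job ids to run right now under LEPT.

    expected_remaining -- dict {job_id: expected remaining processing time}
    machines_available -- number of machines available at this epoch (it may
                          vary over time, as in the model above)
    """
    ranked = sorted(expected_remaining, key=expected_remaining.get,
                    reverse=True)
    return ranked[:machines_available]


if __name__ == "__main__":
    jobs = {"a": 5.0, "b": 2.5, "c": 4.0, "d": 1.0}   # made-up values
    print(lept_assignment(jobs, machines_available=2))  # ['a', 'c']
```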


2017 ◽  
Vol 34 (6) ◽  
pp. 752-769 ◽  
Author(s):  
Alfonsus Julanto Endharta ◽  
Won Young Yun

Purpose: The purpose of this paper is to develop a preventive maintenance policy with continuous monitoring for circular consecutive-k-out-of-n: F systems. The policy is based on the system's critical condition, which is related to the number of working components in the minimal cut sets of the system. If at least one minimal cut set contains only one working component, the system is preventively maintained (PM) after a certain time interval and the failed components are replaced with new ones to prevent system failure. If the system fails before the preventive maintenance, it is correctively maintained (CM) immediately by replacing the failed components.
Design/methodology/approach: The expected cost rate of the proposed maintenance policy is derived as a mathematical function. The costs of PM, CM, and replacement per component are considered. The optimal maintenance parameter, the PM interval, is obtained by enumeration, and numerical studies are presented for various system and cost parameters. The performance of the proposed policy is evaluated by comparing its expected cost rate with those of the no-PM and age-PM policies. The percentage of cost increase from the no-PM and age-PM policies to the proposed PM policy is calculated; this value indicates how valuable continuous monitoring is in this policy.
Findings: The proposed policy outperforms the other policies. It is most suitable when the cost of CM is high and the cost of PM is low.
Research limitations/implications: The system consists of identical components whose failure times follow an exponential distribution. Continuous monitoring is assumed, meaning that the component states are known at all times. Three cost parameters are considered: the costs of PM, CM, and replacement per component.
Originality/value: This paper addresses a maintenance problem for circular consecutive-k-out-of-n: F systems, whereas many studies on this system type have focused on reliability estimation or system design. A previous study of this policy (Endharta and Yun, 2015) treated linear systems and used a simulation approach to estimate the expected cost rate, and Endharta et al. (2016) considered a similar method for a different system type, the linear consecutive-k-out-of-n: F system.
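The critical condition that triggers PM can be checked directly from the component states: for a circular consecutive-k-out-of-n: F system the minimal cut sets are the n windows of k consecutive components, so the trigger fires when some window has exactly one working component left. A minimal Python sketch, with a made-up state vector for illustration:

```python
# Sketch of the critical-condition check described above: the minimal cut
# sets of a circular consecutive-k-out-of-n: F system are the n windows of k
# consecutive components, so the PM trigger fires when some window has exactly
# one working component left (one more failure there fails the system).

def is_critical(states, k):
    """states -- list of 0/1 component states (1 = working), circular order."""
    n = len(states)
    for start in range(n):
        window = [states[(start + offset) % n] for offset in range(k)]
        if sum(window) == 1:          # exactly one working component left
            return True
    return False


def is_failed(states, k):
    """System failure: some window of k consecutive components all failed."""
    n = len(states)
    return any(
        all(states[(start + offset) % n] == 0 for offset in range(k))
        for start in range(n)
    )


if __name__ == "__main__":
    states = [1, 0, 0, 1, 1, 1]               # made-up component states
    print(is_critical(states, k=3))           # True: window (0,1,2) has one
    print(is_failed(states, k=3))             # False
```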


2021 ◽  
Vol 11 (2) ◽  
pp. 850
Author(s):  
Dokkyun Yi ◽  
Sangmin Ji ◽  
Jieun Park

Artificial intelligence (AI) is achieved by optimizing the cost function constructed from learning data. Adjusting the parameters of the cost function is the AI learning process (AI learning, for short). If AI learning is performed well, the value of the cost function reaches its global minimum. For learning to be complete, the parameters should stop changing once the cost function attains that global minimum. One useful optimization method is the momentum method; however, it has difficulty stopping the parameter updates when the cost function reaches the global minimum (the non-stop problem). The proposed method is based on the momentum method; to solve its non-stop problem, we incorporate the value of the cost function into the update. As learning proceeds, this mechanism reduces the amount of parameter change according to the value of the cost function. We verify the method through a proof of convergence and through numerical experiments against existing methods to confirm that learning works well.
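A sketch of the idea described in the abstract, not the authors' exact update rule: a momentum step whose size is damped by the current value of the cost function, assuming the cost is nonnegative with a global minimum near zero, so the parameters effectively stop moving as the minimum is approached. The damping factor c/(1+c), learning rate, and test function are assumptions for the illustration.

```python
# Sketch of the idea, not the authors' exact update rule: a momentum step
# damped by the current cost value, assuming a nonnegative cost with a global
# minimum near zero, so the parameters effectively stop moving as the minimum
# is approached instead of overshooting it (the "non-stop" problem).

import numpy as np


def cost_damped_momentum(grad, cost, x0, lr=0.1, beta=0.9, steps=500):
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(steps):
        c = cost(x)
        v = beta * v + lr * grad(x)
        x = x - (c / (1.0 + c)) * v   # damping factor shrinks as cost -> 0
        # plain momentum would instead use:  x = x - v
    return x


if __name__ == "__main__":
    cost = lambda x: float(np.sum(x ** 2))   # global minimum 0 at x = 0
    grad = lambda x: 2.0 * x
    print(cost_damped_momentum(grad, cost, x0=[3.0, -2.0]))
```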

