A Novel Heuristic to Minimize the Bottlenecks Presence in Batch Generation. Case of Study.

Author(s):  
Pedro Henoc Ireta-Sánchez ◽  
Elías Gabriel Carrum-Siller ◽  
David Salvador González-González ◽  
Ricardo Martínez-López

Abstract This paper presents a new heuristic method capable of minimizing the bottlenecks that arise when production batches have distinct makespans. The proposed heuristic groups the jobs into batches, where the job with the longest processing time in the batch determines the makespan. To test the heuristic, information was collected from a real paint process with two stations: one with a single cabin and the other with two parallel cabins. The job-processing capacity is limited by the cabin dimensions, and jobs have different sizes and processing times. The makespan obtained by the proposed heuristic was compared against the First-In-First-Out (FIFO) dispatching rule used in the case study. Additionally, ten random instances based on data taken from the real process were created in order to compare the new heuristic method against a Genetic Algorithm (GA) and Simulated Annealing (SA). The comparison with FIFO, GA, and SA showed that the proposed heuristic minimizes the bottleneck by creating batches with nearly equal makespans. Results indicated a bottleneck time reduction of 96% when the new heuristic method was compared to the FIFO rule, while against the Genetic Algorithm and Simulated Annealing the bottleneck reduction was around 89% in both cases.
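The abstract does not spell out the grouping rule, but the core idea — capacity-limited batches whose makespan is set by their longest job — can be sketched as follows. This is a hypothetical illustration, not the paper's actual heuristic; the longest-processing-time-first ordering is an assumption chosen so that jobs with similar times share a batch:

```python
# Hypothetical sketch: group jobs into capacity-limited batches so that each
# batch's makespan (its longest job) is as uniform as possible across batches.
def make_batches(jobs, capacity):
    """jobs: list of (size, processing_time); capacity: cabin size limit."""
    # Sort by processing time, longest first, so jobs with similar
    # times end up in the same batch and the batch makespan is tight.
    ordered = sorted(jobs, key=lambda j: j[1], reverse=True)
    batches, current, used = [], [], 0
    for size, ptime in ordered:
        if used + size > capacity:      # cabin full: close the batch
            batches.append(current)
            current, used = [], 0
        current.append((size, ptime))
        used += size
    if current:
        batches.append(current)
    return batches

def batch_makespan(batch):
    # The longest job in the batch determines its makespan.
    return max(ptime for _, ptime in batch)
```

With this ordering, a batch of two 9-minute jobs and a batch of two 5-minute jobs is preferred over two mixed batches, each of which would take the full 9 minutes.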

2013 ◽  
Vol 2013 ◽  
pp. 1-10
Author(s):  
Mohammad Bayat ◽  
Mehdi Heydari ◽  
Mohammad Mahdavi Mazdeh

The deterministic flowshop model is one of the most widely studied problems, whereas its stochastic equivalent has remained a challenge. Furthermore, the preemptive online stochastic flowshop problem has received much less attention, and most previous research has considered the nonpreemptive version. Moreover, little attention has been devoted to problems where a time penalty is incurred when preemption is allowed. This paper examines the preemptive stochastic online flowshop with the objective of minimizing the expected makespan. All jobs arrive over time, which means that the existence and the parameters of each job are unknown until its release date. The processing times of the jobs are stochastic, and the actual processing time is unknown until the job completes. A heuristic procedure for this problem is presented, which is applicable whenever the job processing times are characterized by their means and standard deviations. The performance of the proposed heuristic method is explored using numerical examples.
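The paper's heuristic is not described in the abstract; as a point of reference, a classical building block for two-machine flowshops is Johnson's rule, which can be applied to *expected* processing times when only means (and standard deviations) are known. The sketch below is this standard rule on means only, not the authors' method:

```python
# Hypothetical sketch: Johnson's-rule ordering applied to expected processing
# times of a two-machine flowshop (the paper's actual heuristic may differ).
def expected_johnson_order(jobs):
    """jobs: list of (mean_m1, mean_m2). Returns a sequence of job indices."""
    # Jobs faster (in expectation) on machine 1 go first, in increasing
    # order of mean_m1; the rest go last, in decreasing order of mean_m2.
    front = sorted((i for i, (m1, m2) in enumerate(jobs) if m1 <= m2),
                   key=lambda i: jobs[i][0])
    back = sorted((i for i, (m1, m2) in enumerate(jobs) if m1 > m2),
                  key=lambda i: jobs[i][1], reverse=True)
    return front + back
```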


2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Shang-Chia Liu

The two-stage assembly scheduling problem is widely used in industrial and service settings. This study focuses on the two-stage three-machine flow shop assembly problem with a controllable truncation parameter and a sum-of-processing-times-based learning effect, in which the job processing time is a function of the truncation parameter and of learning based on the sum of the processing times. However, the truncation function has received very little attention in two-stage flow shop assembly scheduling settings. Thus, this study explores a two-stage three-machine flow shop assembly problem with truncated learning to minimize the makespan criterion. To solve the proposed model, we derive several dominance rules, lemmas, and lower bounds applied in a branch-and-bound method. In addition, three simulated annealing algorithms are proposed for finding approximate solutions. For both small and large numbers of jobs, the SA algorithm outperformed the JS algorithm in this study. Experimental results of the proposed algorithms are presented for small and large job sizes, respectively.
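A common form of a truncated sum-of-processing-times learning effect makes a job's actual time shrink with the work already completed, but never below a truncation floor. The exact model and parameter names in the paper may differ; the function below, with learning index `a` and floor `beta`, is a hypothetical illustration of the general shape:

```python
# Hypothetical illustration of a truncated sum-of-processing-times learning
# effect: actual time decreases with work already done, truncated below by beta.
def actual_time(p, prior_sum, total, a=-0.3, beta=0.7):
    """p: normal processing time; prior_sum: normal time already processed;
    total: sum of all normal times; a: learning index (a < 0);
    beta: truncation floor in (0, 1)."""
    learning = (1 + prior_sum / total) ** a   # < 1 once any work is done
    return p * max(learning, beta)            # never below beta * p
```

The truncation keeps the model realistic: however much learning accumulates, a job still needs at least the fraction `beta` of its normal time.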


2013 ◽  
Vol 325-326 ◽  
pp. 88-93 ◽  
Author(s):  
You Jin Park ◽  
Ha Ran Hwang

This paper focuses on a scheduling problem in the photolithography process of semiconductor manufacturing. Photolithography equipment can be divided into three main parts: scanner, spinner, and developer. Generally, as in other processes, identical product types are processed together, since a certain amount of recipe-change time is required whenever the product type changes. In this research, we consider a multi-product production case with different processing times and flow recipes, and attempt to reduce the total processing time of the photolithography process. We show that the total processing time can be minimized by varying the input order of lots and wafers.


2010 ◽  
Vol 8 (2) ◽  
pp. 189-192 ◽  
Author(s):  
Crescentiana Dewi Poeloengasih ◽  
Hernawan Hernawan ◽  
M. Angwar

Generally, production of chitosan from crustacean shells consists of four steps: deproteinization, demineralization, decolorization, and deacetylation. Simplifying chitosan production by eliminating deproteinization and/or demineralization, or by reducing reaction time, would offer many advantages, e.g., reduced processing time and production cost due to lower chemical and power usage. The objectives of this research were to prepare chitosan under various processing times and to characterize the resulting chitin and chitosan. Chitin was prepared under various deproteinization times (0, 15, 30 min at 90 °C using 2 N NaOH) and demineralization times (0, 15, 30 min at ambient temperature using 2 N HCl). Chitin was then bleached using acetone/ethanol (1:1) for an hour. Deacetylation was achieved by treating the chitin at 120 °C for 5 h using 50% NaOH. Ash content, nitrogen content, and degree of deacetylation of the chitosan were evaluated. Demineralization and/or deproteinization times influenced the quality of the chitin. Chitin and chitosan prepared without demineralization had a white, chalky appearance, whereas the other chitosans were off-white in color. Ash and nitrogen contents of the chitosan products were 0.18 - 32.40% and 3.56 - 7.59%, respectively. Chitosan prepared under the various processing times, except chitosan without demineralization treatment, had a degree of deacetylation ≥ 70%. Keywords: chitosan, deproteinization, demineralization, deacetylation, processing times


Author(s):  
Yuri Marchetti Tavares ◽  
Nadia Nedjah ◽  
Luiza de Macedo Mourelle

Template matching is an important technique used in pattern recognition. The goal is to find a given pattern, from a prescribed model, in a frame sequence. To evaluate the similarity of two images, Pearson's Correlation Coefficient (PCC) is used. This coefficient is calculated at each image pixel, which is computationally very expensive. To improve the processing time, this paper proposes two implementations of template matching: one using Genetic Algorithms (GA) and the other using Particle Swarm Optimization (PSO), considering two different topologies. The results obtained by the proposed methodologies are compared to those obtained by exhaustive search at each pixel. The comparison indicates that PSO is up to 236x faster than the brute-force exhaustive search, while GA is only 44x faster on the same image. Also, the PSO-based methodology is 5x faster than the GA-based one.
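The similarity measure itself is standard: the PCC between a template and an equally sized image patch is the normalized dot product of the two mean-centered pixel vectors. A minimal NumPy sketch (function name is ours, not from the paper):

```python
import numpy as np

# Pearson's Correlation Coefficient between a template and an
# equally sized image patch, as used for template matching above.
def pcc(patch, template):
    a = patch.astype(float).ravel()
    b = template.astype(float).ravel()
    a -= a.mean()                      # mean-center both pixel vectors
    b -= b.mean()
    denom = np.sqrt((a @ a) * (b @ b))
    return float(a @ b / denom) if denom else 0.0
```

Exhaustive search evaluates this at every pixel offset; GA and PSO instead treat the offset as a candidate solution and evaluate the PCC only at positions the population visits, which is where the reported speedups come from.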


1990 ◽  
Vol 27 (4) ◽  
pp. 852-861 ◽  
Author(s):  
Susan H. Xu ◽  
Pitu B. Mirchandani ◽  
Srikanta P. R. Kumar ◽  
Richard R. Weber

A number of multi-priority jobs are to be processed on two heterogeneous processors. Of the jobs waiting in the buffer, jobs with the highest priority have the first option of being dispatched for processing when a processor becomes available. On each processor, the processing times of the jobs within each priority class are stochastic, but have known distributions with decreasing mean residual (remaining) processing times. Processors are heterogeneous in the sense that, for each priority class, one has a lesser average processing time than the other. It is shown that the non-preemptive scheduling strategy for each priority class to minimize its expected flowtime is of threshold type. For each class, the threshold values, which specify when the slower processor is utilized, may be readily computed. It is also shown that the social and the individual optimality coincide.
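The structure of a threshold-type policy can be made concrete with a small sketch. The decision rule below — always use the faster processor when it is free, and use the slower one only when enough jobs of the class are waiting — is a hypothetical simplification for a single priority class; the paper computes the actual threshold values from the processing-time distributions:

```python
# Hypothetical sketch of a threshold-type dispatching policy for one
# priority class on two heterogeneous processors.
def dispatch(queue_len, fast_free, slow_free, threshold):
    """Return 'fast', 'slow', or None (wait) for the next dispatch decision."""
    if queue_len == 0:
        return None
    if fast_free:
        return 'fast'                 # the faster processor is always used
    if slow_free and queue_len >= threshold:
        return 'slow'                 # slower one only under enough congestion
    return None                       # otherwise wait for the fast processor
```

The intuition: with a short queue, a job does better waiting for the fast processor than starting on the slow one; past the threshold, the expected wait exceeds the slow processor's handicap.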


Author(s):  
Rene Keller ◽  
Claudia M. Eckert ◽  
P. John Clarkson

Managing design freezes plays an important part in today's competitive markets, and successful freeze management can make the difference between delivering a product on time and on budget and a failed design project. On the one hand, design managers want to limit change propagation by freezing components or component interfaces, in order to avoid unwanted change propagation and to meet design lead-times. On the other hand, freezing the design of a component too early can constrain innovation, and having to unfreeze a previously frozen design can add further redesign costs. This paper introduces an algorithm for determining an optimal change freeze order (CFO) based on combined change risks and component redesign costs, which has been applied at an automotive company in the design of a diesel engine.
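The simplest version of the idea — ranking components by the product of change risk and redesign cost, freezing the least exposed components first — can be sketched as below. This is a hypothetical illustration only; the paper's CFO algorithm also accounts for change propagation between components, which a plain ranking ignores:

```python
# Hypothetical sketch: order components for freezing by combined
# change risk x redesign cost (lowest exposure frozen first).
def change_freeze_order(components):
    """components: dict name -> (change_risk, redesign_cost)."""
    return sorted(components, key=lambda c: components[c][0] * components[c][1])
```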


1975 ◽  
Vol 26 ◽  
pp. 395-407
Author(s):  
S. Henriksen

The first question to be answered, in seeking coordinate systems for geodynamics, is: what is geodynamics? The answer is, of course, that geodynamics is that part of geophysics which is concerned with movements of the Earth, as opposed to geostatics, which is the physics of the stationary Earth. But as far as we know, there is no stationary Earth – eppur si muove. So geodynamics is actually coextensive with geophysics, and coordinate systems suitable for the one should be suitable for the other. At the present time, there are not many coordinate systems, if any, that can be identified with a static Earth. Certainly the only coordinate of aeronomic (atmospheric) interest is the height, and this is usually expressed either as geodynamic height or as pressure. In oceanology, the most important coordinate is depth, and this, like heights in the atmosphere, is expressed as metric depth from mean sea level, as geodynamic depth, or as pressure. Only for the Earth do we find "static" systems in use, and even here there is a real question as to whether the systems are dynamic or static. So it would seem that our answer to the question of what kind of coordinate systems we are seeking must be that we are looking for the same systems as are used in geophysics, and these systems are dynamic in nature already – that is, their definition involves time.


Author(s):  
James C. Long

Over the years, many techniques and products have been developed to reduce the amount of time spent in a darkroom processing electron microscopy negatives and micrographs. One of the latest tools effective in this effort is the Mohr/Pro-8 film and RC paper processor. At the time of writing, a unit has recently been installed in the photographic facilities of the Electron Microscopy Center at Texas A&M University. It is being evaluated for use with TEM sheet film, SEM sheet film, 35mm roll film (B&W), and RC paper. Originally designed for use in the phototypesetting industry, this processor has only recently been introduced to the field of electron microscopy. The unit is a tabletop model, approximately 1.5 × 1.5 × 2.0 ft, and uses a roller-transport method of processing. It has an adjustable processing time of 2 to 6.5 minutes, dry-to-dry. The installed unit has an extended-processing switch, enabling processing times of 8 to 14 minutes to be selected.

