Task scheduling for transport and pick robots in logistics: a comparative study on constructive heuristics

2021 ◽  
Vol 1 (1) ◽  
Author(s):  
Hanfu Wang ◽  
Weidong Chen

We study the Transport and Pick Robots Task Scheduling (TPS) problem, in which two teams of specialized robots, transport robots and pick robots, collaborate to execute multi-station order fulfillment tasks in logistic environments. The objective is to plan a collective time-extended task schedule that minimizes makespan. However, for this recently formulated problem, it is still unclear how to obtain satisfactory results efficiently. In this research, we design several constructive heuristics to solve the problem based on the introduced sequence models. Theoretically, we provide time complexity analyses or feasibility guarantees for these heuristics; empirically, we evaluate makespan performance and computation time on a designed dataset. Computational results demonstrate that the coupled append heuristic performs best in most cases within reasonable computation time. Coupled heuristics outperform decoupled heuristics most prominently on instances with relatively few pick robots and large work zones. The law of diminishing marginal utility is also observed in overall system performance as the numbers of transport and pick robots vary.
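
As a rough illustration of the append-style scheduling idea, the following Python sketch greedily appends each task to the transport-pick robot pair that can finish it earliest and reports the resulting makespan. It is a simplified stand-in, not the paper's heuristics: single-stage tasks, constant durations, and the robot counts are all assumptions for illustration.

```python
# A minimal sketch of a coupled greedy "append"-style scheduler, assuming a
# simplified TPS setting: each task needs one transport robot and one pick
# robot, and both must be free before the task can start.
from itertools import product

def coupled_append_schedule(task_durations, n_transport, n_pick):
    """Greedily append each task to the transport/pick robot pair that
    finishes it earliest; return the resulting makespan."""
    transport_free = [0.0] * n_transport  # time each transport robot becomes free
    pick_free = [0.0] * n_pick            # time each pick robot becomes free
    for duration in task_durations:
        # Evaluate every transport-pick pairing ("coupled" choice).
        t, p = min(
            product(range(n_transport), range(n_pick)),
            key=lambda tp: max(transport_free[tp[0]], pick_free[tp[1]]) + duration,
        )
        finish = max(transport_free[t], pick_free[p]) + duration
        transport_free[t] = finish
        pick_free[p] = finish
    return max(transport_free + pick_free)

# Example: 6 tasks, 2 transport robots, 1 pick robot (all values illustrative).
print(coupled_append_schedule([3, 5, 2, 4, 6, 1], n_transport=2, n_pick=1))
```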

2014 ◽  
Vol 2014 ◽  
pp. 1-19 ◽  
Author(s):  
Byoung-Il Kim ◽  
Jin Hong

Cryptanalytic time memory tradeoff algorithms are tools for inverting one-way functions, and they are used in practice to recover passwords that restrict access to digital documents. This work provides an accurate complexity analysis of the perfect table fuzzy rainbow tradeoff algorithm. Based on the analysis results, we show that the lesser known fuzzy rainbow tradeoff performs better than the original rainbow tradeoff, which is widely believed to be the best tradeoff algorithm. The fuzzy rainbow tradeoff can attain higher online efficiency than the rainbow tradeoff and do so at a lower precomputation cost.
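
For readers unfamiliar with the underlying tradeoff, the sketch below builds and queries an ordinary rainbow table for a toy one-way function; the fuzzy rainbow variant analyzed in the paper adds distinguished-point subchains and is not reproduced here. The hash, space size, and chain parameters are toy assumptions.

```python
# Ordinary rainbow-table precomputation and online lookup for a toy one-way
# function, illustrating the time-memory tradeoff idea (not the fuzzy variant).
import hashlib

N = 10_000          # size of the toy search space
CHAIN_LEN = 50      # chain length t
NUM_CHAINS = 400    # number of chains m

def f(x):
    """Toy one-way function on {0, ..., N-1}."""
    return int.from_bytes(hashlib.sha256(x.to_bytes(4, "big")).digest()[:4], "big") % N

def reduce_fn(y, col):
    """Column-dependent reduction function, as in rainbow tables."""
    return (y + col) % N

def build_table():
    table = {}
    for start in range(NUM_CHAINS):
        x = start
        for col in range(CHAIN_LEN):
            x = reduce_fn(f(x), col)
        table[x] = start          # store (end point -> start point)
    return table

def invert(y, table):
    """Try to find x with f(x) == y using the precomputed table."""
    for start_col in range(CHAIN_LEN - 1, -1, -1):
        # Rebuild the chain tail from column start_col to the end point.
        x = reduce_fn(y, start_col)
        for col in range(start_col + 1, CHAIN_LEN):
            x = reduce_fn(f(x), col)
        if x in table:
            # Walk the candidate chain from its start up to the suspected column.
            cand = table[x]
            for col in range(start_col):
                cand = reduce_fn(f(cand), col)
            if f(cand) == y:
                return cand
    return None

table = build_table()
print(invert(f(1234), table))  # often recovers a preimage; may miss (imperfect coverage)
```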


Author(s):  
Victer Paul ◽  
Ganeshkumar C ◽  
Jayakumar L

Genetic algorithms (GAs) are a population-based meta-heuristic global optimization technique for dealing with complex problems that have very large search spaces. Population initialization is a crucial task in GAs because it plays a vital role in convergence speed, exploration of the problem search space, and the quality of the final solution. Although the importance of problem-specific population initialization in GAs is widely recognized, it is rarely addressed in the literature. In this article, different population seeding techniques for permutation-coded genetic algorithms, such as random, nearest neighbor (NN), gene bank (GB), sorted population (SP), and selective initialization (SI), along with three newly proposed ordered-distance-vector-based initialization techniques, are studied extensively. The ability of each population seeding technique is examined in terms of a set of performance criteria: computation time, convergence rate, error rate, average convergence, convergence diversity, nearest-neighbor ratio, average distinct solutions, and distribution of individuals. The traveling salesman problem (TSP), a well-known hard combinatorial problem, is chosen as the testbed, and experiments are performed on large benchmark instances from the standard TSPLIB. The scope of the experiments is limited to the initialization phase of the GA; this restricted scope helps assess the performance of the population seeding techniques in their intended phase alone. The experimental analyses are carried out using statistical tools to characterize the performance of each population seeding technique, and the best-performing techniques are identified based on the defined assessment criteria and the nature of the application.
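
A minimal sketch of two of the seeding techniques compared in the article, random and nearest neighbor (NN), applied to a small permutation-coded TSP instance; the city coordinates and population size are illustrative only.

```python
# Contrast random seeding with greedy nearest-neighbor (NN) seeding for a
# permutation-coded TSP GA; only the initialization phase is sketched.
import math
import random

cities = [(random.random(), random.random()) for _ in range(30)]

def dist(a, b):
    return math.hypot(cities[a][0] - cities[b][0], cities[a][1] - cities[b][1])

def random_individual():
    tour = list(range(len(cities)))
    random.shuffle(tour)
    return tour

def nn_individual(start):
    """Greedy nearest-neighbor tour starting from a given city."""
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist(tour[-1], c))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(tour):
    return sum(dist(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))

pop_size = 20
random_pop = [random_individual() for _ in range(pop_size)]
nn_pop = [nn_individual(start % len(cities)) for start in range(pop_size)]

# NN-seeded individuals typically start much closer to a good tour, which is
# one reason seeding affects convergence speed and error rate.
print(min(map(tour_length, random_pop)), min(map(tour_length, nn_pop)))
```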


2014 ◽  
Vol 2014 ◽  
pp. 1-14 ◽  
Author(s):  
Hui Lu ◽  
Zheng Zhu ◽  
Xiaoteng Wang ◽  
Lijuan Yin

The test task scheduling problem (TTSP) is a typical combinatorial optimization scheduling problem. This paper proposes a variable neighborhood MOEA/D (VNM) to solve the multiobjective TTSP. Two minimization objectives, the maximal completion time (makespan) and the mean workload, are considered together. To bring the obtained solutions closer to the true Pareto front, a variable neighborhood strategy is adopted, which keeps the crossover span reasonable. Additionally, because the search space of the TTSP is so large that many duplicate solutions and local optima exist, the Starting Mutation is applied to prevent solutions from becoming trapped in local optima. Using a Markov chain and its transition matrix, it is proved that the solutions obtained by VNM converge to the global optimum. Comparative experiments among VNM, MOEA/D, and CNSGA (chaotic nondominated sorting genetic algorithm) indicate that VNM outperforms MOEA/D and CNSGA in solving the TTSP, demonstrating that the proposed VNM is an efficient approach to the multiobjective TTSP.
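
For concreteness, the sketch below evaluates the two TTSP objectives named above, makespan and mean instrument workload, for one candidate assignment of tasks to instruments. It simplifies the TTSP (single-instrument tasks, no precedence constraints) and does not reproduce the VNM algorithm itself; the task times are illustrative.

```python
# Evaluate the two objectives of a simplified TTSP candidate solution.
def evaluate(assignment, task_times):
    """assignment[i] = instrument chosen for task i;
    task_times[i][k] = processing time of task i on instrument k."""
    n_instruments = len(task_times[0])
    load = [0.0] * n_instruments
    for task, instrument in enumerate(assignment):
        load[instrument] += task_times[task][instrument]
    makespan = max(load)                        # maximal completion time
    mean_workload = sum(load) / n_instruments   # mean instrument workload
    return makespan, mean_workload

# Illustrative instance: 5 tasks, 3 instruments with instrument-dependent times.
times = [[4, 6, 5], [2, 3, 4], [3, 2, 6], [5, 4, 3], [1, 2, 2]]
print(evaluate(assignment=[0, 1, 0, 2, 1], task_times=times))
```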


Author(s):  
Mojdeh Asadollahi Pajouh ◽  
Robert W. Bielenberg ◽  
John D. Reid ◽  
Jennifer D. Schmidt ◽  
Ronald K. Faller ◽  
...  

Portable concrete barriers (PCBs) are often used in applications in which limited deflection is desired during vehicle impacts, such as bridge decks and work zones. In an earlier study, a reduced-deflection stiffening system was configured for use with non-anchored, F-shape PCBs and was successfully crash tested under Manual for Assessing Safety Hardware (MASH) safety performance criteria. However, details and guidance for implementing this barrier system outside the length-of-need, including within transitions to other barrier systems, were not provided. The focus of this study was to develop a crashworthy transition design between the reduced-deflection, F-shape PCB system and free-standing, F-shape PCB segments using engineering analysis and LS-DYNA computer simulation. First, the continuous steel tubes in the reduced-deflection system were tapered down to the surface of the free-standing PCB segments to reduce the potential for vehicle snag. In addition, steel tube spacers were added at the base of the two joints upstream from the reduced-deflection system to increase the stiffness of adjacent free-standing PCBs. Simulations were performed to determine the critical impact points for use in a full-scale crash testing program. It was recommended that three full-scale crash tests be conducted, two with a 2270P pickup truck and one with an 1100C passenger car, to evaluate the proposed design at the recommended critical impact points.


2014 ◽  
Vol 1030-1032 ◽  
pp. 1671-1675
Author(s):  
Yue Qiu ◽  
Jing Feng Zang

This paper puts forward an improved genetic scheduling algorithm to improve the execution efficiency of task scheduling on heterogeneous multi-core processor systems and to exploit their performance fully. Task attribute values and high-value tasks are used to construct the initial population; for each individual, a sorting method is selected at random with 50% probability, yielding a high-quality initial population while preserving its diversity. The experimental results show that the improved algorithm outperforms both the traditional genetic algorithm and the HEFT algorithm, reducing task execution time.
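
The initialization idea can be sketched as follows: for each individual, a sorting method is picked with 50% probability (here, ordering tasks by a hypothetical per-task cost attribute, a stand-in for the paper's attribute and high-value information), otherwise a random order is used. The attribute values and population size are assumptions for illustration.

```python
# Mixed initialization for a task-ordering GA: 50% attribute-guided, 50% random.
import random

task_cost = [7.0, 3.5, 9.2, 1.1, 5.4, 6.3]   # hypothetical per-task attribute values

def initial_individual():
    tasks = list(range(len(task_cost)))
    if random.random() < 0.5:
        # Attribute-guided ordering: schedule high-cost tasks first.
        tasks.sort(key=lambda t: task_cost[t], reverse=True)
    else:
        # Random ordering keeps the population diverse.
        random.shuffle(tasks)
    return tasks

population = [initial_individual() for _ in range(10)]
print(population[:3])
```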


1966 ◽  
Vol 44 (24) ◽  
pp. 3031-3050 ◽  
Author(s):  
J. Pitha ◽  
R. Norman Jones

A comparison has been made of seven numerical methods of fitting infrared absorption band envelopes with analytical functions using nonlinear least squares approximations. Gauss and Cauchy (Lorentz) band shape functions are used, and also sum and product combinations of the two. The methods have been compared with respect to both the degree of convergence and the computation time needed to achieve an acceptable fit. The most effective method has matched the overlap envelope of a steroid spectrum containing 16 bands; this necessitated the optimization of 65 variables. More complex spectra can be dealt with by a "moving subspace" modification in which only the parameters of a group of adjacent bands are adjusted at one time. Automatic computer programs have been written for five of the methods, and for the moving subspace modification. These will be published elsewhere. If the computed curve is convoluted with the spectral slit function before making the least squares calculations, the distortion of the observed spectrum caused by the finite spectral slit width can be corrected. In some cases this method of diminishing the slit distortion is better than direct methods, particularly when dealing with strongly overlapped bands.
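
A minimal sketch of this kind of band-envelope fitting, using SciPy's nonlinear least squares in place of the paper's seven methods: a synthetic envelope built from one Gauss and one Cauchy (Lorentz) band is fitted back from noisy data. All band parameters and data are synthetic.

```python
# Fit a synthetic absorption envelope with a sum of Gauss and Lorentz bands.
import numpy as np
from scipy.optimize import least_squares

def gauss(x, height, center, width):
    return height * np.exp(-np.log(2) * ((x - center) / width) ** 2)

def lorentz(x, height, center, width):
    return height / (1.0 + ((x - center) / width) ** 2)

def envelope(params, x):
    # params: [h1, c1, w1, h2, c2, w2] -> one Gauss band plus one Lorentz band
    h1, c1, w1, h2, c2, w2 = params
    return gauss(x, h1, c1, w1) + lorentz(x, h2, c2, w2)

# Synthetic "observed" spectrum: two overlapping bands plus noise.
x = np.linspace(1000, 1100, 400)
true = envelope([1.0, 1040, 8.0, 0.6, 1065, 6.0], x)
observed = true + np.random.normal(0, 0.01, x.size)

result = least_squares(lambda p: envelope(p, x) - observed,
                       x0=[0.8, 1035, 10.0, 0.5, 1070, 5.0])
print(result.x)   # recovered heights, centers, and half-widths
```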


2012 ◽  
Vol 6-7 ◽  
pp. 717-721 ◽  
Author(s):  
Zhao Yang Zeng ◽  
Zhi Qiang Jiang ◽  
Qiang Chen ◽  
Pan Feng He

To accurately extract corners from images with high texture complexity, this paper analyzes traditional corner detection algorithms based on image gray values. Although the Harris corner detection algorithm has relatively high accuracy, it still suffers from several problems: it extracts false corners, loses corner information, and has a fairly long computation time. An improved algorithm combining the Harris and SUSAN corner detectors is therefore proposed: the new algorithm first uses Harris to detect candidate corners in the image and then uses SUSAN to eliminate the false ones. Comparative test results show that the new algorithm extracts corners effectively and outperforms the Harris algorithm in corner detection performance.
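
The combination can be sketched as follows: Harris proposes candidate corners, and a simple USAN-area test (the core idea of the SUSAN detector) rejects candidates that do not look corner-like. The thresholds, mask size, and input image path are illustrative assumptions.

```python
# Harris candidates filtered by a simple SUSAN-style USAN-area test.
import cv2
import numpy as np

def susan_is_corner(gray, y, x, radius=3, t=25, area_ratio=0.5):
    """Return True if the USAN area around (y, x) is small enough for a corner."""
    h, w = gray.shape
    y0, y1 = max(0, y - radius), min(h, y + radius + 1)
    x0, x1 = max(0, x - radius), min(w, x + radius + 1)
    patch = gray[y0:y1, x0:x1].astype(np.int32)
    usan = np.count_nonzero(np.abs(patch - int(gray[y, x])) < t)
    return usan < area_ratio * patch.size

gray = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)          # hypothetical input image
harris = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
candidates = np.argwhere(harris > 0.01 * harris.max())        # Harris candidate corners

# Keep only candidates that also pass the SUSAN-style test.
corners = [(y, x) for y, x in candidates if susan_is_corner(gray, y, x)]
print(len(candidates), "Harris candidates ->", len(corners), "after SUSAN filtering")
```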


2017 ◽  
Vol 2017 ◽  
pp. 1-8
Author(s):  
Marko Sonkki ◽  
Sami Myllymäki ◽  
Jussi Putaala ◽  
Eero Heikkinen ◽  
Tomi Haapala ◽  
...  

The paper presents a novel dual-polarized, dual-fed Vivaldi antenna structure for the 1.7–2.7 GHz cellular bands. The radiating element is designed for a base station antenna array with high antenna performance criteria. One radiating element contains two parallel dual-fed Vivaldi antennas for one polarization with 65 mm separation. Both Vivaldi antennas for one polarization are excited symmetrically, meaning that the amplitudes for both antennas are equal and the phase difference is zero. The orthogonal polarization is implemented in the same way. The dual-polarized, dual-fed Vivaldi is positioned 15 mm in front of the reflector to improve directivity. The antenna is designed for a -14 dB impedance bandwidth (1.7–2.7 GHz) with better than 25 dB isolation between the antenna ports. The measured total efficiency is better than -0.625 dB (87%), and the antenna presents a flat gain of approximately 8.5 dB at boresight over the operating bandwidth; these characteristics place it among the best-performing antennas in the field. Additionally, the measured cross-polarization discrimination (XPD) is between 15 and 30 dB, and the 3 dB beamwidth varies between 68° and 75° depending on the frequency.


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Redwan A. Al-dilami ◽  
Ammar T. Zahary ◽  
Adnan Z. Al-Saqqaf

Task scheduling issues in cloud computing centres are becoming more important, and cost is one of the most important parameters used for scheduling tasks. This study investigates the problem of online task scheduling of identified MapReduce jobs on cloud computing infrastructure. The virtualized cloud computing setup is assumed to comprise machines that host multiple identical virtual machines (VMs), which must be activated in advance and run continuously; booting a VM requires a constant setup time. A VM that remains running even though it is no longer used is considered an idle VM. Furthermore, this study aims to distribute the idle cost of the VMs, rather than their setup cost, among tasks in a fair manner. This study also extends previous studies that addressed the problems arising when distributing the idle cost and setup cost of VMs among tasks. It classifies the tasks into three groups (long, mid, and short) and distributes the idle cost among the groups and then among the tasks within each group. The main contribution of this paper is the development of a clairvoyant algorithm that addresses important factors such as the delay and the cost incurred while waiting for a VM to be set up (activated). Also, when VMs run continuously and some become idle, the idle cost is distributed among the current tasks in a fair manner. Compared with previous studies, the results show that the idle cost and setup cost distributed among tasks by the proposed approach are better than the corresponding distributions in those studies.
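
A minimal sketch of the grouping-and-distribution idea, assuming a proportional split rule (the paper's exact scheme may differ): tasks are classified as short, mid, or long, the idle cost is divided among the groups, and each group's share is then divided among its tasks.

```python
# Distribute an idle cost among task groups, then among tasks within each group.
def distribute_idle_cost(task_lengths, idle_cost, short_max=10, mid_max=60):
    groups = {"short": [], "mid": [], "long": []}
    for i, length in enumerate(task_lengths):
        if length <= short_max:
            groups["short"].append(i)
        elif length <= mid_max:
            groups["mid"].append(i)
        else:
            groups["long"].append(i)

    total_len = sum(task_lengths)
    per_task_cost = {}
    for members in groups.values():
        if not members:
            continue
        # Group share proportional to the group's total length (assumed rule).
        group_len = sum(task_lengths[i] for i in members)
        group_share = idle_cost * group_len / total_len
        for i in members:
            # Within the group, split the share proportionally to task length.
            per_task_cost[i] = group_share * task_lengths[i] / group_len
    return per_task_cost

# Illustrative task lengths and idle cost.
print(distribute_idle_cost([5, 8, 30, 45, 120, 200], idle_cost=90.0))
```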


With the rapid development of information and computing technologies, cloud computing has become a highly scalable and widely used computing technology based on pay-per-use, remote-access, Internet-based, and on-demand concepts, providing customers with a shared pool of configurable resources. With the high volume of incoming user requests, task scheduling and resource allocation become major requirements for efficient and effective load balancing of the workload among cloud resources and for enhancing overall cloud system performance. For these reasons, various types of task scheduling algorithms have been introduced, including traditional, heuristic, and meta-heuristic approaches. Heuristic task scheduling algorithms such as MET, MCT, Min-Min, and Max-Min play an important role in solving the task scheduling problem. This paper proposes a new hybrid algorithm for the cloud computing environment based on two heuristic algorithms, Min-Min and Max-Min. To evaluate the algorithm, the CloudSim simulator is used with several optimization parameters: makespan, average resource utilization, load balancing, average waiting time, and concurrent execution of short and long tasks. The results show that the proposed algorithm outperforms both Min-Min and Max-Min on these parameters.
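
For reference, the sketch below implements the two base heuristics the hybrid builds on. Both assign tasks to the VM giving the earliest completion time; Min-Min picks the task with the smallest such completion time at each step, Max-Min the largest. The execution-time matrix is illustrative, and the paper's hybrid combination rule is not reproduced.

```python
# Min-Min and Max-Min heuristic task scheduling on a set of VMs.
def schedule(exec_time, pick_largest):
    """exec_time[t][m] = time of task t on VM m. Returns (assignment, makespan)."""
    n_tasks, n_vms = len(exec_time), len(exec_time[0])
    ready = [0.0] * n_vms               # when each VM becomes free
    unassigned = set(range(n_tasks))
    assignment = {}
    while unassigned:
        # For every unassigned task, find its minimum completion time and best VM.
        best = {t: min((ready[m] + exec_time[t][m], m) for m in range(n_vms))
                for t in unassigned}
        # Min-Min picks the task with the smallest best completion time,
        # Max-Min the task with the largest.
        task = (max if pick_largest else min)(best, key=lambda t: best[t][0])
        finish, vm = best[task]
        assignment[task] = vm
        ready[vm] = finish
        unassigned.remove(task)
    return assignment, max(ready)

times = [[14, 16, 9], [13, 19, 18], [11, 13, 19], [7, 8, 17], [5, 9, 11]]
print("Min-Min:", schedule(times, pick_largest=False))
print("Max-Min:", schedule(times, pick_largest=True))
```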

