Integration of Fleet Assignment and Aircraft Routing

Author(s):  
Yihua Li ◽  
Xiubin Wang

Fleet assignment and aircraft routing are two sequential steps in airline capacity planning. The fleet assignment model assigns an aircraft type to each scheduled flight on the basis of aircraft availability, and the aircraft routing model generates a route for each individual aircraft to ensure that path-specific requirements for maintenance and connection times are satisfied. Although it is known that the sequential method cannot minimize the overall cost, no results have been reported on integrating the two steps. Here, the two steps are performed simultaneously. A path-based integrated model is presented and tested on real data, and a heuristic is proposed to solve the formulation. Numerical tests indicate that significant cost savings can be achieved and that the heuristic shows promise for solving large-scale, real-world problems.
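
To make the path-based formulation concrete, the sketch below gives a generic set-partitioning form; the notation (path sets P_f, route costs c_p, coverage coefficients a_{lp}, fleet counts N_f) is an illustrative assumption, not the authors' exact model.

```latex
% A minimal path-based set-partitioning sketch (illustrative notation,
% not the authors' exact formulation). P_f is the set of feasible,
% maintenance-respecting routes for fleet type f, c_p the cost of
% route p, a_{lp} = 1 if route p covers flight leg l, and N_f the
% number of available aircraft of type f.
\begin{align}
\min \quad & \sum_{f} \sum_{p \in P_f} c_p \, x_p \\
\text{s.t.} \quad & \sum_{f} \sum_{p \in P_f} a_{lp} \, x_p = 1
  && \forall \text{ flight legs } l \\
& \sum_{p \in P_f} x_p \le N_f
  && \forall \text{ fleet types } f \\
& x_p \in \{0, 1\} && \forall p
\end{align}
```

Working with whole routes rather than leg-level assignments is what allows maintenance and connection-time requirements to be encoded as path feasibility, at the price of an exponentially large variable set, which is why a heuristic is needed.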

2020 ◽  
Vol 20 (2) ◽  
pp. e07
Author(s):  
Luis Veas Castillo ◽  
Gabriel Ovando-Leon ◽  
Gabriel Astudillo ◽  
Veronica Gil-Costa ◽  
Mauricio Marín

Computational simulation is a powerful tool for the performance evaluation of computational systems. It is useful for capacity planning of data center clusters, for obtaining profiling reports of software applications, and for detecting bottlenecks. It has been used in research areas such as large-scale Web search engines, natural disaster evacuations, computational biology, and human behavior and trends, among many others. However, properly tuning the simulator parameters, defining the scenarios to be simulated, and collecting the data traces is not an easy task. It is an incremental process that requires constantly comparing the estimated metrics and the flow of simulated actions against real data. In this work, we present an experimental framework designed for the development of large-scale simulations of two applications used when a natural disaster strikes. The first is a social application aimed at registering volunteers and managing emergency campaigns and tasks. The second is a benchmark application for a data repository, namely MongoDB. The applications are deployed on a distributed platform that combines different technologies: a proxy, a container orchestrator, containers, and a NoSQL database. We simulate both the applications and the architecture platform, and we validate our simulators using real traces collected during emergency drills.
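
As a rough illustration of the kind of performance model such simulators embody (the component names and service rates below are hypothetical, not taken from the paper), requests can be pushed through a proxy, a container pool, and a MongoDB backend modeled as tandem FIFO queues:

```python
import random

# Hypothetical sketch of simulating request latency through a
# proxy -> container -> database pipeline (not the paper's framework).
# Each stage is a FIFO single-server queue driven by the Lindley
# recursion: wait[k+1] = max(0, wait[k] + service[k] - interarrival[k+1]).

random.seed(42)

N = 10_000                     # simulated requests
ARRIVAL_RATE = 80.0            # requests per second (assumed workload)
SERVICE_RATES = {"proxy": 500.0, "container": 120.0, "mongodb": 200.0}

# Arrival times at the first stage (Poisson arrivals).
arrivals, t = [], 0.0
for _ in range(N):
    t += random.expovariate(ARRIVAL_RATE)
    arrivals.append(t)

latencies = [0.0] * N
for stage, mu in SERVICE_RATES.items():
    wait, departures, prev_arrival = 0.0, [], None
    for i, a in enumerate(arrivals):
        if prev_arrival is not None:
            wait = max(0.0, wait + service - (a - prev_arrival))
        service = random.expovariate(mu)      # exponential service time
        latencies[i] += wait + service
        departures.append(a + wait + service)
        prev_arrival = a
    arrivals = departures                     # departures feed next stage

latencies.sort()
print(f"mean latency: {sum(latencies)/N*1000:.1f} ms, "
      f"p99: {latencies[int(0.99*N)]*1000:.1f} ms")
```

Comparing percentiles like these against traces collected during drills is the kind of incremental validation loop the abstract describes.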


2007 ◽  
Vol 19 (3) ◽  
pp. 416-428 ◽  
Author(s):  
Ravindra K. Ahuja ◽  
Jon Goodstein ◽  
Amit Mukherjee ◽  
James B. Orlin ◽  
Dushyant Sharma

Genetics ◽  
2003 ◽  
Vol 165 (4) ◽  
pp. 2269-2282
Author(s):  
D Mester ◽  
Y Ronin ◽  
D Minkov ◽  
E Nevo ◽  
A Korol

Abstract This article is devoted to the problem of ordering in linkage groups with many dozens or even hundreds of markers. The ordering problem belongs to the field of discrete optimization on a set of all possible orders, amounting to n!/2 for n loci; hence it is considered an NP-hard problem. Several authors have attempted to employ methods developed for the well-known traveling salesman problem (TSP) for multilocus ordering, using the assumption that for a set of linked loci the true order will be the one that minimizes the total length of the linkage group. A novel, fast, and reliable algorithm developed for the TSP and based on evolution-strategy discrete optimization was applied in this study for multilocus ordering on the basis of pairwise recombination frequencies. The quality of the derived maps under various complications (dominant vs. codominant markers, marker misclassification, negative and positive interference, and missing data) was analyzed using simulated data with ∼50–400 markers. The high performance of the employed algorithm allows systematic verification of the obtained multilocus orders using compute-intensive bootstrap and/or jackknife approaches, detecting and removing questionable marker scores and thereby stabilizing the resulting maps. Parallel computing can easily be adopted to further accelerate the proposed algorithm. A real-data analysis (of maize chromosome 1 with 230 markers) is provided to illustrate the proposed methodology.
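
As a toy illustration of the TSP analogy (a simplified sketch, not the authors' evolution-strategy algorithm), one can search for the marker order minimizing the summed adjacent recombination frequencies with a basic (1+1) strategy using segment-reversal mutations:

```python
import random

# Toy sketch of TSP-style multilocus ordering (not the authors' method):
# find an order of markers minimizing total adjacent pairwise recombination
# frequency, via a (1+1) evolution strategy with reversal mutations.

def map_length(order, rf):
    """Sum of recombination frequencies between adjacent markers."""
    return sum(rf[order[i]][order[i + 1]] for i in range(len(order) - 1))

def order_markers(rf, iterations=50_000, seed=0):
    rng = random.Random(seed)
    n = len(rf)
    best = list(range(n))
    rng.shuffle(best)
    best_len = map_length(best, rf)
    for _ in range(iterations):
        i, j = sorted(rng.sample(range(n), 2))
        cand = best[:i] + best[i:j + 1][::-1] + best[j + 1:]  # reversal
        cand_len = map_length(cand, rf)
        if cand_len <= best_len:          # accept non-worsening moves
            best, best_len = cand, cand_len
    return best, best_len

# Hypothetical usage with a random symmetric matrix of pairwise
# recombination frequencies for 50 markers:
if __name__ == "__main__":
    rng = random.Random(1)
    n = 50
    rf = [[0.0] * n for _ in range(n)]
    for a in range(n):
        for b in range(a + 1, n):
            rf[a][b] = rf[b][a] = rng.uniform(0.0, 0.5)
    order, total = order_markers(rf)
    print(f"best order length: {total:.3f}")
```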


Author(s):  
Andrew Jacobsen ◽  
Matthew Schlegel ◽  
Cameron Linke ◽  
Thomas Degris ◽  
Adam White ◽  
...  

This paper investigates different vector step-size adaptation approaches for non-stationary, online, continual prediction problems. Vanilla stochastic gradient descent can be considerably improved by scaling the update with a vector of appropriately chosen step-sizes. Many methods, including AdaGrad, RMSProp, and AMSGrad, keep statistics about the learning process to approximate a second-order update, that is, a vector approximation of the inverse Hessian. Another family of approaches uses meta-gradient descent to adapt the step-size parameters to minimize prediction error. These meta-descent strategies are promising for non-stationary problems but have not been explored as extensively as quasi-second-order methods. We first derive a general, incremental meta-descent algorithm, called AdaGain, designed to be applicable to a much broader range of algorithms, including those with semi-gradient updates or even accelerations, such as RMSProp. We provide an empirical comparison of methods from both families. We conclude that methods from both families can perform well, but that in non-stationary prediction problems the meta-descent methods exhibit advantages. Our method is particularly robust across several prediction problems and is competitive with the state-of-the-art method on a large-scale, time-series prediction problem on real data from a mobile robot.
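
To make the two families concrete, the sketch below contrasts a statistics-based vector step-size (an RMSProp-style update) with a meta-gradient method (Sutton's IDBD); this illustrates the families only and is not the AdaGain algorithm:

```python
import numpy as np

# Minimal sketch contrasting the two families of vector step-size
# methods on linear least-squares prediction (illustrative only;
# NOT the AdaGain update).

def rmsprop_step(w, v, x, y, alpha=1e-2, rho=0.99, eps=1e-8):
    """Statistics-based scaling: divide by a running RMS of the gradient."""
    err = y - w @ x
    g = -err * x                       # gradient of 0.5 * err**2
    v = rho * v + (1 - rho) * g * g    # running second-moment estimate
    w = w - alpha * g / (np.sqrt(v) + eps)
    return w, v

def idbd_step(w, log_alpha, h, x, y, meta=1e-2):
    """Meta-gradient descent on per-weight log step-sizes (IDBD)."""
    err = y - w @ x
    log_alpha = log_alpha + meta * err * x * h   # meta-gradient update
    alpha = np.exp(log_alpha)
    w = w + alpha * err * x
    h = h * np.maximum(0.0, 1 - alpha * x * x) + alpha * err * x
    return w, log_alpha, h
```

The meta-gradient update adapts each step-size from the prediction error signal itself, rather than from gradient magnitudes alone, which is the property that distinguishes the two families in the comparison above.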


2021 ◽  
pp. 105551
Author(s):  
Mohamed Ben Ahmed ◽  
Maryia Hryhoryeva ◽  
Lars Magnus Hvattum ◽  
Mohamed Haouari

Complexity ◽  
2018 ◽  
Vol 2018 ◽  
pp. 1-16 ◽  
Author(s):  
Yiwen Zhang ◽  
Yuanyuan Zhou ◽  
Xing Guo ◽  
Jintao Wu ◽  
Qiang He ◽  
...  

The K-means algorithm is one of the ten classic algorithms in the area of data mining and has long been studied by researchers in numerous fields. However, the number of clusters k in the K-means algorithm is not always easy to determine, and the selection of the initial centers is vulnerable to outliers. This paper proposes an improved K-means clustering algorithm called the covering K-means algorithm (C-K-means). The C-K-means algorithm not only acquires efficient and accurate clustering results but also self-adaptively provides a reasonable number of clusters based on the data features. It comprises two phases: initialization via the covering algorithm (CA) and the Lloyd iteration of K-means. The first phase executes the CA, which self-organizes and recognizes the number of clusters k based on similarities in the data; it requires neither the number of clusters to be prespecified nor the initial centers to be manually selected. It therefore has a "blind" feature, that is, k is not preselected. The second phase performs the Lloyd iteration based on the results of the first phase. The C-K-means algorithm thus combines the advantages of CA and K-means. Experiments carried out on the Spark platform verify the good scalability of the C-K-means algorithm, which can effectively solve the problem of large-scale data clustering. Extensive experiments on real data sets show that the C-K-means algorithm outperforms existing algorithms in both accuracy and efficiency under both sequential and parallel conditions.
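
A rough sketch of the two-phase idea follows; the greedy fixed-radius cover is an assumption for illustration, as the paper's covering algorithm is more elaborate:

```python
import numpy as np

# Rough sketch of the two-phase C-K-means idea (illustrative only).
# Phase 1 greedily "covers" the data with balls of a chosen radius to
# discover k and the initial centers; phase 2 is standard Lloyd iteration.

def covering_init(X, radius):
    """Greedy cover: each still-uncovered point seeds a new cluster;
    points within `radius` of the seed become covered."""
    uncovered = np.ones(len(X), dtype=bool)
    centers = []
    while uncovered.any():
        seed = X[np.flatnonzero(uncovered)[0]]
        members = np.linalg.norm(X - seed, axis=1) <= radius
        centers.append(X[members & uncovered].mean(axis=0))
        uncovered &= ~members
    return np.array(centers)

def lloyd(X, centers, iters=100):
    """Standard Lloyd iteration from the given initial centers."""
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = np.argmin(dists, axis=1)
        new = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                        else centers[j] for j in range(len(centers))])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels

# Hypothetical usage: k emerges from the covering radius, not the user.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, size=(100, 2))
               for m in ((0, 0), (3, 3), (0, 4))])
centers = covering_init(X, radius=1.0)
centers, labels = lloyd(X, centers)
print(f"discovered k = {len(centers)}")
```

Note that k emerges from the covering phase rather than being supplied by the user, mirroring the "blind" feature described above.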


2017 ◽  
Vol 10 (1) ◽  
pp. 1-21 ◽  
Author(s):  
Ahmad Tavassoli ◽  
Mahmoud Mesbah ◽  
Mark Hickman
