TPA: A Two-Phase Approach Using Simulated Annealing for the Optimization of Census Taker Routes in Mexico

2015 ◽  
Vol 2015 ◽  
pp. 1-9
Author(s):  
Silvia Gaona ◽  
David Romero

Censuses in Mexico are taken by the National Institute of Statistics and Geography (INEGI). In this paper a Two-Phase Approach (TPA) to optimize the routes of INEGI’s census takers is presented. For each pollster, in the first phase, a route is produced by means of the Simulated Annealing (SA) heuristic, which attempts to minimize the travel distance subject to particular constraints. Whenever the route is unrealizable, it is made realizable in the second phase by constructing a visibility graph for each obstacle and applying Dijkstra’s algorithm to determine the shortest path in this graph. A tuning methodology based on the irace package was used to determine the parameter values for TPA on a subset of 150 instances provided by INEGI. The practical effectiveness of TPA was assessed on another subset of 1962 instances, comparing its performance with that of the in-use heuristic (INEGIH). The results show that TPA clearly outperforms INEGIH. The average improvement is 47.11%.
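The first phase's core, a simulated-annealing route search, can be sketched as follows. This is an illustrative 2-opt/SA sketch in Python, not INEGI's or the authors' implementation; the distance matrix, cooling schedule, and parameter values are all assumptions for illustration.

```python
import math
import random

def tour_length(route, dist):
    """Total length of a closed route over a symmetric distance matrix."""
    return sum(dist[route[i]][route[(i + 1) % len(route)]]
               for i in range(len(route)))

def simulated_annealing(dist, t0=100.0, cooling=0.995, steps=20000, seed=0):
    """Minimize route length with 2-opt moves under a geometric cooling schedule."""
    rng = random.Random(seed)
    n = len(dist)
    route = list(range(n))
    best, cost = route[:], tour_length(route, dist)
    best_cost = cost
    t = t0
    for _ in range(steps):
        i, j = sorted(rng.sample(range(n), 2))
        cand = route[:i] + route[i:j + 1][::-1] + route[j + 1:]  # 2-opt reversal
        cand_cost = tour_length(cand, dist)
        # Accept improvements always; accept worse moves with Boltzmann probability.
        if cand_cost < cost or rng.random() < math.exp((cost - cand_cost) / t):
            route, cost = cand, cand_cost
            if cost < best_cost:
                best, best_cost = route[:], cost
        t *= cooling
    return best, best_cost
```

In the paper's setting the accepted route would then be checked for realizability; the visibility-graph/Dijkstra repair of the second phase is a separate step.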

Complexity ◽  
2018 ◽  
Vol 2018 ◽  
pp. 1-18 ◽  
Author(s):  
Jessica L. Chapman ◽  
Lu Lu ◽  
Christine M. Anderson-Cook

An important aspect of good management of inventory for many single-use populations or stockpiles is to develop an informed consumption strategy to use a collection of single-use units, with varied reliability as a function of age, during scheduled operations. We present a two-phase approach to balance multiple objectives for a consumption strategy to ensure good performance on the average reliability, consistency of unit reliability over time, and least uncertainty of the reliability estimates. In the first phase, a representative subset of units is selected to explore the impact of using units at different time points on reliability performance and to identify beneficial consumption patterns using a nondominated sorting genetic algorithm based on multiple objectives. In the second phase, the results from the first phase are projected back to the full stockpile as a starting point for determining best consumption strategies that emphasize the priorities of the manager. The method can be generalized to other criteria of interest and management optimization strategies. The method is illustrated with an example that shares characteristics with some munition stockpiles and demonstrates the substantial advantages of the two-phase approach on the quality of solutions and efficiency of finding them.
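The nondominated sorting step at the heart of the genetic algorithm mentioned above can be sketched as follows. This is a simple O(n²)-per-front illustration assuming all objectives are minimized; NSGA-style algorithms use a faster bookkeeping variant, and the paper's actual objectives (average reliability, consistency, uncertainty) are not modeled here.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_sort(points):
    """Partition objective vectors into ranked Pareto fronts (rank 0 = best)."""
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        # a point is in the current front if nothing left dominates it
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts
```

For example, with objective vectors [(1, 2), (2, 1), (2, 2), (3, 3)] the first front contains the two mutually nondominated points (1, 2) and (2, 1).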


2020 ◽  
Author(s):  
Eftychia Koursari ◽  
Stuart Wallace ◽  
Panagiotis Michalis ◽  
Yi Xu ◽  
Manousos Valyrakis

Scour is the leading cause of bridge collapse worldwide, being responsible for compromising the stability of structures’ foundations. Scour and erosion can take place without prior warning and cause sudden failure. This study describes engineering measures and complications encountered during construction for a case study in the Scottish Borders (A68 Galadean Bridge). The bridge studied carries the A68 road across the Leader Water.

Transport Scotland’s structures crossing or near a watercourse are subject to a two-stage scour assessment following the Design Manual for Roads and Bridges (DMRB) BD97/12 Standard, ‘The Assessment of Scour and Other Hydraulic Actions at Highway Structures’. Structures identified at risk are monitored through Reactive Structures Safety Inspections following events likely to increase water levels. The most common form of monitoring is visual inspection; however, monitoring sensors are currently being implemented and trialled at locations at high risk of scour.

Scour in the area was identified during a Reactive Structures Safety Inspection, following which a weekly scour monitoring regime was established, alongside further Reactive Structures Safety Inspections, until remediation measures were put in place.

Despite the bridge being constructed perpendicular to the Leader Water, meandering of the watercourse was detected upstream. Sediment transport was the cause of an island formation immediately upstream of the structure. Non-uniform flow and secondary, spiral currents, resulting from the formation of the bend, were exacerbating scour and erosion in the area. The design of the remediation measures included the implementation of rock rolls alongside the affected riverbank. However, during construction, increased water levels resulting from thawing snow caused the collapse of a significant portion of the embankment supporting the structure’s abutment and the A68 road, prior to the realisation of the remediation measures. An emergency design revision was required and emergency measures had to be enforced.

The urgency of the works led to a two-phase approach being followed for the design and construction of the scour measures in the affected area. The first phase included the construction of a platform in front of the affected road embankment and the implementation of rock rolls to provide scour protection. The two-phase approach ensured the infrastructure at risk was protected from further deterioration while the reconstruction of the embankment was being designed.

The second phase of works included the reconstruction of the affected road embankment, for which the anticipated total scour depth was taken into account.

References:

Koursari E and Wallace S. 2019. Infrastructure scour management: a case study for A68 Galadean Bridge, UK. Proceedings of the Institution of Civil Engineers – Bridge Engineering, https://doi.org/10.1680/jbren.18.00062

Acknowledgements:

The authors would like to acknowledge Transport Scotland for funding this project.


Author(s):  
Yun Fong Lim ◽  
Song Jiu ◽  
Marcus Ang

Problem definition: In each period of a planning horizon, an online retailer decides how much to replenish each product and how to allocate its inventory to fulfillment centers (FCs) before demand is known. After the demand in the period is realized, the retailer decides from which FCs to fulfill it. It is crucial to optimize the replenishment, allocation, and fulfillment decisions jointly such that the expected total operating cost is minimized. The problem is challenging because the replenishment allocation is done in an anticipative manner under a push strategy, but the fulfillment is executed in a reactive way under a pull strategy. We propose a multiperiod stochastic optimization model to delicately integrate the anticipative replenishment allocation decisions with the reactive fulfillment decisions such that they are determined seamlessly as the demands are realized over time. Academic/practical relevance: The aggressive expansion in e-commerce sales significantly escalates online retailers’ operating costs. Our methodology helps boost their competency in this cutthroat industry. Methodology: We develop a two-phase approach based on robust optimization to solve the problem. The first phase decides whether the products should be replenished in each period (binary decisions). We fix these binary decisions in the second phase, in which we determine the replenishment, allocation, and fulfillment quantities. Results: Numerical experiments suggest that our approach outperforms existing methods from the literature in solution quality and computational time and performs within 7% of a benchmark with perfect information. A study using real data from a major fashion online retailer in Asia suggests that the two-phase approach can potentially reduce the retailer’s cumulative cost significantly. Managerial implications: By decoupling the binary decisions from the continuous decisions, our methodology can solve large problem instances (up to 1,200 products).
The integration, robustness, and adaptability of the decisions under our approach create significant value.
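The decoupling idea (fix the binary replenish/no-replenish decisions first, then optimize the continuous quantities) can be illustrated with a deliberately tiny single-product lot-sizing toy. This is not the paper's robust-optimization model: the brute-force enumeration of binary patterns, the cost structure (fixed setup plus linear holding cost), and the order-exactly-until-the-next-order policy are all simplifying assumptions for illustration.

```python
from itertools import product as patterns

def plan_cost(order_in, demand, fixed_cost, hold_cost):
    """Given binary order decisions, the continuous phase is easy here: each
    order covers demand up to the next order; cost = setups + holding."""
    cost, inv = 0.0, 0.0
    for t, d in enumerate(demand):
        if order_in[t]:
            cost += fixed_cost
            # order exactly enough to last until the next order period
            nxt = next((s for s in range(t + 1, len(demand)) if order_in[s]),
                       len(demand))
            inv = sum(demand[t:nxt])
        inv -= d
        if inv < 0:
            return float("inf")  # this pattern cannot meet demand
        cost += hold_cost * inv  # holding cost on end-of-period inventory
    return cost

def two_phase(demand, fixed_cost, hold_cost):
    """Phase 1: enumerate binary order patterns. Phase 2: evaluate the
    continuous quantities implied by each pattern and keep the cheapest."""
    best = min(patterns([0, 1], repeat=len(demand)),
               key=lambda pat: plan_cost(pat, demand, fixed_cost, hold_cost))
    return best, plan_cost(best, demand, fixed_cost, hold_cost)
```

With demand [10, 10, 10], a setup cost of 30, and holding cost of 1, ordering once up front beats ordering every period. The paper replaces both brute-force phases with robust optimization, which is what makes instances with 1,200 products tractable.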


2020 ◽  
Author(s):  
Teresa Rexin ◽  
Mason A. Porter

Traveling to different destinations is a big part of our lives. How do we know the best way to navigate from one place to another? Perhaps we could test all of the different ways of traveling between two places, but another method is using mathematics and computation to find a shortest path. We discuss how to find a shortest path and introduce Dijkstra’s algorithm to minimize the total cost of a path, where the cost may be the travel distance or travel time. We also discuss how shortest paths can be used in the real world to save time and increase traveling efficiency.
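As an illustration of the algorithm discussed above, here is a minimal priority-queue implementation of Dijkstra's algorithm in Python. The toy graph of places and travel times is made up for the example.

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path costs from source in a graph given as
    {node: [(neighbor, edge_cost), ...]} with nonnegative edge costs."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Example: hypothetical travel times in minutes between four places
graph = {
    "home": [("cafe", 5), ("park", 10)],
    "cafe": [("library", 6)],
    "park": [("library", 2)],
    "library": [],
}
```

Here the direct-looking route via the cafe (5 + 6 = 11 minutes) happens to beat the route via the park (10 + 2 = 12 minutes), which is exactly the kind of comparison the algorithm settles systematically.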


10.29007/sgpl ◽  
2018 ◽  
Author(s):  
Marijn Heule ◽  
Armin Biere

Although clausal propositional proofs are significantly smaller compared to resolution proofs, their size is still too large for several applications. In this paper we present several methods to compress clausal proofs. These methods are based on a two-phase approach. The first phase consists of a light-weight compression algorithm that can easily be added to satisfiability solvers that support the emission of clausal proofs. In the second phase, we propose to use a powerful off-the-shelf general-purpose compression tool, such as bzip2 and 7z. Sorting literals before compression facilitates a delta encoding, which combined with variable-byte encoding improves the quality of the compression. We show that clausal proofs can be compressed by one order of magnitude by applying both phases.
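A minimal sketch of the sort-then-delta-then-variable-byte step described above. It assumes the common mapping of a DIMACS literal l to the nonnegative code 2|l| (+1 if negative); the exact byte layout and literal mapping of the authors' tools may differ.

```python
def lit_code(l):
    """Map a DIMACS literal to a nonnegative code: 2|l|, +1 if negative."""
    return 2 * abs(l) + (1 if l < 0 else 0)

def varbyte(n):
    """Variable-byte encode a nonnegative integer: 7 payload bits per byte,
    least-significant group first, high bit set on the final byte."""
    out = bytearray()
    while n >= 128:
        out.append(n & 0x7F)
        n >>= 7
    out.append(n | 0x80)
    return bytes(out)

def encode_clause(codes):
    """Sort literal codes, delta-encode the gaps, then variable-byte encode
    each gap. Sorting keeps gaps small, so most fit in a single byte."""
    lits = sorted(codes)
    deltas = [lits[0]] + [b - a for a, b in zip(lits, lits[1:])]
    return b"".join(varbyte(d) for d in deltas)
```

For the clause (3 ∨ ¬1 ∨ 7) the sorted codes are [3, 6, 14], the deltas [3, 3, 8], and each delta fits in one byte, so the clause occupies three bytes before the general-purpose compressor even runs.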


Algorithms ◽  
2020 ◽  
Vol 13 (9) ◽  
pp. 215
Author(s):  
Amit Saxena ◽  
Shreya Pare ◽  
Mahendra Singh Meena ◽  
Deepak Gupta ◽  
Akshansh Gupta ◽  
...  

This paper proposes a novel approach for selecting a subset of features in semi-supervised datasets where only some of the patterns are labeled. The whole process is completed in two phases. In the first phase, i.e., Phase-I, the whole dataset is divided into two parts: the first part, which contains labeled patterns, and the second part, which contains unlabeled patterns. In the first part, a small number of features are identified using well-known maximum-relevance (computed on the first part) and minimum-redundancy (computed on the whole dataset) feature selection approaches based on the correlation coefficient. From the identified set of features, the subset that produces a high classification accuracy with a supervised classifier on the labeled patterns is selected for later processing. In the second phase, i.e., Phase-II, the patterns belonging to the first and second parts are clustered separately into the available number of classes of the dataset. Each cluster of the first part is assigned the class of the majority of its patterns, which are already labeled. Cluster centroids of the two parts are then paired: each centroid of the second part is paired with the nearest centroid of the first part. As the class of the first-part centroid is known, the same class can be assigned to the paired second-part centroid, whose class is unknown. If the actual classes of the patterns in the second part are known, they can be used to test the classification accuracy on that part. The proposed two-phase approach performs well in terms of classification accuracy and number of features selected on the given benchmarked datasets.
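The Phase-II centroid pairing can be sketched as follows. The data layout (clusters as lists of feature vectors) and the plain Euclidean distance are assumptions for illustration; the paper's clustering algorithm and distance measure may differ.

```python
import math

def centroid(points):
    """Component-wise mean of a list of feature vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def transfer_labels(labeled_clusters, unlabeled_clusters):
    """Pair each unlabeled-part cluster with the nearest labeled-part
    centroid and inherit that cluster's majority class."""
    # majority class and centroid per labeled cluster
    anchors = []
    for cluster, labels in labeled_clusters:
        majority = max(set(labels), key=labels.count)
        anchors.append((centroid(cluster), majority))
    assigned = []
    for cluster in unlabeled_clusters:
        c = centroid(cluster)
        _, cls = min(anchors, key=lambda a: math.dist(a[0], c))
        assigned.append(cls)
    return assigned
```

Each labeled cluster is summarized by (centroid, majority class); each unlabeled cluster then inherits the class of its nearest labeled centroid, which is exactly the pairing step of Phase-II.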


Algorithms ◽  
2019 ◽  
Vol 12 (4) ◽  
pp. 67 ◽  
Author(s):  
Rommel Dias Saraiva ◽  
Napoleão Nepomuceno ◽  
Plácido Rogério Pinheiro

We propose in this paper a two-phase approach that decomposes the process of solving the three-dimensional single Container Loading Problem (CLP) into subsequent tasks: (i) the generation of blocks of boxes and (ii) the loading of blocks into the container. The first phase is deterministic, and it is performed by means of constructive algorithms from the literature. The second phase is non-deterministic, and it is performed with the use of Generate-and-Solve (GS), a problem-independent hybrid optimization framework based on problem instance reduction that combines a metaheuristic with an exact solver. Computational experiments performed on benchmark instances indicate that our approach presents competitive results compared to those found by state-of-the-art algorithms, particularly for problem instances consisting of a few types of boxes. In fact, we present new best solutions for classical instances from groups BR1 and BR2.


Author(s):  
Arjan Akkermans ◽  
Gerhard Post ◽  
Marc Uetz

In this paper we propose a two-phase approach to solve the shift and break design problem using integer linear programming. In the first phase we create the shifts, while heuristically taking the breaks into account. In the second phase we assign breaks to each occurrence of any shift, one by one, repeating this until no improvement is found. On a set of benchmark instances, comprising both randomly generated and real-life ones, this approach obtains better results than the current best known method for the shift and break design problem.


d'CARTESIAN ◽  
2021 ◽  
Vol 9 (2) ◽  
pp. 158
Author(s):  
Yohana Permata Hutapea ◽  
Chriestie E.J.C. Montolalu ◽  
Hanny A.H. Komalig

Manado city has many notable tourist sites, resulting in an increase in the number of tourists visiting every year. Tourists require hotels with adequate facilities for their stay, such as 4-star hotels. After visiting Manado, tourists return to where they came from, and one of the transportation modes used is the airplane. They then need not just any route but the shortest path to Sam Ratulangi airport. Based on previous research, the shortest path is modeled by graph theory: hotels are represented as vertices, and the roads from each hotel to the airport are represented as edges. Shortest paths are found using Dijkstra’s algorithm and then compared with the shortest path given by Google Maps. Based on the analysis results, Dijkstra’s algorithm selects the shortest path with the smallest weight. From the comparison it can be concluded that, in determining the route from a 4-star hotel to the airport, Dijkstra’s algorithm emphasizes short travel distance, whereas Google Maps emphasizes short travel time.


Author(s):  
M.G. Burke ◽  
M.K. Miller

Interpretation of fine-scale microstructures containing high volume fractions of second phase is complex. In particular, microstructures developed through decomposition within low temperature miscibility gaps may be extremely fine. This paper compares the morphological interpretations of such complex microstructures by the high-resolution techniques of TEM and atom probe field-ion microscopy (APFIM). The Fe-25 at% Be alloy selected for this study was aged within the low temperature miscibility gap to form a <100> aligned two-phase microstructure. This triaxially modulated microstructure is composed of an Fe-rich ferrite phase and a B2-ordered Be-enriched phase. The microstructural characterization through conventional bright-field TEM is inadequate because of the many contributions to image contrast. The ordering reaction which accompanies spinodal decomposition in this alloy permits simplification of the image by the use of the centered dark field (CDF) technique to image just one phase. A CDF image formed with a B2 superlattice reflection is shown in fig. 1. In this CDF micrograph, the B2-ordered Be-enriched phase appears as bright regions in the darkly-imaging ferrite. By examining the specimen in a [001] orientation, the <100> nature of the modulations is evident.

