Computing Large Market Equilibria Using Abstractions

2021 · Author(s): Christian Kroer, Alexander Peysakhovich, Eric Sodomka, Nicolas E. Stier-Moses

Computing market equilibria is an important practical problem for market design, for example, in fair division of items. However, computing equilibria requires large amounts of information, often the valuation of every buyer for every item, as well as substantial computing power. In “Computing Large Market Equilibria Using Abstractions,” the authors study abstraction methods for ameliorating these issues. The basic abstraction idea is as follows: first, construct a coarsened abstraction of a given market; then solve for the equilibrium in the abstraction; and finally, lift the prices and allocations back to the original market. The authors give theoretical guarantees on the solution quality obtained via this approach. Two abstraction methods of interest for practitioners are then introduced: (1) filling in unknown valuations using techniques from matrix completion and (2) reducing the problem size by aggregating groups of buyers/items into smaller numbers of representative buyers/items and solving for equilibrium in this coarsened market.
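
As a rough illustration of this pipeline, the sketch below coarsens a linear Fisher market by clustering buyers with similar valuation rows into representative buyers, solves the coarse equilibrium via the Eisenberg-Gale convex program, and reads item prices off the duals of the supply constraints. The k-means clustering, the cvxpy solver, and the direct price lift are illustrative assumptions, not the authors' exact construction.

```python
# A minimal sketch of market abstraction: cluster buyers into representative
# buyers, solve an Eisenberg-Gale program on the coarse market, and lift the
# resulting item prices back to the original market. The k-means coarsening
# and the simple lifting rule are illustrative choices, not the paper's
# exact construction.
import numpy as np
import cvxpy as cp
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_buyers, n_items, n_reps = 40, 8, 5
V = rng.uniform(0.0, 1.0, size=(n_buyers, n_items))  # valuations v_ij
budgets = np.ones(n_buyers)                           # unit budgets

# 1) Coarsen: group buyers with similar valuation rows.
labels = KMeans(n_clusters=n_reps, n_init=10, random_state=0).fit_predict(V)
V_rep = np.vstack([V[labels == k].mean(axis=0) for k in range(n_reps)])
b_rep = np.array([budgets[labels == k].sum() for k in range(n_reps)])

# 2) Solve the coarse linear Fisher market (Eisenberg-Gale convex program).
X = cp.Variable((n_reps, n_items), nonneg=True)
utilities = cp.sum(cp.multiply(V_rep, X), axis=1)
prob = cp.Problem(
    cp.Maximize(cp.sum(cp.multiply(b_rep, cp.log(utilities)))),
    [cp.sum(X, axis=0) <= 1],          # unit supply of each item
)
prob.solve()

# 3) Lift: equilibrium item prices are the duals of the supply constraints;
# items simply keep these prices in the original market.
prices = prob.constraints[0].dual_value
print("coarse equilibrium prices:", np.round(prices, 3))
```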

2020 · Vol 61 (5) · pp. 1977-1999 · Author(s): H. Fairclough, M. Gilbert

Traditional truss layout optimization employing the ground structure method will often generate layouts that are too complex to fabricate in practice. To address this, mixed integer linear programming can be used to enforce buildability constraints, leading to simplified truss forms. Limits on the number of joints in the structure and/or the minimum angle between connected members can be imposed, with the joints arising from crossovers of pairs of members accounted for. However, in layout optimization the number of constraints arising from such ‘crossover joints’ increases rapidly with problem size, along with the computational expense. To address this, crossover constraints are here generated dynamically and added at runtime only as required (so-called lazy constraints); speedups of more than 20 times are observed whilst ensuring no loss of solution quality. Results from the layout optimization step are also shown to provide a suitable starting point for a non-linear geometry optimization step, enabling results to be obtained that agree with solutions from the literature. Finally, it is shown that symmetric problems may not have symmetric optimal solutions, and that multiple distinct and equally optimal solutions may exist.
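
The lazy-constraint idea generalizes well beyond trusses. The toy sketch below shows the pattern on a small covering LP: solve a relaxation, separate the most violated constraint, add it, and re-solve, so that only the constraints the current solution actually bumps into are ever generated. The covering problem and the scipy solver are stand-ins for illustration; the paper applies the same pattern to crossover-joint constraints inside a mixed integer program.

```python
# A generic sketch of lazy constraint generation: solve a relaxation, find
# the constraint the current solution violates most, add only that one, and
# re-solve. The toy covering LP stands in for the paper's crossover-joint
# constraints, which follow the same pattern inside a MILP.
import numpy as np
from scipy.optimize import linprog

n = 6
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5), (1, 5)]
c = np.ones(n)                       # minimise total selection
A_ub, b_ub = [], []                  # constraint pool, grown lazily

while True:
    res = linprog(c,
                  A_ub=np.array(A_ub) if A_ub else None,
                  b_ub=np.array(b_ub) if b_ub else None,
                  bounds=[(0, 1)] * n, method="highs")
    x = res.x
    # Separation step: find the most violated constraint x_i + x_j >= 1.
    slack, (i, j) = min((x[i] + x[j] - 1, (i, j)) for (i, j) in edges)
    if slack > -1e-9:                # nothing violated: solution is optimal
        break
    row = np.zeros(n)
    row[i] = row[j] = -1.0           # encode -x_i - x_j <= -1
    A_ub.append(row)
    b_ub.append(-1.0)

print(f"optimum {res.fun:.2f} using {len(A_ub)} of {len(edges)} constraints")
```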


2020 · Vol 54 (2) · pp. 307-323 · Author(s): Wen-Chiung Lee, Jen-Ya Wang

This study introduces a two-machine three-agent scheduling problem. We aim to minimize the total tardiness of jobs from agent 1, subject to the constraints that the maximum completion time of jobs from agent 2 cannot exceed a given limit and that two maintenance activities from agent 3 must be conducted within two maintenance windows. Owing to the NP-hardness of this problem, a genetic algorithm (named GA+) is proposed to obtain approximate solutions, and a branch-and-bound algorithm (named B&B) is developed to generate optimal solutions. When the problem size is small, we use B&B to verify the solution quality of GA+. When the number of jobs is large, a relative deviation measure is used to show the gap between GA+ and an ordinary genetic algorithm. Experimental results show that the proposed genetic algorithm generates approximate solutions within reasonable execution time.
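
As a hedged sketch of the machinery a GA such as GA+ builds on, the following minimal permutation-encoded genetic algorithm minimizes total tardiness on a single machine. The single-machine simplification, the operator choices, and the parameter values are illustrative assumptions, not the paper's GA+.

```python
# A minimal permutation-encoded genetic algorithm for total tardiness,
# illustrating the encoding and operators a GA like GA+ typically uses.
# This solves a simplified single-machine version, not the paper's
# two-machine three-agent problem; all parameters are illustrative.
import random

random.seed(1)
p = [4, 7, 2, 5, 6, 3, 8, 4]          # processing times
d = [6, 15, 4, 12, 20, 7, 30, 10]     # due dates
n = len(p)

def tardiness(seq):
    t = total = 0
    for j in seq:
        t += p[j]
        total += max(0, t - d[j])
    return total

def order_crossover(a, b):
    # Keep a slice of parent a; fill the rest in parent b's order.
    i, j = sorted(random.sample(range(n), 2))
    middle = a[i:j + 1]
    rest = [g for g in b if g not in middle]
    return rest[:i] + middle + rest[i:]

def mutate(seq, rate=0.2):
    s = seq[:]
    if random.random() < rate:
        i, j = random.sample(range(n), 2)
        s[i], s[j] = s[j], s[i]       # swap two jobs
    return s

pop = [random.sample(range(n), n) for _ in range(30)]
for _ in range(200):
    pop.sort(key=tardiness)
    elite = pop[:10]                   # truncation selection
    children = [mutate(order_crossover(*random.sample(elite, 2)))
                for _ in range(20)]
    pop = elite + children

best = min(pop, key=tardiness)
print("best sequence:", best, "total tardiness:", tardiness(best))
```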


2016 · Vol 120 (1223) · pp. 209-232 · Author(s): P. R. Spalart, V. Venkatakrishnan

This article examines the increasingly crucial role played by Computational Fluid Dynamics (CFD) in the analysis, design, certification, and support of aerospace products. The status of CFD is described, and we identify opportunities for CFD to have a more substantial impact. The challenges facing CFD are also discussed, primarily in terms of numerical solution, computing power, and physical modelling. We believe the community must find a balance between enthusiasm and rigour. Besides becoming faster and more affordable by exploiting higher computing power, CFD needs to become more reliable, more reproducible across users, and better understood and integrated with other disciplines and engineering processes. Uncertainty quantification is universally considered a major goal but will be slow to take hold. The prospects are good for steady problems with Reynolds-Averaged Navier-Stokes (RANS) turbulence modelling to be solved accurately and without user intervention within a decade, even for very complex geometries, provided technologies such as solution adaptation are matured for large three-dimensional problems. On the other hand, current projections for supercomputers show a future rate of growth only half of the rate enjoyed from the 1990s to 2013; true exaflop performance is not close. This will delay pure Large-Eddy Simulation (LES) for aerospace applications with their high Reynolds numbers, but hybrid RANS-LES approaches have great potential. Our expectations for a breakthrough in turbulence, whether within traditional modelling or LES, are low; as a result, off-design flow physics including separation will continue to pose a substantial challenge, as will laminar-turbulent transition. We also advocate much improved user interfaces, providing instant access to rich numerical and physical information as well as warnings about solution quality, thus naturally training the user.


2005 · Vol 35 (10) · pp. 2500-2509 · Author(s): Kevin A. Crowe, J. D. Nelson

A common approach for incorporating opening constraints into harvest scheduling is through the area-restricted model. This model is used to select which stands to include in each opening while simultaneously determining an optimal harvest schedule over multiple time periods. In this paper we use optimal benchmarks from a range of harvest scheduling problem instances to test a metaheuristic algorithm, simulated annealing, that is commonly used to solve these problems. Performance of the simulated annealing algorithm was assessed over a range of problem attributes, such as the number of forest polygons, age-class distribution, and opening size. In total, 29 problem instances were used, ranging in size from 1269 to 36 270 binary decision variables. Overall, the mean objective function values found with simulated annealing ranged from approximately 87% to 99% of the optima after 30 min of computing time, and a moderate downward trend in solution quality with increasing problem size was observed.
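
For readers unfamiliar with the metaheuristic being benchmarked, the sketch below shows a bare-bones simulated annealing loop on a toy 0-1 stand-selection problem with a penalized adjacency rule. The instance, penalty form, and cooling schedule are illustrative assumptions; the area-restricted model tested in the paper is far richer (multiple periods, opening sizes, age classes).

```python
# A minimal simulated-annealing sketch for a 0-1 harvest-style problem:
# maximise stand values subject to a penalised adjacency (opening) rule.
# The toy data and penalty form are illustrative, not the paper's
# area-restricted model.
import math
import random

random.seed(7)
n = 20
value = [random.randint(5, 20) for _ in range(n)]
adjacent = [(i, i + 1) for i in range(n - 1)]   # chain adjacency, for brevity

def objective(x, penalty=50):
    gain = sum(v for v, xi in zip(value, x) if xi)
    clashes = sum(1 for i, j in adjacent if x[i] and x[j])
    return gain - penalty * clashes             # penalise adjacent openings

x = [0] * n
best, best_val = x[:], objective(x)
T = 10.0
while T > 0.01:
    y = x[:]
    y[random.randrange(n)] ^= 1                 # flip one stand in/out
    delta = objective(y) - objective(x)
    if delta >= 0 or random.random() < math.exp(delta / T):
        x = y                                    # accept move
        if objective(x) > best_val:
            best, best_val = x[:], objective(x)
    T *= 0.995                                  # geometric cooling

print("best value:", best_val,
      "selected stands:", [i for i in range(n) if best[i]])
```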


2017 · Vol 34 (1) · pp. 145-163 · Author(s): Peng-Sheng You, Pei-Ju Lee, Yi-Chih Hsieh

Purpose: Many bike rental organizations permit customers to pick up bikes at one bike station and return them at a different one. However, this service may result in bike imbalance, as bikes accumulate at stations with low demand. To overcome the imbalance problem, this paper aims to develop a decision model that minimizes the total costs of unmet demand and empty-bike transport by determining the bike fleet size, deployments and the vehicle routing schedule for bike transfers.

Design/methodology/approach: This paper develops a constrained mixed-integer programming model for the bike imbalance problem. The model is non-deterministic polynomial-time (NP)-hard, so a two-phase heuristic approach is developed to solve it. Phase 1 determines the fleet size, deployment level and the number of satisfied demands; Phase 2 determines the routing schedule for bike transfers.

Findings: Computational results show that the proposed approach outperforms the General Algebraic Modeling System (GAMS) in terms of solution quality, regardless of problem size; that the objective values and the allocated rental-bike fleet size increase with the number of rental stations; and that transportation cost is not directly proportional to the number of bike stations.

Originality/value: The authors provide an integrated model that simultaneously handles fleet sizing, empty-resource repositioning and vehicle routing for bike transfers in multiple-station systems, together with an algorithm that can be applied to large-scale problems that cannot be solved by the well-known commercial software GAMS/CPLEX.
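
A hedged sketch of the two-phase idea follows: Phase 1 derives deployments from each station's surplus or deficit against forecast demand, and Phase 2 builds a transfer route with a nearest-neighbour rule. The data and both rules are illustrative stand-ins for the paper's mixed-integer formulation and heuristic.

```python
# A compact sketch of the two-phase idea: Phase 1 sets deployments from
# forecast demand (stations' surplus/deficit); Phase 2 builds a transfer
# route with a nearest-neighbour rule. Data and rules are illustrative
# stand-ins for the paper's MIP model and heuristic.
import math

stations = {          # station: (x, y, bikes_on_hand, forecast_demand)
    "A": (0, 0, 12, 5),
    "B": (4, 1, 2, 9),
    "C": (1, 5, 10, 3),
    "D": (6, 4, 1, 8),
}

# Phase 1: deployment -- surplus stations supply, deficit stations receive.
surplus = {s: max(0, on - dem) for s, (_, _, on, dem) in stations.items()}
deficit = {s: max(0, dem - on) for s, (_, _, on, dem) in stations.items()}
fleet_size = sum(on for (_, _, on, _) in stations.values())
print("fleet:", fleet_size, "surplus:", surplus, "deficit:", deficit)

def dist(a, b):
    (xa, ya), (xb, yb) = stations[a][:2], stations[b][:2]
    return math.hypot(xa - xb, ya - yb)

# Phase 2: nearest-neighbour routing over stations that need a visit.
todo = {s for s in stations if surplus[s] > 0 or deficit[s] > 0}
route, here = [], "A"
while todo:
    nxt = min(todo, key=lambda s: dist(here, s))
    route.append(nxt)
    todo.remove(nxt)
    here = nxt
print("transfer route:", " -> ".join(route))
```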


Author(s): Kaike Zhang, Xueping Li, Mingzhou Jin

This study generalizes the r-interdiction median (RIM) problem with fortification to simultaneously consider two types of risks: probabilistic exogenous disruptions and endogenous disruptions caused by intentional attacks. We develop a bilevel programming model that includes a lower-level interdiction problem and a higher-level fortification problem to hedge against such risks. We then prove that the interdiction problem is supermodular and subsequently adopt the cuts associated with supermodularity to develop an efficient cutting-plane algorithm that achieves exact solutions. For the fortification problem, we adopt the logic-based Benders decomposition (LBBD) framework to take advantage of the two-level structure and the property that a facility should not be fortified if it is not attacked at the lower level. Numerical experiments show that the cutting-plane algorithm is more efficient than benchmark methods in the literature, especially as the problem size grows. With regard to solution quality, LBBD outperforms the greedy algorithm in the literature with an improvement of up to 13.2% in total cost, and it is as good as or better than the tree-search implicit enumeration method. Summary of Contribution: This paper studies the r-interdiction median problem with fortification (RIMF) in a supply chain network, simultaneously considering two types of disruption risks: random disruptions that occur probabilistically and disruptions caused by intentional attacks. The problem is to determine the allocation of limited facility fortification resources over an existing network. It is modeled as a bilevel program combining a defender’s problem and an attacker’s problem, which generalizes the r-interdiction median problem with probabilistic fortification. This paper is suitable for IJOC in two main respects: (1) The lower-level attacker’s interdiction problem is a challenging high-degree nonlinear model. In the literature, only a total enumeration method has been applied to solve a special case of this problem. By exploiting a special structural property of the problem, namely the supermodularity of the transportation cost function, we develop an exact cutting-plane method that solves the problem to optimality. Extensive numerical studies were conducted. Hence, this paper fits the intersection of operations research and computing. (2) We develop an efficient logic-based Benders decomposition algorithm to solve the higher-level defender’s fortification problem. Overall, this study generalizes several important problems in the literature, such as RIM, RIMF, and RIMF with probabilistic fortification (RIMF-p).
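
The supermodularity that drives the cutting-plane algorithm is easy to state and to check numerically: interdicting a facility hurts more when more facilities are already down. The brute-force checker below illustrates the defining inequality on a toy assignment cost (each client served by its cheapest surviving facility); the toy instance and the penalty for losing all facilities are illustrative, and the paper proves the property analytically for the actual transportation cost.

```python
# The cutting-plane method rests on supermodularity of the interdiction
# cost: f(S + j) - f(S) <= f(T + j) - f(T) whenever S is a subset of T.
# This brute-force checker illustrates the property on a toy cost function;
# the paper proves it analytically for the actual transportation cost.
from itertools import combinations

cost = [[3, 7, 9],      # cost[client][facility]
        [8, 2, 6],
        [5, 5, 4]]
facilities = range(3)

def f(interdicted):
    """Total cost when each client uses its cheapest surviving facility."""
    total = 0
    for row in cost:
        alive = [row[j] for j in facilities if j not in interdicted]
        total += min(alive) if alive else max(row) * 10   # all-down penalty
    return total

def is_supermodular():
    subsets = [frozenset(c)
               for r in range(4) for c in combinations(facilities, r)]
    for S in subsets:
        for T in subsets:
            if not S <= T:
                continue
            for j in facilities:
                if j in T:
                    continue
                # Increasing marginal damage is the defining inequality.
                if f(S | {j}) - f(S) > f(T | {j}) - f(T):
                    return False
    return True

print("supermodular on this instance:", is_supermodular())
```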


2018 · Vol 2018 · pp. 1-17 · Author(s): Jiage Huo, Zhengxu Wang, Felix T. S. Chan, Carman K. M. Lee, Jan Ola Strandhagen

We use a hybrid approach that combines an ant colony optimization algorithm with beam search (ACO-BS) to solve the Simple Assembly Line Balancing Problem (SALBP). The objective is to minimise the number of workstations for a given fixed cycle time; the hybridization is intended to improve solution quality and speed up the search. Results on 269 benchmark instances show that 95.54% of the problems reach their optimal solutions within 360 CPU seconds. In addition, we choose order strength and time variability as indicators of the complexity of SALBP instances and randomly generate 27 instances with a total of 400 tasks each (a problem size much larger than that of the largest benchmark instance), with order strength at three levels (0.2, 0.6, and 0.9) and time variability at three levels (5-15, 65-75, and 135-145); the processing times are generated following either a unimodal or a bimodal distribution. Comparison with solutions obtained by priority rules shows that ACO-BS significantly improves the quality of the best solutions.
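
As a hedged illustration of the beam-search half of ACO-BS (pheromone guidance omitted), the sketch below solves a toy SALBP-1 instance: states are partial assignments, each layer assigns one precedence-ready task, and only the most promising partial solutions survive. The instance, beam width, and scoring rule are assumptions for illustration.

```python
# A minimal beam-search sketch for SALBP-1 (minimise stations for a fixed
# cycle time). This shows only the beam-search half of ACO-BS; the toy
# instance, beam width, and scoring rule are illustrative assumptions.
CYCLE = 10
times = {1: 6, 2: 4, 3: 5, 4: 3, 5: 7, 6: 2}
preds = {1: [], 2: [1], 3: [1], 4: [2], 5: [3, 4], 6: [5]}
BEAM = 3

# A state is (stations_used, load_of_open_station, frozenset_of_done_tasks).
states = [(1, 0, frozenset())]
while True:
    done_all = [s for s in states if len(s[2]) == len(times)]
    if done_all:
        best = min(done_all)
        break
    nxt = []
    for used, load, done in states:
        ready = [t for t in times if t not in done
                 and all(p in done for p in preds[t])]
        for t in ready:
            if load + times[t] <= CYCLE:          # fits in open station
                nxt.append((used, load + times[t], done | {t}))
            else:                                  # open a new station
                nxt.append((used + 1, times[t], done | {t}))
    # Keep the most promising partial solutions: fewest stations,
    # then fullest open station.
    nxt.sort(key=lambda s: (s[0], -s[1]))
    states = nxt[:BEAM]

print("stations needed:", best[0])
```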


2020 · Vol 34 (02) · pp. 2192-2199 · Author(s): Riley Murray, Christian Kroer, Alex Peysakhovich, Parikshit Shah

The problem of allocating scarce items to individuals is an important practical question in market design. An increasingly popular set of mechanisms for this task uses the concept of market equilibrium: individuals report their preferences, have a budget of real or fake currency, and a set of prices for items and allocations is computed that sets demand equal to supply. An important real-world issue with such mechanisms is that individual valuations are often only imperfectly known. In this paper, we show how concepts from classical market equilibrium can be extended to reflect such uncertainty. We show that in linear, divisible Fisher markets a robust market equilibrium (RME) always exists; this also holds in settings where buyers may retain unspent money. We provide theoretical analysis of the allocative properties of RME in terms of envy and regret. Though RME are hard to compute for general uncertainty sets, we consider some natural and tractable uncertainty sets that lead to well-behaved formulations of the problem, solvable via modern convex programming methods. Finally, we show that even very mild uncertainty about valuations can cause RME allocations to outperform those that treat the estimated valuations as certain.
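
As a hedged illustration of why tractable robust formulations exist, the sketch below maximises the worst case of the Eisenberg-Gale objective over a finite set of valuation scenarios, a deliberately simple uncertainty set. It is a convex program solvable with off-the-shelf tools; the scenario set and the solver are illustrative choices, not necessarily among the uncertainty sets analysed in the paper.

```python
# A hedged sketch of a robust Fisher-market computation: maximise the
# worst case of the Eisenberg-Gale objective over a finite set of
# valuation scenarios. The finite scenario set is a deliberately simple
# uncertainty set chosen for illustration; the paper analyses RME for
# richer sets.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)
n_buyers, n_items, n_scen = 5, 4, 3
budgets = np.ones(n_buyers)
# Each scenario perturbs a nominal valuation matrix.
V_nom = rng.uniform(0.2, 1.0, size=(n_buyers, n_items))
scenarios = [V_nom + rng.normal(0, 0.05, V_nom.shape).clip(-0.1, 0.1)
             for _ in range(n_scen)]

X = cp.Variable((n_buyers, n_items), nonneg=True)
t = cp.Variable()
constraints = [cp.sum(X, axis=0) <= 1]   # unit supply per item
for Vk in scenarios:
    util_k = cp.sum(cp.multiply(Vk, X), axis=1)
    # EG objective under scenario k must be at least t.
    constraints.append(cp.sum(cp.multiply(budgets, cp.log(util_k))) >= t)

cp.Problem(cp.Maximize(t), constraints).solve()
print(f"worst-case EG value: {float(t.value):.4f}")
print("allocation row sums:", np.round(np.sum(X.value, axis=1), 3))
```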


2010 · Vol 39 · pp. 269-300 · Author(s): V. Bulitko, Y. Björnsson, R. Lawrence

Real-time heuristic search algorithms satisfy a constant bound on the amount of planning per action, independent of problem size. As a result, they scale up well as problems become larger. This property would make them well suited for video games where Artificial Intelligence controlled agents must react quickly to user commands and to other agents' actions. On the downside, real-time search algorithms employ learning methods that frequently lead to poor solution quality and cause the agent to appear irrational by re-visiting the same problem states repeatedly. The situation changed recently with a new algorithm, D LRTA*, which attempted to eliminate learning by automatically selecting subgoals. D LRTA* is well poised for video games, except it has a complex and memory-demanding pre-computation phase during which it builds a database of subgoals. In this paper, we propose a simpler and more memory-efficient way of pre-computing subgoals thereby eliminating the main obstacle to applying state-of-the-art real-time search methods in video games. The new algorithm solves a number of randomly chosen problems off-line, compresses the solutions into a series of subgoals and stores them in a database. When presented with a novel problem on-line, it queries the database for the most similar previously solved case and uses its subgoals to solve the problem. In the domain of pathfinding on four large video game maps, the new algorithm delivers solutions eight times better while using 57 times less memory and requiring 14% less pre-computation time.
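
A toy rendering of the offline/online split may help fix ideas: solve random grid problems offline, keep every k-th state of each solution as a subgoal, and online reuse the subgoals of the most similar stored case. The grid domain, similarity rule, and compression interval below are illustrative assumptions, not the paper's algorithm.

```python
# A toy sketch of the compress-and-reuse idea: solve random grid problems
# offline with BFS, keep every k-th state on each path as a subgoal, and
# online reuse the subgoals of the most similar stored case. The similarity
# rule and interval k are illustrative choices.
from collections import deque
import random

random.seed(5)
W = H = 12

def bfs(start, goal):
    """Shortest path on an open grid (stand-in for a game map)."""
    prev, frontier = {start: None}, deque([start])
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        x, y = cur
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nb[0] < W and 0 <= nb[1] < H and nb not in prev:
                prev[nb] = cur
                frontier.append(nb)
    return None

def compress(path, k=4):
    """Keep every k-th state plus the final state as subgoals."""
    return path[k::k] + [path[-1]]

# Offline: build the subgoal database from random cases.
database = []
for _ in range(30):
    s = (random.randrange(W), random.randrange(H))
    g = (random.randrange(W), random.randrange(H))
    database.append(((s, g), compress(bfs(s, g))))

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

# Online: retrieve the most similar case and chain its subgoals.
start, goal = (0, 0), (11, 9)
(cs, cg), subgoals = min(
    database,
    key=lambda rec: manhattan(rec[0][0], start) + manhattan(rec[0][1], goal))
route, here = [], start
for sub in subgoals + [goal]:
    route += bfs(here, sub)[1:]
    here = sub
print("reused case:", (cs, cg), "route length:", len(route))
```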

