Stabilized Column Generation Via the Dynamic Separation of Aggregated Rows

Author(s):  
Luciano Costa ◽  
Claudio Contardo ◽  
Guy Desaulniers ◽  
Julian Yarkony

Column generation (CG) algorithms are well known to suffer from convergence issues due mainly to the degenerate structure of their master problem and the instability of the dual variables involved in the process. In the literature, several strategies have been proposed to overcome these issues. These techniques rely either on the modification of the standard CG algorithm or on some prior information about the set of dual optimal solutions. In this paper, we propose a new stabilization framework, which relies on the dynamic generation of aggregated rows from the CG master problem. To evaluate the performance and flexibility of our method, we consider instances of three different problems, namely, the vehicle routing problem with time windows (VRPTW), the bin packing problem with conflicts (BPPC), and the multiperson pose estimation problem (MPPEP). When solving the VRPTW, the proposed stabilized CG method yields significant improvements in CPU time and number of iterations with respect to a standard CG algorithm. Huge reductions in CPU time are also achieved when solving the BPPC and the MPPEP. For the latter, our method proved competitive with a tailored method.

Summary of Contribution: Column generation (CG) algorithms are among the most important and studied solution methods in operations research. CG algorithms are suitable for coping with large-scale problems arising from several real-life applications. The present paper proposes a generic stabilization framework to address two of the main issues found in a CG method: degeneracy in the master problem and massive instability of the dual variables. The newly devised method, called dynamic separation of aggregated rows (dyn-SAR), relies on an extended master problem that contains redundant constraints obtained by aggregating constraints from the original master problem formulation. This new formulation is solved in a column/row generation fashion. The efficacy of the proposed method is tested through an extensive experimental campaign, in which we solve three problems that differ considerably in their constraints and objective functions. Despite being a generic framework, dyn-SAR requires the embedded CG algorithm to be tailored to the application at hand.
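
The abstract includes no code; purely as an illustration, the sketch below shows the plain column-generation loop that stabilization schemes such as dyn-SAR build on, applied to a small cutting-stock instance, with a comment marking where a row-separation step could be inserted. The solver calls, helper names, and toy data are assumptions of this sketch, not the authors' implementation.

```python
# Minimal column-generation loop for a small cutting-stock instance
# (illustration only).  This is the standard CG scheme that stabilization
# methods such as dyn-SAR build upon; the comment near the end of the loop
# marks where a row-separation step could be inserted.  Requires SciPy >= 1.7.
from itertools import product

import numpy as np
from scipy.optimize import linprog

roll_length = 10.0
piece_sizes = np.array([3.0, 4.0, 5.0])     # item lengths
demands     = np.array([30.0, 20.0, 15.0])  # required number of each item

def price_pattern(duals, sizes, capacity):
    """Pricing sub-problem: enumerate feasible cutting patterns and return
    the one with the largest total dual value (brute force is fine here)."""
    max_counts = [int(capacity // s) for s in sizes]
    best, best_val = None, -np.inf
    for counts in product(*(range(m + 1) for m in max_counts)):
        counts = np.array(counts, dtype=float)
        if counts @ sizes <= capacity and counts @ duals > best_val:
            best, best_val = counts, counts @ duals
    return best, best_val

# Initial columns: one piece type per pattern, as many pieces as fit.
patterns = [np.eye(3)[i] * int(roll_length // piece_sizes[i]) for i in range(3)]

for iteration in range(50):
    A = np.column_stack(patterns)
    # Restricted master problem: minimise the number of rolls used,
    # subject to covering the demands (written as -A x <= -d).
    rmp = linprog(c=np.ones(A.shape[1]), A_ub=-A, b_ub=-demands,
                  bounds=[(0, None)] * A.shape[1], method="highs")
    duals = -rmp.ineqlin.marginals              # duals of the demand rows
    new_pattern, value = price_pattern(duals, piece_sizes, roll_length)
    if value <= 1.0 + 1e-9:                     # no negative-reduced-cost column
        break
    patterns.append(new_pattern)
    # <-- a dyn-SAR-style method would, in addition, separate violated
    #     aggregated rows here and add them to the extended master problem.

print("LP bound on the number of rolls:", round(rmp.fun, 3))
```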

Author(s):  
Mouad Morabit ◽  
Guy Desaulniers ◽  
Andrea Lodi

Column generation (CG) is widely used for solving large-scale optimization problems. This article presents a new approach based on a machine learning (ML) technique to accelerate CG. This approach, called column selection, applies a learned model to select a subset of the variables (columns) generated at each iteration of CG. The goal is to reduce the computing time spent reoptimizing the restricted master problem at each iteration by selecting the most promising columns. The effectiveness of the approach is demonstrated on two problems: the vehicle and crew scheduling problem and the vehicle routing problem with time windows. The ML model was able to generalize to instances of different sizes, yielding a gain in computing time of up to 30%.
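
As a rough illustration of the column-selection idea (not the authors' model), the sketch below scores each freshly generated column with a learned binary classifier and adds only the columns predicted to be promising to the restricted master problem. The three features and the labelling rule used to build the toy training set are assumptions made for this sketch.

```python
# Illustrative column-selection step (not the authors' implementation).
# A binary classifier scores each freshly generated column; only columns
# predicted as "promising" are added to the restricted master problem.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training data: feature vectors of past columns and a 0/1 label
# indicating whether the column ended up being useful to the master problem.
X_train = rng.normal(size=(500, 3))          # [reduced cost, cost, nonzeros]
y_train = (X_train[:, 0] < 0).astype(int)    # toy labelling rule
model = LogisticRegression().fit(X_train, y_train)

def select_columns(column_features, model, threshold=0.5, keep_at_least=1):
    """Keep the columns whose predicted probability of being useful exceeds
    the threshold; always keep at least `keep_at_least` best-scored columns
    so that CG can still make progress."""
    probs = model.predict_proba(column_features)[:, 1]
    keep = probs >= threshold
    if keep.sum() < keep_at_least:
        keep[np.argsort(-probs)[:keep_at_least]] = True
    return np.flatnonzero(keep)

# Columns generated at one CG iteration (toy data).
new_columns = rng.normal(size=(40, 3))
selected = select_columns(new_columns, model)
print(f"adding {selected.size} of {new_columns.shape[0]} generated columns")
```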


Transport ◽  
2013 ◽  
Vol 31 (4) ◽  
pp. 389-407 ◽  
Author(s):  
Wenbin Hu ◽  
Bo Du ◽  
Ye Wu ◽  
Huangle Liang ◽  
Chao Peng ◽  
...  

Exact and heuristic methods have their own strengths and weaknesses when solving the Vehicle Routing Problem with Time Windows (VRPTW). This paper proposes a hybrid Column Generation Algorithm with Metaheuristic Optimization (CGAMO) to overcome these weaknesses. Firstly, a Modified Labelling Algorithm (MLA) for the path-searching sub-problem is analysed, and a search strategy in CGAMO based on the demands of the sub-problem is proposed to improve search efficiency. When the paths found in the sub-problem are added to the main problem of CGAMO, the iterations may fall into endless loops. To avoid this problem and keep the main problem at a reasonable size, two conditions for retaining old paths in the main problem are used. These conditions enlarge the number of constraints considered in the iterations and thereby strengthen the limits on the dual variables. By analysing the sub-problem, many useless paths that have no effect on the objective function can be identified. Secondly, in order to reduce the number of useless paths and improve efficiency, this paper proposes a heuristic optimization strategy in CGAMO for the dual variables, intended to accelerate the solution process from the perspective of the dual problem. Finally, extensive experiments show that CGAMO outperforms other state-of-the-art methods on the VRPTW. The comparative experiments also include a parameter sensitivity analysis, covering the effects of the MLA under different path selection strategies and the characteristics and applicable scope of the two path-keeping conditions in the main problem.
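
For readers unfamiliar with labelling algorithms, the sketch below shows a basic forward labelling procedure with dominance for the VRPTW pricing sub-problem (elementary shortest path with time windows and capacity). It is a generic textbook-style sketch, not the paper's Modified Labelling Algorithm or its search strategy; reduced arc costs are assumed to be given.

```python
# A basic forward labelling algorithm with dominance for the VRPTW pricing
# sub-problem (elementary shortest path with time windows and capacity).
# Node 0 is the start depot, node n-1 its end copy; `cost[i][j]` are reduced
# arc costs, `travel[i][j]` travel times, `tw[i] = (earliest, latest)`.
from dataclasses import dataclass
from typing import FrozenSet, List

@dataclass(frozen=True)
class Label:
    node: int
    cost: float               # accumulated reduced cost
    time: float               # earliest service start at `node`
    load: float               # accumulated demand
    visited: FrozenSet[int]   # customers already visited (elementarity)

def dominates(a: Label, b: Label) -> bool:
    """a dominates b if it is no worse in every resource and has visited
    a subset of b's customers."""
    return (a.cost <= b.cost and a.time <= b.time and a.load <= b.load
            and a.visited <= b.visited)

def labelling(n, cost, travel, tw, demand, capacity):
    """Return the non-dominated labels at the end depot (complete routes)."""
    depot, end = 0, n - 1
    start = Label(depot, 0.0, tw[depot][0], 0.0, frozenset())
    buckets: List[List[Label]] = [[] for _ in range(n)]
    buckets[depot].append(start)
    queue = [start]
    while queue:
        lab = queue.pop()
        for j in range(1, n):
            if j in lab.visited or j == lab.node:
                continue
            t = max(lab.time + travel[lab.node][j], tw[j][0])
            q = lab.load + demand[j]
            if t > tw[j][1] or q > capacity:
                continue                               # infeasible extension
            new = Label(j, lab.cost + cost[lab.node][j], t, q,
                        lab.visited | {j})
            if any(dominates(old, new) for old in buckets[j]):
                continue                               # dominated, discard
            buckets[j] = [old for old in buckets[j] if not dominates(new, old)]
            buckets[j].append(new)
            if j != end:                               # finished routes stay put
                queue.append(new)
    return buckets[end]
```

In a CG iteration, `cost[i][j]` would be the original arc cost adjusted by the current dual values of the visited customers; every returned label with negative cost defines a route that can enter the main problem as a new column.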


2010 ◽  
Vol 132 (5) ◽  
Author(s):  
S. Tosserams ◽  
M. Kokkolaras ◽  
L. F. P. Etman ◽  
J. E. Rooda

Analytical target cascading (ATC) is a method originally developed for translating system-level design targets into design specifications for the components that comprise the system. ATC has been shown to be useful for coordinating decomposition-based optimal system design. The traditional ATC formulation uses hierarchical problem decompositions, in which coordination is performed by communicating target and response values between parents and children. The hierarchical formulation may not be suitable for general multidisciplinary design optimization (MDO) problems. This paper presents a new ATC formulation that allows nonhierarchical target-response coupling between subproblems and introduces system-wide functions that depend on variables of two or more subproblems. Options to parallelize the subproblem optimizations are also provided, including a new bilevel coordination strategy that uses a master problem formulation. The new formulation increases the applicability of ATC to both decomposition-based optimal system design and MDO. Moreover, it belongs to the class of augmented Lagrangian coordination methods and thus has convergence properties under standard convexity and continuity assumptions. A supersonic business jet design problem is used to demonstrate the flexibility and effectiveness of the presented formulation.
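
As a toy illustration of the augmented Lagrangian coordination class mentioned above (not the paper's formulation), the sketch below coordinates two one-variable subproblems whose target and response must agree, alternating subproblem solves with a method-of-multipliers update on the consistency violation.

```python
# Toy augmented-Lagrangian coordination of two subproblems (illustration of
# the class of methods only).  Consistency constraint: target x must equal
# response y; the relaxed penalty is v*c + (w*c)^2 with c = x - y.
from scipy.optimize import minimize_scalar

v, w = 0.0, 1.0          # multiplier estimate and penalty weight
x, y = 0.0, 0.0

for outer in range(30):
    # Subproblem 1: local objective (x - 2)^2 plus relaxed consistency term.
    x = minimize_scalar(lambda x: (x - 2)**2 + v*(x - y) + (w*(x - y))**2).x
    # Subproblem 2: local objective (y + 1)^2 plus relaxed consistency term.
    y = minimize_scalar(lambda y: (y + 1)**2 + v*(x - y) + (w*(x - y))**2).x
    # Method-of-multipliers update on the consistency violation c = x - y.
    c = x - y
    v = v + 2 * w**2 * c
    if abs(c) < 1e-6:
        break

print(f"x = {x:.4f}, y = {y:.4f}  (both approach 0.5, the coordinated optimum)")
```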


Author(s):  
Claude Fleury

This paper presents results from recent numerical experiments, supported by theoretical arguments, which indicate the limits of current optimization methods when applied to problems involving a large number of design variables as well as a large number of constraints, many of them active. This is typical of optimal sizing problems with local stress constraints, especially when composite materials are employed. It is shown that in both primal and dual methods the CPU time spent in the optimizer is related to the numerical effort needed to invert a symmetric positive definite matrix of size j_act, where j_act is the effective number of active constraints, i.e., constraints associated with positive Lagrange multipliers. This CPU time varies with j_act^3. When the number m of constraints increases, j_act tends to grow, but there is a limit: another well-known theoretical property is that the number of active constraints j_act should not exceed the number of free primal variables i_act, i.e., the number of variables that do not reach a lower or upper bound. This number i_act is itself smaller than the real number of design variables n. This leads to the conclusion that for problems with many active constraints the CPU time could grow as fast as n^3, whereas with respect to m the increase in CPU time remains approximately linear. Some practical applications to real-life industrial problems are briefly shown: design optimisation of an aircraft composite wing with local buckling constraints and topology optimization of an engine pylon.
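
The complexity argument can be written compactly (constants omitted, keeping only the proportionalities stated above):

```latex
T_{\mathrm{CPU}} \;\propto\; j_{\mathrm{act}}^{3},
\qquad
j_{\mathrm{act}} \;\le\; i_{\mathrm{act}} \;\le\; n
\quad\Longrightarrow\quad
T_{\mathrm{CPU}} \;=\; O\!\left(n^{3}\right).
```

The dependence on the number of constraints m, by contrast, is reported as only approximately linear, since j_act saturates once it reaches i_act.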


MISSION ◽  
2019 ◽  
pp. 54-57
Author(s):  
Marco Riglietta ◽  
Paolo Donadoni ◽  
Grazia Carbone ◽  
Caterina Pisoni ◽  
Franca Colombi ◽  
...  

In Italy, at the end of the 1970s, methadone hydrochloride was introduced for the treatment of opioid use disorder, in the form of a racemic mixture consisting of levomethadone and dextromethadone. In 2015, levomethadone, a new formulation, was marketed in Italy for the treatment of opioid use disorder. The article aims to report the experience of an Italian Addiction Centre with the use of this new formulation in real life, analysing its efficacy, the trend of adverse events, and pharmacological interactions in a context in which the treated population often uses cocaine and alcohol in addition to opiates, is burdened by relevant physical and psychiatric comorbidity, and frequently has prescribed polypharmacy.


2021 ◽  
Vol 55 (1) ◽  
pp. 1-2
Author(s):  
Bhaskar Mitra

Neural networks with deep architectures have demonstrated significant performance improvements in computer vision, speech recognition, and natural language processing. The challenges in information retrieval (IR), however, are different from those in these other application areas. A common form of IR involves the ranking of documents---or short passages---in response to keyword-based queries. Effective IR systems must deal with the query-document vocabulary mismatch problem by modeling relationships between different query and document terms and how they indicate relevance. Models should also consider lexical matches when the query contains rare terms---such as a person's name or a product model number---not seen during training, and should avoid retrieving semantically related but irrelevant results. In many real-life IR tasks, retrieval involves extremely large collections---such as the document index of a commercial Web search engine---containing billions of documents. Efficient IR methods should take advantage of specialized IR data structures, such as the inverted index, to retrieve efficiently from large collections. Given an information need, the IR system also mediates how much exposure an information artifact receives by deciding whether it should be displayed, and where it should be positioned, among other results. Exposure-aware IR systems may optimize for additional objectives besides relevance, such as parity of exposure for retrieved items and content publishers. In this thesis, we present novel neural architectures and methods motivated by the specific needs and challenges of IR tasks. We ground our contributions with a detailed survey of the growing body of neural IR literature [Mitra and Craswell, 2018]. Our key contribution towards improving the effectiveness of deep ranking models is the Duet principle [Mitra et al., 2017], which emphasizes the importance of incorporating evidence based on both patterns of exact term matches and similarities between learned latent representations of query and document. To retrieve efficiently from large collections, we develop a framework that incorporates query term independence [Mitra et al., 2019] into any arbitrary deep model, enabling large-scale precomputation and the use of an inverted index for fast retrieval. In the context of stochastic ranking, we further develop optimization strategies for exposure-based objectives [Diaz et al., 2020]. Finally, this dissertation also summarizes our contributions towards benchmarking neural IR models in the presence of large training datasets [Craswell et al., 2019] and explores the application of neural methods to other IR tasks, such as query auto-completion.
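
The query-term-independence assumption lends itself to a small illustration: if a document's score decomposes into a sum of per-term scores, those scores can be precomputed offline and served from an inverted index. The toy sketch below uses term frequency as a stand-in scoring function; it is not the thesis' models.

```python
# Toy illustration of query-term-independent scoring: the score of a document
# for a query is the sum of per-(term, document) scores, so the per-term
# scores can be precomputed and stored in an inverted index.  A learned model
# would replace the stand-in term_score() offline.
from collections import defaultdict

docs = {
    "d1": "cheap flights to new york",
    "d2": "new york weather this week",
    "d3": "flights and hotels in paris",
}

def term_score(term, text):
    """Stand-in per-term relevance score (plain term frequency)."""
    return text.split().count(term)

# Offline: precompute per-term postings with scores (the inverted index).
index = defaultdict(list)                      # term -> [(doc_id, score), ...]
for doc_id, text in docs.items():
    for term in set(text.split()):
        index[term].append((doc_id, term_score(term, text)))

# Online: score a query by summing the independent per-term contributions.
def retrieve(query):
    scores = defaultdict(float)
    for term in query.split():
        for doc_id, s in index.get(term, []):
            scores[doc_id] += s
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(retrieve("flights to new york"))         # d1 ranks first
```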


Author(s):  
Lauren R. Kennedy-Metz ◽  
Roger D. Dias ◽  
Rithy Srey ◽  
Geoffrey C. Rance ◽  
Heather M. Conboy ◽  
...  

Objective This novel preliminary study sought to capture dynamic changes in heart rate variability (HRV) as a proxy for cognitive workload among perfusionists while operating the cardiopulmonary bypass (CPB) pump during real-life cardiac surgery. Background Estimations of operators’ cognitive workload states in naturalistic settings have been derived using noninvasive psychophysiological measures. Effective CPB pump operation by perfusionists is critical in maintaining the patient’s homeostasis during open-heart surgery. Investigation into dynamic cognitive workload fluctuations, and their relationship with performance, is lacking in the literature. Method HRV and self-reported cognitive workload were collected from three Board-certified cardiac perfusionists (N = 23 cases). Five HRV components were analyzed in consecutive nonoverlapping 1-min windows from skin incision through sternal closure. Cases were annotated according to predetermined phases: prebypass, three phases during bypass, and postbypass. Values from all 1-min time windows within each phase were averaged. Results Cognitive workload was at its highest during the time between initiating bypass and clamping the aorta (preclamp phase during bypass) and decreased over the course of the bypass period. Conclusion We identified dynamic, temporal fluctuations in HRV among perfusionists during cardiac surgery corresponding to subjective reports of cognitive workload. Not only does cognitive workload differ for perfusionists during bypass compared with the pre- and postbypass phases, but differences in HRV were also detected within the three bypass phases. Application These preliminary findings suggest that the preclamp phase of CPB pump interaction corresponds to higher cognitive workload, which may point to an area warranting further exploration using passive measurement.


Author(s):  
Krzysztof Jurczuk ◽  
Marcin Czajkowski ◽  
Marek Kretowski

This paper concerns the evolutionary induction of decision trees (DT) for large-scale data. Such a global approach is one of the alternatives to top-down inducers. It searches for the tree structure and the tests simultaneously, which in many situations improves both the prediction performance and the size of the resulting classifiers. However, this population-based, iterative approach can be too computationally demanding to apply directly to big data mining. The paper demonstrates that this barrier can be overcome by smart distributed/parallel processing. Moreover, we ask whether the global approach can truly compete with greedy systems on large-scale data. For this purpose, we propose a novel multi-GPU approach. It incorporates the knowledge of global DT induction and evolutionary algorithm parallelization together with efficient utilization of memory and GPU computing resources. The search for the tree structure and the tests is performed simultaneously on a CPU, while the fitness calculations are delegated to GPUs. A data-parallel decomposition strategy and the CUDA framework are applied. Experimental validation is performed on both artificial and real-life datasets. In both cases, the obtained acceleration is very satisfactory. The solution is able to process even billions of instances in a few hours on a single workstation equipped with 4 GPUs. The impact of data characteristics (size and dimension) on the convergence and speedup of the evolutionary search is also shown. When the number of GPUs grows, nearly linear scalability is observed, which suggests that the data-size boundaries for evolutionary DT mining are fading.
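
As a minimal CPU-only sketch of the data-parallel decomposition idea (Python multiprocessing standing in for the CUDA kernels and GPUs used in the paper), the code below splits the dataset into chunks, lets each worker count the misclassifications of a candidate tree on its chunk, and reduces the partial counts into one fitness value. The toy tree encoding is an assumption of this sketch.

```python
# Data-parallel fitness evaluation for evolutionary decision-tree induction:
# partition the data, count misclassified instances per partition in parallel,
# then reduce the partial counts.  multiprocessing stands in for the GPUs.
from multiprocessing import Pool
import numpy as np

def classify(tree, x):
    """Evaluate a toy binary decision tree: ('leaf', label) or
    ('split', feature, threshold, left, right)."""
    while tree[0] != "leaf":
        _, feat, thr, left, right = tree
        tree = left if x[feat] <= thr else right
    return tree[1]

def chunk_errors(args):
    tree, X_chunk, y_chunk = args
    return sum(classify(tree, x) != y for x, y in zip(X_chunk, y_chunk))

def fitness(tree, X, y, n_workers=4):
    """Data-parallel misclassification count (lower is better)."""
    X_parts = np.array_split(X, n_workers)
    y_parts = np.array_split(y, n_workers)
    with Pool(n_workers) as pool:
        partial = pool.map(chunk_errors, [(tree, Xp, yp)
                                          for Xp, yp in zip(X_parts, y_parts)])
    return sum(partial)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100_000, 5))
    y = (X[:, 0] > 0).astype(int)
    tree = ("split", 0, 0.0, ("leaf", 0), ("leaf", 1))   # candidate individual
    print("misclassified instances:", fitness(tree, X, y))
```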


Author(s):  
Gianluca Bardaro ◽  
Alessio Antonini ◽  
Enrico Motta

Over the last two decades, several deployments of robots for in-house assistance of older adults have been trialled. However, these solutions are mostly prototypes and remain unused in real-life scenarios. In this work, we review the historical and current landscape of the field to understand why robots have yet to succeed as personal assistants in daily life. Our analysis focuses on two complementary aspects: the capabilities of the physical platform and the logic of the deployment. The former analysis shows regularities in hardware configurations and functionalities, leading to the definition of a set of six application-level capabilities (exploration, identification, remote control, communication, manipulation, and digital situatedness). The latter focuses on the impact of robots on the daily life of users and categorises the deployment of robots for healthcare interventions using three types of services: support, mitigation, and response. Our investigation reveals that the value of healthcare interventions is limited by a stagnation of functionalities and a disconnection between the robotic platform and the design of the intervention. To address this issue, we propose a novel co-design toolkit, which uses an ecological framework for robot interventions in the healthcare domain. Our approach connects robot capabilities with known geriatric factors to create a holistic view encompassing both the physical platform and the logic of the deployment. As a case-study-based validation, we discuss the use of the toolkit in the pre-design of the robotic platform for a pilot intervention, part of the large-scale pilot of the EU H2020 GATEKEEPER project.


2021 ◽  
Vol 5 (1) ◽  
pp. 14
Author(s):  
Christos Makris ◽  
Georgios Pispirigos

Nowadays, due to the extensive use of information networks in a broad range of fields, e.g., bio-informatics, sociology, digital marketing, computer science, etc., graph theory applications have attracted significant scientific interest. Due to its apparent abstraction, community detection has become one of the most thoroughly studied graph partitioning problems. However, the existing algorithms principally propose iterative solutions of high polynomial order that repetitively require exhaustive analysis. These methods can be considered resource-demanding, unscalable, and inapplicable to big-data graphs, such as today’s social networks. In this article, a novel, near-linear, and highly scalable community prediction methodology is introduced. Specifically, using a distributed, stacking-based model, which is built on plain network topology characteristics of bootstrap-sampled subgraphs, the underlying community hierarchy of any given social network is efficiently extracted regardless of its size and density. The effectiveness of the proposed methodology has been examined on numerous real-life social networks and proven superior to various similar approaches in terms of performance, stability, and accuracy.
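
As a rough, self-contained illustration of the general idea (bootstrap-sampled subgraphs, plain topology features, stacked learners), the sketch below trains one base classifier per bootstrap subgraph to predict whether two nodes of the karate-club toy graph belong to the same community, then stacks the base predictions with a meta-classifier. The features, learners, and toy graph are assumptions of this sketch, not the authors' distributed pipeline.

```python
# Sketch of a stacking-based "same community?" predictor built from plain
# topology features of bootstrap-sampled subgraphs (illustration only).
import random
import numpy as np
import networkx as nx
from sklearn.linear_model import LogisticRegression

G = nx.karate_club_graph()                           # toy graph, known 2-way split
same_club = lambda u, v: G.nodes[u]["club"] == G.nodes[v]["club"]
pairs = [(u, v) for u in G for v in G if u < v]
y = np.array([same_club(u, v) for u, v in pairs], dtype=int)

def features(H, u, v):
    """Two plain topology features of a node pair inside subgraph H."""
    if u not in H or v not in H:
        return [0.0, 0.0]
    cn = len(list(nx.common_neighbors(H, u, v)))
    jac = next(nx.jaccard_coefficient(H, [(u, v)]))[2]
    return [float(cn), jac]

# Level 0: one base model per bootstrap-sampled subgraph.
random.seed(0)
base_models, base_graphs = [], []
for _ in range(5):
    sampled_edges = random.choices(list(G.edges()), k=G.number_of_edges())
    H = nx.Graph(sampled_edges)                      # bootstrap subgraph
    X = np.array([features(H, u, v) for u, v in pairs])
    base_models.append(LogisticRegression().fit(X, y))
    base_graphs.append(H)

# Level 1: a meta-model stacked on the base models' predicted probabilities.
meta_X = np.column_stack([
    m.predict_proba(np.array([features(H, u, v) for u, v in pairs]))[:, 1]
    for m, H in zip(base_models, base_graphs)
])
meta_model = LogisticRegression().fit(meta_X, y)
print("training accuracy of the stacked model:", meta_model.score(meta_X, y))
```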

