Solving the Multiscenario Max-Min Knapsack Problem Exactly with Column Generation and Branch-and-Bound

2015 ◽  
Vol 2015 ◽  
pp. 1-11 ◽  
Author(s):  
Telmo Pinto ◽  
Cláudio Alves ◽  
Raïd Mansi ◽  
José Valério de Carvalho

In contrast to other variants of the standard knapsack problem, very few solution approaches have been devised for the multiscenario max-min knapsack problem. The problem consists of finding the subset of items whose total profit is maximized under the worst possible scenario. In this paper, we describe an exact solution method based on column generation and branch-and-bound for this problem. Our approach relies on a reformulation of the standard compact integer programming model based on the Dantzig-Wolfe decomposition principle. The resulting model is potentially stronger than the original one since the corresponding pricing subproblem does not have the integrality property. The details of the reformulation are presented and analysed together with those concerning the column generation and branch-and-bound procedures. To evaluate the performance of our algorithm, we conducted extensive computational experiments on large-scale benchmark instances and compared our results with other state-of-the-art approaches under similar circumstances. We focused in particular on the aspects that allow an objective evaluation of the efficacy of our approach. From different standpoints, the branch-and-price algorithm proved to outperform the other state-of-the-art methods described so far in the literature.
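
To make the model concrete, the following is a minimal sketch of the compact max-min formulation that the paper reformulates via Dantzig-Wolfe decomposition: a worst-case profit variable z is maximized while being bounded above by the total profit of every scenario. The item data and the use of the PuLP modelling library are illustrative assumptions, not the authors' setup.

```python
# Hedged sketch of the compact max-min knapsack model (illustrative data).
import pulp

profits = [[10, 4, 7], [6, 8, 3], [5, 9, 9]]   # profits[s][j]: item j under scenario s
weights = [4, 3, 5]
capacity = 8

prob = pulp.LpProblem("max_min_knapsack", pulp.LpMaximize)
x = [pulp.LpVariable(f"x{j}", cat="Binary") for j in range(len(weights))]
z = pulp.LpVariable("z")                        # worst-case (minimum) scenario profit

prob += z                                       # maximize the worst-case profit
for p in profits:                               # z is bounded by every scenario's profit
    prob += z <= pulp.lpSum(p[j] * x[j] for j in range(len(weights)))
prob += pulp.lpSum(weights[j] * x[j] for j in range(len(weights))) <= capacity

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([int(v.value()) for v in x], z.value())   # selected items and worst-case profit
```

In the Dantzig-Wolfe reformulation, the x variables are expressed as convex combinations of knapsack solutions generated by a pricing subproblem, which is what the column generation procedure solves at each node of the branch-and-bound tree.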

2020 ◽  
Author(s):  
Samah Boukhari ◽  
Isma Dahmani ◽  
Mhand Hifi

In this paper, we propose to solve the knapsack problem with setups by combining a mixed linear relaxation with local branching. The problem with setups can be seen as a generalization of the 0-1 knapsack problem, where items belong to disjoint classes (or families) and can be selected only if the corresponding class is activated. The selection of a class involves setup costs and resource consumptions, thus affecting both the objective function and the capacity constraint. The mixed linear relaxation can be viewed as the driving problem, solved with a special black-box solver, while local branching tries to enhance the solutions it provides by adding a series of valid and invalid constraints. The performance of the proposed method is evaluated on benchmark instances from the literature and on new large-scale instances. Its results are compared to those reached by the Cplex solver and by the best methods available in the literature. New results are reported.
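
A minimal sketch of the problem and of a local-branching cut follows, assuming illustrative data and using PuLP as a stand-in for the black-box solver: each class has a setup variable that both pays a cost in the objective and consumes capacity, and the cut restricts the next search to a k-neighbourhood of an incumbent solution.

```python
# Hedged sketch of the knapsack problem with setups (KPS) plus local branching.
import pulp

classes = {0: {"setup_cost": 3, "setup_weight": 2, "items": [(8, 4), (6, 3)]},
           1: {"setup_cost": 2, "setup_weight": 1, "items": [(5, 2), (4, 2)]}}
capacity = 10

prob = pulp.LpProblem("KPS", pulp.LpMaximize)
y = {i: pulp.LpVariable(f"y{i}", cat="Binary") for i in classes}   # class setup
x = {(i, j): pulp.LpVariable(f"x{i}_{j}", cat="Binary")
     for i, c in classes.items() for j in range(len(c["items"]))}

# Profit of selected items minus setup costs of activated classes.
prob += pulp.lpSum(p * x[i, j] for i, c in classes.items()
                   for j, (p, _) in enumerate(c["items"])) \
        - pulp.lpSum(c["setup_cost"] * y[i] for i, c in classes.items())
# Item weights and setup weights share one capacity constraint.
prob += pulp.lpSum(w * x[i, j] for i, c in classes.items()
                   for j, (_, w) in enumerate(c["items"])) \
        + pulp.lpSum(c["setup_weight"] * y[i] for i, c in classes.items()) <= capacity
for (i, j) in x:
    prob += x[i, j] <= y[i]            # an item needs its class activated

prob.solve(pulp.PULP_CBC_CMD(msg=False))

# Local branching: limit the next search to Hamming distance k from the incumbent.
k, incumbent = 2, {key: int(v.value()) for key, v in x.items()}
prob += pulp.lpSum(1 - x[key] for key in x if incumbent[key] == 1) \
        + pulp.lpSum(x[key] for key in x if incumbent[key] == 0) <= k
```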


2021 ◽  
pp. 1-18
Author(s):  
Salahaldeen Rababa ◽  
Amer Al-Badarneh

Large-scale datasets collected from heterogeneous sources often require a join operation to extract valuable information. MapReduce is an efficient programming model for processing large-scale data. However, it has some limitations in processing heterogeneous datasets, owing to the large number of redundant intermediate records that are transferred through the network. Several filtering techniques have been developed to improve join performance, but they require multiple MapReduce jobs to process the input datasets. To address this issue, adaptive filter-based join algorithms are presented in this paper. Specifically, three join algorithms are introduced that perform filter creation and redundant-record elimination within a single MapReduce job. A cost analysis of the introduced join algorithms shows that the I/O cost is reduced compared to the state-of-the-art filter-based join algorithms. The performance of the join algorithms was evaluated in terms of the total execution time and the total amount of I/O data transferred. The experimental results show that the adaptive Bloom join, semi-adaptive intersection Bloom join, and adaptive intersection Bloom join decrease the total execution time by 30%, 25%, and 35%, respectively, and reduce the total amount of I/O data transferred by 18%, 25%, and 50%, respectively.
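
The core filtering idea can be sketched as follows: join keys from one dataset populate a Bloom filter that prunes non-joining records from the other dataset before they reach the shuffle phase. The bit-array size, hash count, and in-memory "datasets" below are illustrative; this is not the paper's adaptive single-job MapReduce implementation.

```python
# Hedged sketch of Bloom-filter pruning before a join (illustrative parameters).
import hashlib

class BloomFilter:
    def __init__(self, m_bits=1 << 16, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(self.m // 8)

    def _positions(self, key):
        for i in range(self.k):                 # k independent hash positions
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, key):
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, key):               # false positives possible, no false negatives
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(key))

# Build the filter from dataset R's join keys, then prune S before the shuffle.
R = [(1, "a"), (2, "b"), (3, "c")]
S = [(2, "x"), (4, "y"), (3, "z"), (9, "w")]
bf = BloomFilter()
for key, _ in R:
    bf.add(key)
pruned_S = [rec for rec in S if bf.might_contain(rec[0])]  # drops most non-joining records
print(pruned_S)
```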


Author(s):  
Daniel Anderson ◽  
Gregor Hendel ◽  
Pierre Le Bodic ◽  
Merlin Viernickel

We propose a simple and general online method to measure search progress within the branch-and-bound algorithm, from which we estimate the size of the remaining search tree. We then show how this information can help solvers algorithmically at runtime by designing a restart strategy for Mixed-Integer Programming (MIP) solvers that decides whether to restart the search based on the current estimate of the number of remaining nodes in the tree. We refer to this type of algorithm as clairvoyant. Our clairvoyant restart strategy outperforms a state-of-the-art solver on a large set of publicly available MIP benchmark instances. It is implemented in the MIP solver SCIP and will be available in future releases.
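
As an illustration, here is a hedged sketch of one classical online progress measure (tree weight: a processed leaf at depth d contributes 2^-d, so the processed-leaf count divided by the accumulated weight forecasts the total number of leaves) driving a restart decision. The estimator and threshold are stand-ins, not SCIP's actual implementation.

```python
# Hedged sketch of a "clairvoyant" restart rule based on a tree-size estimate.
def estimated_total_leaves(leaf_depths):
    """Tree-weight estimate: a leaf at depth d covers 2**-d of a complete
    binary tree, so processed_count / accumulated_weight forecasts the total."""
    weight = sum(2.0 ** -d for d in leaf_depths)
    return len(leaf_depths) / weight if weight > 0 else float("inf")

def should_restart(leaf_depths, nodes_processed, factor=50.0):
    """Restart if the estimated remaining work dwarfs the work done so far."""
    remaining = estimated_total_leaves(leaf_depths) - nodes_processed
    return remaining > factor * nodes_processed

# e.g. three deep leaves early on suggest a huge tree, so a restart is advised:
print(should_restart([20, 22, 25], nodes_processed=3))
```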


2018 ◽  
Vol 14 (12) ◽  
pp. 1915-1960 ◽  
Author(s):  
Rudolf Brázdil ◽  
Andrea Kiss ◽  
Jürg Luterbacher ◽  
David J. Nash ◽  
Ladislava Řezníčková

Abstract. The use of documentary evidence to investigate past climatic trends and events has become a recognised approach in recent decades. This contribution presents the state of the art in its application to droughts. The range of documentary evidence is very wide, including general annals, chronicles, memoirs and diaries kept by missionaries, travellers and those specifically interested in the weather; records kept by administrators tasked with keeping accounts and other financial and economic records; legal-administrative evidence; religious sources; letters; songs; newspapers and journals; pictographic evidence; chronograms; epigraphic evidence; early instrumental observations; society commentaries; and compilations and books. These are available from many parts of the world. This variety of documentary information is evaluated with respect to the reconstruction of hydroclimatic conditions (precipitation, drought frequency and drought indices). Documentary-based drought reconstructions are then addressed in terms of long-term spatio-temporal fluctuations, major drought events, relationships with external forcing and large-scale climate drivers, socio-economic impacts and human responses. Documentary-based drought series are also considered from the viewpoint of spatio-temporal variability for certain continents, and their employment together with hydroclimate reconstructions from other proxies (in particular tree rings) is discussed. Finally, conclusions are drawn, and challenges for the future use of documentary evidence in the study of droughts are presented.


2020 ◽  
Vol 4 ◽  
pp. 239784732097975
Author(s):  
Stéphanie Boué ◽  
Didier Goedertier ◽  
Julia Hoeng ◽  
Anita Iskandar ◽  
Arkadiusz K Kuczaj ◽  
...  

E-vapor products (EVPs) have become popular alternatives for cigarette smokers who would otherwise continue to smoke. EVP research is challenging and complex, mostly because of the numerous and rapidly evolving technologies and designs as well as the multiplicity of e-liquid flavors and solvents available on the market. There is an urgent need to standardize all stages of EVP assessment, from the production of a reference product to e-vapor generation methods and from physicochemical characterization methods to nonclinical and clinical exposure studies. The objective of this review is to provide a detailed description of selected experimental setups and methods for EVP aerosol generation and collection and exposure systems for their in vitro and in vivo assessment. The focus is on the specificities of the product that constitute challenges and require the development of ad hoc assessment frameworks, equipment, and methods. In so doing, this review aims to support further studies, objective evaluation, comparison, and verification of existing evidence, and, ultimately, the formulation of standardized methods for testing EVPs.


2021 ◽  
Vol 7 (3) ◽  
pp. 50
Author(s):  
Anselmo Ferreira ◽  
Ehsan Nowroozi ◽  
Mauro Barni

The possibility of carrying out a meaningful forensic analysis on printed and scanned images plays a major role in many applications. First of all, printed documents are often associated with criminal activities, such as terrorist plans, child pornography, and even fake packages. Additionally, printing and scanning can be used to hide the traces of image manipulation or the synthetic nature of images, since the artifacts commonly found in manipulated and synthetic images are gone after the images are printed and scanned. A problem hindering research in this area is the lack of large-scale reference datasets for algorithm development and benchmarking. Motivated by this issue, we present a new dataset composed of a large number of synthetic and natural printed face images. To highlight the difficulties associated with the analysis of the images of the dataset, we carried out an extensive set of experiments comparing several printer attribution methods. We also verified that state-of-the-art methods for distinguishing natural from synthetic face images fail when applied to printed and scanned images. We envision that the availability of the new dataset and the preliminary experiments we carried out will motivate and facilitate further research in this area.


2021 ◽  
Vol 40 (3) ◽  
pp. 1-13
Author(s):  
Lumin Yang ◽  
Jiajie Zhuang ◽  
Hongbo Fu ◽  
Xiangzhi Wei ◽  
Kun Zhou ◽  
...  

We introduce SketchGNN, a convolutional graph neural network for semantic segmentation and labeling of freehand vector sketches. We treat an input stroke-based sketch as a graph, with nodes representing the sampled points along input strokes and edges encoding the stroke structure information. To predict the per-node labels, our SketchGNN uses graph convolution and a static-dynamic branching network architecture to extract features at three levels: point-level, stroke-level, and sketch-level. SketchGNN significantly improves on the accuracy of the state-of-the-art methods for semantic sketch segmentation (by 11.2% in the pixel-based metric and 18.2% in the component-based metric on the large-scale challenging SPG dataset) and has orders of magnitude fewer parameters than both image-based and sequence-based methods.
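
The graph construction can be sketched roughly as follows; the data layout is an assumption, not the authors' code, and the dynamic (feature-space k-NN) edges of the static-dynamic architecture would be recomputed per layer on top of these static intra-stroke edges.

```python
# Hedged sketch: turning a stroke-based sketch into a point graph.
import numpy as np

strokes = [np.array([[0.0, 0.0], [0.1, 0.2], [0.3, 0.3]]),   # illustrative strokes
           np.array([[0.5, 0.5], [0.6, 0.4]])]

nodes = np.concatenate(strokes)                # (N, 2) sampled point coordinates
edges, offset = [], 0
for s in strokes:
    for i in range(len(s) - 1):                # static edges follow stroke order
        edges.append((offset + i, offset + i + 1))
    offset += len(s)
edge_index = np.array(edges).T                 # (2, E), PyTorch-Geometric-style layout
print(nodes.shape, edge_index.shape)
```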


Author(s):  
Anil S. Baslamisli ◽  
Partha Das ◽  
Hoang-An Le ◽  
Sezer Karaoglu ◽  
Theo Gevers

Abstract. In general, intrinsic image decomposition algorithms interpret shading as one unified component including all photometric effects. As shading transitions are generally smoother than reflectance (albedo) changes, these methods may fail to distinguish strong photometric effects from reflectance variations. Therefore, in this paper, we propose to decompose the shading component into direct shading (illumination) and indirect shading (ambient light and shadows) subcomponents. The aim is to distinguish strong photometric effects from reflectance variations. An end-to-end deep convolutional neural network (ShadingNet) is proposed that operates in a fine-to-coarse manner with a specialized fusion and refinement unit exploiting the fine-grained shading model. It is designed to learn specific reflectance cues separated from specific photometric effects to analyze the disentanglement capability. A large-scale dataset of scene-level synthetic images of outdoor natural environments is provided with fine-grained intrinsic image ground truths. Large-scale experiments show that our approach using fine-grained shading decompositions outperforms state-of-the-art algorithms utilizing unified shading on NED, MPI Sintel, GTA V, IIW, MIT Intrinsic Images, 3DRMS, and SRD datasets.
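
For reference, a hedged sketch of the fine-grained image formation model the decomposition targets (the notation is ours, not necessarily the paper's):

```latex
% Intrinsic decomposition with a split shading term: the observed image I is
% the product of reflectance (albedo) R and shading S, with S further split
% into direct (illumination) and indirect (ambient light and shadows) parts.
\[
  I(\mathbf{x}) = R(\mathbf{x})\, S(\mathbf{x}), \qquad
  S(\mathbf{x}) = S_{\mathrm{direct}}(\mathbf{x}) + S_{\mathrm{indirect}}(\mathbf{x}).
\]
```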


2020 ◽  
Vol 499 (2) ◽  
pp. 2934-2958
Author(s):  
A Richard-Laferrière ◽  
J Hlavacek-Larrondo ◽  
R S Nemmen ◽  
C L Rhea ◽  
G B Taylor ◽  
...  

Abstract. A variety of large-scale diffuse radio structures have been identified in many clusters with the advent of new state-of-the-art facilities in radio astronomy. Among these diffuse radio structures, radio mini-halos are found in the central regions of cool core clusters. Their origin is still unknown and they are challenging to discover; fewer than 30 have been published to date. Based on new VLA observations, we confirmed the mini-halo in the massive strong cool core cluster PKS 0745−191 (z = 0.1028) and discovered one in the massive cool core cluster MACS J1447.4+0827 (z = 0.3755). Furthermore, using a detailed analysis of all known mini-halos, we explore the relation between mini-halos and active galactic nucleus (AGN) feedback processes from the central galaxy. When spectrally decomposing the AGN radio emission into a component for past outbursts and one for ongoing accretion, we find evidence of strong, previously unknown correlations between mini-halo radio power and X-ray cavity power, and between mini-halo radio power and the radio power of the central galaxy related to the relativistic jets. Overall, our study indicates that mini-halos are directly connected to the central AGN in clusters, following previous suppositions. We hypothesize that AGN feedback may be one of the dominant mechanisms giving rise to mini-halos by injecting energy into the intra-cluster medium and reaccelerating an old population of particles, while sloshing motion may drive the overall shape of mini-halos inside cold fronts. AGN feedback may therefore not only play a vital role in offsetting cooling in cool core clusters but may also play a fundamental role in re-energizing non-thermal particles in clusters.
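
As a rough illustration of the two-component spectral decomposition mentioned above, the AGN radio flux density can be modelled as the sum of a steep-spectrum term tracing aged plasma from past outbursts and a flatter-spectrum term tracing the ongoing-accretion core; this is a common parametrization, not necessarily the exact form used in the paper.

```latex
% Two-component power-law decomposition of the AGN radio spectrum: a steep
% component for past outbursts and a flat component for ongoing accretion.
% The spectral indices alpha are illustrative.
\[
  S_{\nu} = S_{0,\mathrm{steep}}\, \nu^{-\alpha_{\mathrm{steep}}}
          + S_{0,\mathrm{flat}}\, \nu^{-\alpha_{\mathrm{flat}}},
  \qquad \alpha_{\mathrm{steep}} > \alpha_{\mathrm{flat}}.
\]
```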


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1091
Author(s):  
Izaak Van Crombrugge ◽  
Rudi Penne ◽  
Steve Vanlanduit

Knowledge of precise camera poses is vital for multi-camera setups. Camera intrinsics can be obtained for each camera separately in lab conditions. For fixed multi-camera setups, the extrinsic calibration can only be done in situ. Usually, some markers are used, like checkerboards, requiring some level of overlap between cameras. In this work, we propose a method for cases with little or no overlap. Laser lines are projected on a plane (e.g., floor or wall) using a laser line projector. The pose of the plane and cameras is then optimized using bundle adjustment to match the lines seen by the cameras. To find the extrinsic calibration, only a partial overlap between the laser lines and the field of view of the cameras is needed. Real-world experiments were conducted both with and without overlapping fields of view, resulting in rotation errors below 0.5°. We show that the accuracy is comparable to other state-of-the-art methods while offering a more practical procedure. The method can also be used in large-scale applications and can be fully automated.
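
A minimal sketch of the optimization core, under an assumed parametrization (Rodrigues rotation vector plus camera centre, projection plane fixed at z = 0 in the world frame): camera poses are scored by how far back-projected laser-line pixels land from the known lines on the plane. This is not the authors' implementation.

```python
# Hedged sketch of the bundle-adjustment residual for laser-line calibration.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, observations):
    """params: concatenated per-camera poses [rotvec(3), center(3)].
    observations: per camera, (rays, line_pt, line_dir) with unit pixel rays
    in the camera frame and a 2D laser line lying in the plane z = 0."""
    res = []
    for c, (rays, line_pt, line_dir) in enumerate(observations):
        rvec, center = params[6 * c:6 * c + 3], params[6 * c + 3:6 * c + 6]
        R = Rotation.from_rotvec(rvec).as_matrix()
        dirs = rays @ R.T                        # pixel rays rotated into world frame
        s = -center[2] / dirs[:, 2]              # intersect each ray with plane z = 0
        pts = center + s[:, None] * dirs
        perp = np.array([-line_dir[1], line_dir[0]])
        res.extend((pts[:, :2] - line_pt) @ perp)  # signed point-to-line distances
    return np.asarray(res)

# least_squares(residuals, x0, args=(observations,)) then refines all camera
# poses jointly; only partial overlap between lines and views is required.
```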

