decomposition strategies
Recently Published Documents

TOTAL DOCUMENTS: 91 (five years: 23)
H-INDEX: 16 (five years: 1)

2022 · Vol 70 (1) · pp. 53-66
Author(s): Julian Grothoff, Nicolas Camargo Torres, Tobias Kleinert

Abstract: Machine learning, and particularly reinforcement learning methods, may be applied to control tasks ranging from single control loops to the operation of whole production plants. However, their utilization in industrial contexts lacks understandability and requires suitable levels of operability and maintainability. In order to assess different application scenarios, a simple measure of their complexity is proposed and evaluated on four examples in a simulated palette transport system of a cold rolling mill. The measure is based on the size of the controller input and output space, determined by different granularity levels in a hierarchical process control model. The impact of these decomposition strategies on system characteristics, especially operability and maintainability, is discussed, assuming that the reinforcement learning problem is solvable and a solution of suitable quality is available.
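A size-based complexity measure of this kind can be sketched with a toy calculation (hedged: the function name, signal counts, and discretization below are invented for illustration, not taken from the paper): the controller's input/output space grows exponentially with the number of signals it observes and sets, so finer decomposition yields many small, simple controllers.

```python
# Hedged sketch of a size-based complexity measure for an RL control task.
# Signal counts and discretization levels are invented example values.

def control_complexity(n_inputs, n_outputs, levels_per_signal):
    """Size of the combined input/output space when each of the
    n_inputs observed and n_outputs actuated signals is discretized
    into levels_per_signal values."""
    return levels_per_signal ** (n_inputs + n_outputs)

# Coarse decomposition: one supervisory controller for the whole system
coarse = control_complexity(n_inputs=3, n_outputs=2, levels_per_signal=4)
# Fine decomposition: one controller per transport segment
fine = control_complexity(n_inputs=1, n_outputs=1, levels_per_signal=4)
```

Under these assumed numbers the coarse controller faces a space of 4^5 = 1024 combinations versus 4^2 = 16 per fine-grained controller, which is the trade-off between few complex and many simple learning tasks that the abstract discusses.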


Author(s): Saharnaz Mehrani, Carlos Cardonha, David Bergman

In the bin-packing problem with minimum color fragmentation (BPPMCF), we are given a fixed number of bins and a collection of items, each associated with a size and a color, and the goal is to avoid color fragmentation by packing items with the same color within as few bins as possible. This problem emerges in areas as diverse as surgical scheduling and group event seating. We present several optimization models for the BPPMCF, including baseline integer programming formulations, alternative integer programming formulations based on two recursive decomposition strategies that utilize decision diagrams, and a branch-and-price algorithm. Using the results from an extensive computational evaluation on synthetic instances, we train a decision tree model that predicts which algorithm should be chosen to solve a given instance of the problem based on a collection of derived features. Our insights are validated through experiments on the aforementioned applications on real-world data. Summary of Contribution: In this paper, we investigate a colored variant of the bin-packing problem. We present and evaluate several exact mixed-integer programming formulations to solve the problem, including models that explore recursive decomposition strategies based on decision diagrams and a set partitioning model that we solve using branch and price. Our results show that the computational performance of the algorithms depends on features of the input data, such as the average number of items per bin. Our algorithms and featured applications suggest that the problem is of practical relevance and that instances of reasonable size can be solved efficiently.
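The BPPMCF objective can be made concrete with a brute-force toy (a hedged illustration only; the items, bins, and capacity below are invented, and this enumeration is feasible only for tiny instances, unlike the paper's IP, decision-diagram, and branch-and-price models): each color's fragmentation is the number of distinct bins it occupies.

```python
from itertools import product

# Brute-force illustration of the BPPMCF objective on an invented toy
# instance: search all assignments of items to bins and keep the
# capacity-feasible packing with minimum total color fragmentation.

def min_color_fragmentation(items, n_bins, capacity):
    """items: list of (size, color); returns the minimum, over all
    capacity-feasible assignments, of the total number of (color, bin)
    incidences, i.e. per-color bin counts summed over colors."""
    colors = {color for _, color in items}
    best = None
    for assign in product(range(n_bins), repeat=len(items)):
        loads = [0] * n_bins
        for (size, _), b in zip(items, assign):
            loads[b] += size
        if any(load > capacity for load in loads):
            continue  # infeasible packing, skip
        frag = sum(len({b for (_, c), b in zip(items, assign) if c == color})
                   for color in colors)
        best = frag if best is None else min(best, frag)
    return best

items = [(2, "red"), (2, "red"), (3, "blue"), (1, "blue")]
best = min_color_fragmentation(items, n_bins=2, capacity=4)
# optimal packing keeps each color in a single bin: fragmentation 2
```

With two bins of capacity 4, the only feasible split places both red items in one bin and both blue items in the other, so neither color fragments.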


Geophysics · 2021 · pp. 1-42
Author(s): Yike Liu, Yanbao Zhang, Yingcai Zheng

Multiples follow long paths and carry more information on the subsurface than primary reflections, making them particularly useful for imaging. However, seismic migration using multiples can generate crosstalk artifacts in the resulting images because multiples of different orders interfere with each other, and these crosstalk artifacts greatly degrade image quality. We propose to form a supergather by applying phase-encoding functions to image multiples and stacking several encoded controlled-order multiples. The multiples are separated into different orders using multiple-decomposition strategies. The method is referred to as phase-encoded migration of all-order multiples (PEM). The new migration can be performed with only two finite-difference solutions of the wave equation: backward-extrapolating the blended virtual receiver data and forward-propagating the summed virtual source data. The proposed approach can significantly attenuate crosstalk artifacts and also significantly reduce computational costs. Numerical examples demonstrate that PEM can remove relatively strong crosstalk artifacts generated by multiples and is a promising approach for imaging subsurface targets.
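The statistical intuition behind phase encoding can be sketched numerically (a hedged toy, not the paper's finite-difference migration; the realization count and the statistic computed are invented for illustration): same-order terms combine a phase with its own conjugate and stack coherently, while cross-order terms carry a random relative phase and average toward zero.

```python
import cmath
import math
import random

random.seed(1)

def phase_terms(n_realizations=5000):
    """Average magnitude of same-order vs cross-order phase factors when
    each multiple order k receives an independent random phase phi_k."""
    same_total = 0 + 0j
    cross_total = 0 + 0j
    for _ in range(n_realizations):
        phi0 = random.uniform(0, 2 * math.pi)
        phi1 = random.uniform(0, 2 * math.pi)
        same_total += cmath.exp(1j * (phi0 - phi0))   # order k with itself
        cross_total += cmath.exp(1j * (phi0 - phi1))  # order k with order l
    return abs(same_total) / n_realizations, abs(cross_total) / n_realizations

same, cross = phase_terms()
# same-order contribution keeps unit magnitude; crosstalk shrinks toward zero
```

This is the mechanism by which stacking encoded controlled-order multiples attenuates crosstalk while preserving the desired image contributions.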


2021
Author(s): Daniel Hulse, Hongyang Zhang, Christopher Hoyle

Abstract: Optimizing a system's resilience can be challenging, especially when it involves considering both the inherent resilience of a robust design and the active resilience of a health management system, each evaluated over a set of computationally expensive hazard simulations. While prior work has developed specialized architectures to effectively and efficiently solve combined design and resilience optimization problems, the comparison of these architectures has been limited to a single case study. To further study resilience optimization formulations, this work develops a problem repository that includes previously developed resilience optimization problems and additional problems presented in this work: a notional system resilience model, a pandemic response model, and a cooling tank hazard prevention model. This work then uses the models in the repository at large to understand the characteristics of resilience optimization problems and to study the applicability of optimization architectures and decomposition strategies. Based on the comparisons in the repository, applying an optimization architecture effectively requires understanding the alignment and coupling relationships between the design and resilience models, as well as the efficiency characteristics of the algorithms. While alignment determines the necessity of a surrogate of resilience cost in the upper-level design problem, coupling determines the overall applicability of a sequential, alternating, or bilevel structure. Additionally, the application of decomposition strategies depends on there being limited interactions between variable sets, which often does not hold when a resilience policy is parameterized in terms of actions to take in hazardous model states rather than in specific given scenarios.
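The alternating structure mentioned above can be sketched on a toy coupled problem (hedged: the quadratic costs and coupling term are invented stand-ins; the repository's models are far richer). Each step solves its own subproblem in closed form while holding the other variable fixed, and when coupling is mild the iteration converges to the joint optimum.

```python
# Toy sketch of an alternating optimization architecture for a coupled
# design/resilience problem. Costs are invented quadratics with a shared
# coupling term 0.5 * (design - policy)^2.

def alternating(design=0.0, policy=0.0, iters=20):
    for _ in range(iters):
        # design step: argmin_d (d - 1)^2 + 0.5 * (d - policy)^2
        # derivative 2(d - 1) + (d - policy) = 0  ->  d = (2 + policy) / 3
        design = (2 * 1 + policy) / 3
        # resilience step: argmin_p (p - 2)^2 + 0.5 * (p - design)^2
        # derivative 2(p - 2) + (p - design) = 0  ->  p = (4 + design) / 3
        policy = (2 * 2 + design) / 3
    return design, policy

design, policy = alternating()
# converges to the coupled optimum (design, policy) = (1.25, 1.75)
```

With stronger coupling this simple alternation can stall or cycle, which is why the abstract ties the choice among sequential, alternating, and bilevel structures to the coupling between the design and resilience models.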


2021 · Vol 2021 · pp. 1-17
Author(s): Maria Qurban, Xiang Zhang, Hafiza Mamona Nazir, Ijaz Hussain, Muhammad Faisal, ...

Accurate estimation of the mining process is vital for the optimal allocation of mineral resources. The development of any country is closely connected with the management of its mineral resources. Therefore, forecasting mineral resources contributes much to their management, planning, and maximal allocation. However, forecasting is challenging because of the data's multiscale variability, nonlinearity, nonstationarity, and high irregularity. In this paper, we propose two revised hybrid methods to address these issues and predict mineral resources. Our methods are based on denoising, decomposition, prediction, and ensemble principles applied to mineral-resource production time-series data. The performance of the proposed methods is compared with that of the existing traditional one-stage model (without denoising and decomposition strategies), two-stage hybrid models (based on a denoising strategy), and three-stage hybrid models (with denoising and decomposition strategies). These methods are evaluated using mean relative error (MRE), mean absolute error (MAE), and mean square error (MSE) on the production of four principal mineral resources of Pakistan. It is concluded that the proposed framework for the prediction of mineral resources performs better than the existing one-stage, two-stage, and three-stage models. Furthermore, the prediction accuracy of the revised hybrid model is improved by reducing the complexity of the mineral-resource production time-series data.
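The denoise, decompose, predict, and ensemble staging can be sketched minimally (hedged: the paper's actual components, such as wavelet denoising or empirical mode decomposition, are replaced here by toy stand-ins, and the input series is invented): each extracted component gets its own forecast, and the ensemble step recombines them.

```python
# Minimal sketch of a denoise -> decompose -> predict -> ensemble pipeline
# with toy stand-ins for each stage.

def moving_average(series, window=3):
    """Centered moving average with shrinking windows at the edges."""
    half = window // 2
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - half): i + half + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def decompose(series):
    """Split a series into a smooth trend and the leftover residual."""
    trend = moving_average(series)
    residual = [x - t for x, t in zip(series, trend)]
    return trend, residual

def hybrid_forecast(series):
    denoised = moving_average(series)       # stage 1: denoise
    trend, residual = decompose(denoised)   # stage 2: decompose
    # stage 3: predict each component (persistence forecast here)
    # stage 4: ensemble by recombining the component forecasts
    return trend[-1] + residual[-1]

forecast = hybrid_forecast([1, 2, 3, 4])
```

The point of the staging is that each component is simpler, and hence easier to predict, than the raw irregular series; the recombination then restores a forecast on the original scale.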


Author(s): Cunjing Ge, Armin Biere

Counting integer solutions of linear constraints has found interesting applications in various fields. It is equivalent to the problem of counting integer points inside a polytope. However, state-of-the-art algorithms for this problem become too slow for even a modest number of variables. In this paper, we propose new decomposition techniques which target the elimination of both variables and inequalities, using structural properties of counting problems. Experiments on extensive benchmarks show that our algorithm improves the performance of state-of-the-art counting algorithms, while the overhead is usually negligible compared to the running time of integer counting.
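Why decomposition pays off for counting can be shown on a toy case (hedged: the bounds and constraints below are invented, and brute-force enumeration stands in for the real counting algorithms): when the constraints split into blocks over disjoint variable sets, the total count factorizes into a product of per-block counts.

```python
from itertools import product

# Toy illustration: integer-point counts factorize across independent
# constraint blocks, so decomposing the system shrinks the search space.

def count_points(constraints, bounds):
    """Brute-force count of integer points with lo <= x_i <= hi that
    satisfy every constraint (coeffs, rhs): sum(coeffs * x) <= rhs."""
    ranges = [range(lo, hi + 1) for lo, hi in bounds]
    return sum(
        all(sum(c * x for c, x in zip(coeffs, point)) <= rhs
            for coeffs, rhs in constraints)
        for point in product(*ranges)
    )

# Joint space over (x0, x1) with independent constraints x0 <= 3 and x1 <= 2
joint = count_points([((1, 0), 3), ((0, 1), 2)], [(0, 5), (0, 5)])
# Per-block counts over x0 and x1 separately
block_a = count_points([((1,), 3)], [(0, 5)])
block_b = count_points([((1,), 2)], [(0, 5)])
# joint == block_a * block_b: 12 == 4 * 3
```

Here the joint enumeration visits 36 candidate points while the decomposed version visits 6 in total, and the gap widens exponentially with the number of independent blocks.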


2021 · Vol 1 · pp. 871-880
Author(s): Julie Milovanovic, John Gero, Kurt Becker

Abstract: Designers faced with complex design problems use decomposition strategies to tackle manageable sub-problems. Recomposition strategies aim at synthesizing sub-solutions into a unique design proposal. Design theory describes the design process as a combination of decomposition and recomposition strategies. In this paper, we explore dynamic patterns of decomposition and recomposition strategies in design teams. Data were collected from 9 teams of professional engineers. Using protocol analysis, we examined the dominance of decomposition and recomposition strategies over time and the correlations between each strategy and design processes such as analysis, synthesis, and evaluation. We expected decomposition strategies to peak early in the design process and decay over time. Instead, teams maintained decomposition and recomposition strategies consistently during the design process. We observed fast iteration of both strategies over a one-hour-long design session. The research presented provides an empirical foundation for modelling the behaviour of professional engineering teams, and first insights to refine theoretical understanding of the use of decomposition and recomposition strategies in design practice.


2021 · Vol 14 (11) · pp. 2167-2176
Author(s): Yang Li, Yu Shen, Wentao Zhang, Jiawei Jiang, Bolin Ding, ...

End-to-end AutoML has attracted intensive interest from both academia and industry: it automatically searches for ML pipelines in a space induced by feature engineering, algorithm/model selection, and hyper-parameter tuning. Existing AutoML systems, however, suffer from scalability issues when applied to application domains with large, high-dimensional search spaces. We present VOLCANOML, a scalable and extensible framework that facilitates systematic exploration of large AutoML search spaces. VOLCANOML introduces and implements basic building blocks that decompose a large search space into smaller ones, and allows users to utilize these building blocks to compose an execution plan for the AutoML problem at hand. VOLCANOML further supports a Volcano-style execution model, akin to the one supported by modern database systems, to execute the plan constructed. Our evaluation demonstrates that not only does VOLCANOML raise the level of expressiveness for search space decomposition in AutoML, it also leads to actual findings of decomposition strategies that are significantly more efficient than those employed by state-of-the-art AutoML systems such as auto-sklearn.
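The flavor of search-space decomposition can be sketched as follows (an illustrative toy only, not the VOLCANOML API: the algorithms, objective, and budget split below are all invented): instead of sampling the full Cartesian product of algorithm choices and hyperparameters, the joint space is split into per-algorithm subspaces, each explored with its own budget.

```python
import random

# Illustrative sketch of search-space decomposition for AutoML: split the
# joint (algorithm, hyperparameter) space into per-algorithm subspaces
# and run an independent random search in each.

random.seed(0)

def toy_score(algorithm, lr):
    # stand-in for validation accuracy, peaking at an algorithm-specific lr
    peaks = {"svm": 0.1, "tree": 0.5}
    return 1.0 - (lr - peaks[algorithm]) ** 2

def search_subspace(algorithm, budget):
    """Random search restricted to one algorithm's hyperparameter range."""
    return max((toy_score(algorithm, random.uniform(0, 1)), algorithm)
               for _ in range(budget))

def decomposed_search(algorithms, total_budget):
    """Split the budget across subspaces and return the best result."""
    per_space = total_budget // len(algorithms)
    return max(search_subspace(a, per_space) for a in algorithms)

score, algo = decomposed_search(["svm", "tree"], total_budget=200)
```

Real systems go further, e.g. allocating budget adaptively across subspaces, but the basic building block, decompose then compose an execution plan over the pieces, is the same idea.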

