Welfare-maximizing transmission capacity expansion under uncertainty

Author(s):  
S. Wogrin ◽  
D. Tejada-Arango ◽  
A. Downward ◽  
A.B. Philpott

We apply the JuDGE optimization package to a multistage stochastic leader–follower model that determines a transmission capacity expansion plan to maximize expected social welfare of consumers and producers who act as Cournot oligopolists in each time period. The problem is formulated as a large-scale mixed integer programme and applied to a 5-bus instance over scenario trees of varying size. The computational effort required by JuDGE is compared with solving the deterministic equivalent mixed integer programme using a state-of-the-art integer programming package. This article is part of the theme issue ‘The mathematics of energy systems’.
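For readers unfamiliar with the deterministic equivalent that JuDGE is benchmarked against, the sketch below shows the basic shape of such a model: a here-and-now expansion decision shared across scenarios, with scenario-weighted welfare in the objective. It is a minimal two-stage toy in PuLP, not the authors' 5-bus Cournot model; all names and data (`build_cost`, `welfare_per_mw`, the scenario set) are invented for illustration.

```python
# Minimal two-stage sketch of a deterministic-equivalent expansion MIP.
# All data (costs, scenarios, probabilities) are illustrative, not the
# paper's 5-bus instance.
from pulp import LpProblem, LpVariable, LpMaximize, lpSum, LpBinary

scenarios = {"low": 0.5, "high": 0.5}       # scenario -> probability
build_cost = 100.0                          # cost of one expansion unit
welfare_per_mw = {"low": 3.0, "high": 8.0}  # marginal welfare by scenario
line_cap = 50.0                             # capacity added per unit built

prob = LpProblem("expansion", LpMaximize)
build = LpVariable("build", cat=LpBinary)   # here-and-now expansion decision
flow = {s: LpVariable(f"flow_{s}", lowBound=0) for s in scenarios}

# Objective: expected welfare of scenario flows minus expansion cost.
prob += lpSum(p * welfare_per_mw[s] * flow[s]
              for s, p in scenarios.items()) - build_cost * build

# Flow in each scenario is limited by the capacity actually built.
for s in scenarios:
    prob += flow[s] <= line_cap * build

prob.solve()
print("build =", build.value(),
      "flows =", {s: v.value() for s, v in flow.items()})
```

In a multistage scenario tree, the same pattern repeats node by node, which is exactly the structure JuDGE exploits instead of handing the whole deterministic equivalent to a single MIP solver.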

Author(s):  
Jiaxin Wu ◽  
Pingfeng Wang

Abstract. With their growing complexity and extent, large-scale interconnected network systems, e.g., transportation or infrastructure networks, become more vulnerable to external disturbances. Managing potential disruptive events during the design, operation, and recovery phases of an engineered system, and thereby improving the system's resilience, is therefore an important yet challenging task. To ensure system resilience after the occurrence of failure events, this study proposes a mixed-integer linear programming (MILP) based restoration framework using heterogeneous dispatchable agents. A scenario-based stochastic optimization (SO) technique is adopted to deal with the inherent uncertainties that nature imposes on the recovery process. Moreover, departing from conventional SO with deterministic-equivalent formulations, an additional risk measure is implemented in this study, because decision making in applications such as recovery from extreme events is temporally sparse. The resulting restoration framework involves a large-scale MILP problem, so an adequate decomposition technique, a modified Lagrangian relaxation, is also proposed in order to achieve tractable time complexity. Case study results based on the IEEE 37-bus test feeder demonstrate the benefits of using the proposed framework for resilience improvement, as well as the advantages of adopting SO formulations.
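A risk measure attached to scenario-based SO is commonly expressed via the Rockafellar–Uryasev linearization of conditional value-at-risk (CVaR). The sketch below is a minimal, self-contained illustration of that linearization in PuLP, not the paper's restoration model; the scenario losses and probabilities are invented placeholders.

```python
# Minimal CVaR sketch for scenario-based stochastic optimization.
# Scenario losses are fixed placeholders; in a restoration framework they
# would be functions of the dispatch/repair decisions.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum

losses = {"s1": 10.0, "s2": 40.0, "s3": 90.0}   # loss per scenario
probs = {"s1": 0.5, "s2": 0.3, "s3": 0.2}
alpha = 0.9                                      # CVaR confidence level

prob = LpProblem("cvar", LpMinimize)
eta = LpVariable("eta")                          # value-at-risk estimate
excess = {s: LpVariable(f"excess_{s}", lowBound=0) for s in losses}

# CVaR_alpha = eta + E[(loss - eta)^+] / (1 - alpha)  (Rockafellar-Uryasev)
prob += eta + (1.0 / (1 - alpha)) * lpSum(probs[s] * excess[s]
                                          for s in losses)
for s, loss in losses.items():
    prob += excess[s] >= loss - eta              # linearize (loss - eta)^+

prob.solve()
print("CVaR =", prob.objective.value())
```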


2019 ◽  
Vol 10 (2) ◽  
pp. 36
Author(s):  
Rula Hani Salman AlHalaseh ◽  
Aminul Islam ◽  
Rosni Bakar

This paper optimally solves the portfolio selection problem, which consists of multiple assets over a continuous time period, so as to achieve the optimal trade-off between multiple objectives. It extends the Stochastic Goal Mixed Integer Programming (SGMIP) model of Stoyan (2009). The empirical contribution of this research is the extension of the SGMIP model with information as a new factor for selecting portfolio elements; the information element is used as a portfolio-management characteristic, to see whether it is applicable to different problems. Data were collected on a daily basis for all parameters of the individual stocks. A Brownian motion formula was used to predict stock prices over the future time period, and a stochastic programming (SP) framework was used to capture the numerous sources of uncertainty and to formulate the portfolio problem. The main challenge of this model is that it contains an additional real-world objective and multiple types of financial assets, which together form a mixed-integer program (MIP). This large-scale problem was solved using the Optimization Programming Language (OPL) and a decomposition algorithm to improve memory allocation and CPU time. The portfolio algorithm design produced a striking result: the extended SGMIP (ESGMIP) portfolio outperforms the index portfolio's return. In an uncertain environment, the availability of information rationalized diversification when the dynamic portfolio was invested in a single financial instrument (stocks), and the portfolio tended to be diversified when invested in more than one financial instrument (stocks and bonds). This work presents a novel extended SGMIP model that reaches an optimal solution.
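The price-prediction step can be illustrated with the usual discretization of geometric Brownian motion. The snippet below is a minimal sketch under that assumption, not the paper's calibration; the drift, volatility, and horizon values are invented.

```python
# Sketch of forecasting a stock price with geometric Brownian motion, as a
# stand-in for the price-prediction step; drift/volatility values are made up.
import numpy as np

rng = np.random.default_rng(0)
s0, mu, sigma = 100.0, 0.05, 0.2   # initial price, annual drift, volatility
n_days, dt = 252, 1 / 252          # one trading year, daily steps

# S_{t+dt} = S_t * exp((mu - sigma^2/2) dt + sigma sqrt(dt) Z),  Z ~ N(0, 1)
shocks = rng.standard_normal(n_days)
log_returns = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * shocks
path = s0 * np.exp(np.cumsum(log_returns))
print("simulated price after one year:", path[-1])
```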


2020 ◽  
Author(s):  
Abhijith Mundanad Narayanan ◽  
Panagiotis Patrinos ◽  
Alexander Bertrand

Abstract. Channel selection or electrode placement for neural decoding is a commonly encountered problem in electroencephalography (EEG). Since evaluating all possible channel combinations is usually infeasible, one usually has to settle for heuristic methods or convex approximations without optimality guarantees. To date, it remains unclear how large the gap is between the selection made by these approximate methods and the truly optimal selection. The goal of this paper is to quantify this optimality gap for several state-of-the-art channel selection methods in the context of least-squares based neural decoding. To this end, we reformulate the channel selection problem as a mixed-integer quadratic program (MIQP), which allows the use of efficient MIQP solvers to find the optimal channel combination in a feasible computation time for up to 100 candidate channels. As this reveals the exact solution to the combinatorial problem, it allows us to quantify the performance losses incurred by state-of-the-art sub-optimal (yet faster) channel selection methods. In the context of auditory attention decoding, we find that a greedy channel selection based on the utility metric does not show a significant optimality gap compared to optimal channel selection, whereas other state-of-the-art greedy or ℓ1-norm penalized methods do show a significant loss in performance. We also demonstrate that the MIQP formulation provides a natural way to incorporate topology constraints in the selection, e.g., for electrode placement in neuro-sensor networks with galvanic separation constraints. Finally, combining this utility-based greedy selection with an MIQP solver makes it possible to perform topology-constrained electrode placement even in large-scale problems with more than 100 candidate positions.
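As a rough illustration of the kind of greedy, utility-style selection the MIQP is compared against, the sketch below performs backward elimination for a least-squares decoder: at each step it removes the channel whose deletion degrades the fit the least. It uses random data and is not the authors' implementation of the utility metric.

```python
# Sketch of greedy backward channel elimination for least-squares decoding,
# in the spirit of a utility metric; the data here is random for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_channels = 500, 8
X = rng.standard_normal((n_samples, n_channels))   # per-channel EEG features
y = X @ rng.standard_normal(n_channels) + 0.1 * rng.standard_normal(n_samples)

def ls_error(cols):
    """Residual error of the least-squares decoder restricted to `cols`."""
    w, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
    return np.sum((X[:, cols] @ w - y) ** 2)

selected = list(range(n_channels))
while len(selected) > 3:                           # keep 3 channels
    # Drop the channel whose removal hurts the fit least (lowest "utility").
    worst = min(selected,
                key=lambda c: ls_error([k for k in selected if k != c]))
    selected.remove(worst)
print("selected channels:", selected)
```

The MIQP route replaces this greedy loop with binary selection variables inside a quadratic program, which is what makes the exact optimum computable for up to 100 candidates.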


2014 ◽  
Vol 678 ◽  
pp. 89-93
Author(s):  
Bai Xiao ◽  
Tian Li Cui ◽  
Gang Mu ◽  
Xiao Jing Dong ◽  
Dong Sheng Dang

The connection of large-scale wind farms to the main grid reduces the stability, reliability, and security of power system operation, so the risk cost becomes one of the important factors influencing the profit of transmission network expansion planning. This paper adopts a clustering-based method for wind power transmission. The objective function established in this paper reflects the comprehensive benefits of a transmission project and takes the transmission capacity P_line as the decision variable. In order to maximize the comprehensive benefits of the transmission project, the transmission construction cost, the congestion loss that a low transmission capacity may cause, and the risk cost are all taken into account. Calculating the risk is a very complex mixed-integer nonlinear problem, so Monte Carlo simulation is used to solve it. An example shows that the proposed method can achieve the optimal transmission capacity, transmission project benefit, and plan.
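The Monte Carlo step can be pictured as sampling wind output and pricing the resulting curtailment. The sketch below is a toy illustration of that idea, not the paper's risk model; the wind-output distribution, capacities, and penalty price are all invented.

```python
# Sketch of estimating an expected congestion/risk cost by Monte Carlo
# sampling of wind output; the cost model and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
line_capacity = 400.0          # MW, the planning variable being evaluated
wind_capacity = 600.0          # MW of installed wind

def risk_cost(n_samples=100_000):
    # Crude wind-output model: clipped normal around a 40% capacity factor.
    wind = np.clip(rng.normal(0.4, 0.2, n_samples), 0, 1) * wind_capacity
    curtailed = np.maximum(wind - line_capacity, 0.0)   # spilled power, MW
    return curtailed.mean() * 30.0                      # $/MWh penalty price

print(f"expected risk cost: ${risk_cost():,.0f}/h")
```

Evaluating this estimate over a grid of candidate line capacities, and adding construction cost, gives the trade-off curve the planning method searches.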


2013 ◽  
Vol 221 (3) ◽  
pp. 190-200 ◽  
Author(s):  
Jörg-Tobias Kuhn ◽  
Thomas Kiefer

Several techniques have been developed in recent years to generate optimal large-scale assessments (LSAs) of student achievement. These techniques often represent a blend of procedures from such diverse fields as experimental design, combinatorial optimization, particle physics, or neural networks. However, despite the theoretical advances in the field, there still exists a surprising scarcity of well-documented test designs in which all factors that have guided design decisions are explicitly and clearly communicated. This paper therefore has two goals. First, a brief summary of relevant key terms, as well as experimental designs and automated test assembly routines in LSA, is given. Second, conceptual and methodological steps in designing the assessment of the Austrian educational standards in mathematics are described in detail. The test design was generated using a two-step procedure, starting at the item block level and continuing at the item level. Initially, a partially balanced incomplete item block design was generated using simulated annealing, whereas in a second step, items were assigned to the item blocks using mixed-integer linear optimization in combination with a shadow-test approach.
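The first step, searching for a balanced incomplete block design by simulated annealing, follows the generic annealing pattern sketched below. The objective used here (a toy imbalance score over item appearance counts) and all sizes are placeholders, not the actual design criteria of the Austrian assessment.

```python
# Generic simulated-annealing skeleton of the kind used to search for a
# balanced incomplete block design; the objective is a toy imbalance score.
import math
import random

random.seed(0)
n_items, n_blocks, block_size = 12, 6, 4

def imbalance(blocks):
    """Toy objective: spread of how often each item appears across blocks."""
    counts = [sum(b.count(i) for b in blocks) for i in range(n_items)]
    mean = sum(counts) / n_items
    return sum((c - mean) ** 2 for c in counts)

blocks = [random.sample(range(n_items), block_size) for _ in range(n_blocks)]
temp = 10.0
while temp > 1e-3:
    # Propose swapping one item in one block for an item not yet in it.
    b, pos = random.randrange(n_blocks), random.randrange(block_size)
    old = blocks[b][pos]
    candidate = random.choice([i for i in range(n_items) if i not in blocks[b]])
    before = imbalance(blocks)
    blocks[b][pos] = candidate
    delta = imbalance(blocks) - before
    if delta > 0 and random.random() >= math.exp(-delta / temp):
        blocks[b][pos] = old      # reject the worsening move
    temp *= 0.999                 # geometric cooling schedule
print("final imbalance:", imbalance(blocks))
```

The second step, assigning items to blocks under content and exposure constraints, is where the mixed-integer optimization and shadow-test machinery take over.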


2018 ◽  
Vol 14 (12) ◽  
pp. 1915-1960 ◽  
Author(s):  
Rudolf Brázdil ◽  
Andrea Kiss ◽  
Jürg Luterbacher ◽  
David J. Nash ◽  
Ladislava Řezníčková

Abstract. The use of documentary evidence to investigate past climatic trends and events has become a recognised approach in recent decades. This contribution presents the state of the art in its application to droughts. The range of documentary evidence is very wide, including general annals, chronicles, memoirs and diaries kept by missionaries, travellers and those specifically interested in the weather; records kept by administrators tasked with keeping accounts and other financial and economic records; legal-administrative evidence; religious sources; letters; songs; newspapers and journals; pictographic evidence; chronograms; epigraphic evidence; early instrumental observations; society commentaries; and compilations and books. These are available from many parts of the world. This variety of documentary information is evaluated with respect to the reconstruction of hydroclimatic conditions (precipitation, drought frequency and drought indices). Documentary-based drought reconstructions are then addressed in terms of long-term spatio-temporal fluctuations, major drought events, relationships with external forcing and large-scale climate drivers, socio-economic impacts and human responses. Documentary-based drought series are also considered from the viewpoint of spatio-temporal variability for certain continents, and their employment together with hydroclimate reconstructions from other proxies (in particular tree rings) is discussed. Finally, conclusions are drawn, and challenges for the future use of documentary evidence in the study of droughts are presented.


Constraints ◽  
2021 ◽  
Author(s):  
Jana Koehler ◽  
Josef Bürgler ◽  
Urs Fontana ◽  
Etienne Fux ◽  
Florian Herzog ◽  
...  

Abstract. Cable trees are used in industrial products to transmit energy and information between different product parts. To date, they are mostly assembled by humans, and only a few automated manufacturing solutions exist, using complex robotic machines. For these machines, the wiring plan has to be translated into a wiring sequence of cable plugging operations to be followed by the machine. In this paper, we study and formalize the problem of deriving the optimal wiring sequence for a given layout of a cable tree. We summarize our investigations to model this cable tree wiring problem (CTW) as a traveling salesman problem with atomic, soft atomic, and disjunctive precedence constraints, as well as tour-dependent edge costs, such that it can be solved by state-of-the-art constraint programming (CP), Optimization Modulo Theories (OMT), and mixed-integer programming (MIP) solvers. It is further shown how the CTW problem can be viewed as a soft version of the coupled-tasks scheduling problem. We discuss various modeling variants for the problem, prove its NP-hardness, and empirically compare CP, OMT, and MIP solvers on a benchmark set of 278 instances. The complete benchmark set, with all models and instance data, is available on GitHub and was included in the MiniZinc Challenge 2020.
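A minimal picture of the MIP route is a traveling salesman model in the MTZ (ordering) formulation, where a precedence between two plugging operations becomes a linear constraint on their positions in the tour. The PuLP sketch below is such a toy, with an invented distance matrix and a single precedence pair; it is not the CTW model itself, which additionally needs soft/disjunctive precedences and tour-dependent edge costs.

```python
# Sketch of a TSP-with-precedence MIP in the MTZ formulation; the distance
# matrix and the single precedence pair are invented for illustration.
from itertools import product
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary

n = 5
dist = [[0, 2, 9, 10, 7], [2, 0, 6, 4, 3], [9, 6, 0, 8, 5],
        [10, 4, 8, 0, 6], [7, 3, 5, 6, 0]]
precedence = [(1, 3)]                     # node 1 must come before node 3

prob = LpProblem("tsp_prec", LpMinimize)
x = {(i, j): LpVariable(f"x_{i}_{j}", cat=LpBinary)
     for i, j in product(range(n), repeat=2) if i != j}
u = {i: LpVariable(f"u_{i}", lowBound=1, upBound=n - 1) for i in range(1, n)}

prob += lpSum(dist[i][j] * x[i, j] for i, j in x)   # total tour cost
for i in range(n):                        # enter and leave each node once
    prob += lpSum(x[i, j] for j in range(n) if j != i) == 1
    prob += lpSum(x[j, i] for j in range(n) if j != i) == 1
for i, j in x:                            # MTZ subtour elimination
    if i >= 1 and j >= 1:
        prob += u[i] - u[j] + (n - 1) * x[i, j] <= n - 2
for a, b in precedence:                   # 'a before b' as an order constraint
    prob += u[a] + 1 <= u[b]

prob.solve()
print("tour cost:", prob.objective.value())
```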


2021 ◽  
Vol 7 (3) ◽  
pp. 50
Author(s):  
Anselmo Ferreira ◽  
Ehsan Nowroozi ◽  
Mauro Barni

The possibility of carrying out a meaningful forensic analysis of printed and scanned images plays a major role in many applications. First of all, printed documents are often associated with criminal activities, such as terrorist plans, child pornography, and even fake packages. Additionally, printing and scanning can be used to hide the traces of image manipulation or the synthetic nature of images, since the artifacts commonly found in manipulated and synthetic images are gone after the images are printed and scanned. A problem hindering research in this area is the lack of large-scale reference datasets for algorithm development and benchmarking. Motivated by this issue, we present a new dataset composed of a large number of synthetic and natural printed face images. To highlight the difficulties associated with analyzing the images in the dataset, we carried out an extensive set of experiments comparing several printer attribution methods. We also verified that state-of-the-art methods for distinguishing natural from synthetic face images fail when applied to printed and scanned images. We envision that the availability of the new dataset and the preliminary experiments we carried out will motivate and facilitate further research in this area.


2021 ◽  
Vol 40 (3) ◽  
pp. 1-13
Author(s):  
Lumin Yang ◽  
Jiajie Zhuang ◽  
Hongbo Fu ◽  
Xiangzhi Wei ◽  
Kun Zhou ◽  
...  

We introduce SketchGNN, a convolutional graph neural network for semantic segmentation and labeling of freehand vector sketches. We treat an input stroke-based sketch as a graph, with nodes representing the points sampled along the input strokes and edges encoding the stroke structure information. To predict the per-node labels, our SketchGNN uses graph convolution and a static-dynamic branching network architecture to extract features at three levels: point-level, stroke-level, and sketch-level. SketchGNN significantly improves on the accuracy of state-of-the-art methods for semantic sketch segmentation (by 11.2% in the pixel-based metric and 18.2% in the component-based metric on the large-scale, challenging SPG dataset) and has orders of magnitude fewer parameters than both image-based and sequence-based methods.
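The graph construction can be pictured as one node per sampled point and one edge per consecutive point pair within a stroke. The sketch below builds those node and edge lists for two tiny hand-made strokes; it illustrates the representation, not SketchGNN's actual preprocessing.

```python
# Sketch of turning a stroke-based drawing into the node/edge lists a graph
# network like SketchGNN consumes; the strokes are tiny hand-made examples.
import numpy as np

# Each stroke is a polyline of sampled (x, y) points.
strokes = [np.array([[0.0, 0.0], [0.1, 0.1], [0.2, 0.3]]),
           np.array([[0.5, 0.5], [0.6, 0.4]])]

nodes = np.concatenate(strokes)            # one graph node per sampled point
edges, offset = [], 0
for stroke in strokes:
    # Edges connect consecutive points, encoding the stroke structure.
    for k in range(len(stroke) - 1):
        edges.append((offset + k, offset + k + 1))
    offset += len(stroke)

print("nodes:", nodes.shape, "edges:", edges)
```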

