An efficient global algorithm for worst-case linear optimization under uncertainties based on nonlinear semidefinite relaxation

Author(s):  
Xiaodong Ding ◽  
Hezhi Luo ◽  
Huixian Wu ◽  
Jianzhen Liu

Filomat ◽  
2020 ◽  
Vol 34 (5) ◽  
pp. 1471-1486
Author(s):  
S. Fathi-Hafshejani ◽  
Reza Peyghami

In this paper, a primal-dual interior-point algorithm for solving linear optimization problems is proposed, based on a new kernel function with a trigonometric barrier term that is used not only for determining the search directions but also for measuring the distance between the given iterate and the μ-center. Using some simple analysis tools, we prove that our algorithm based on the new trigonometric kernel function attains O(√n log n log(n/ε)) and O(√n log(n/ε)) worst-case complexity bounds for large- and small-update methods, respectively. Finally, some numerical results of applying our algorithm are presented.
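To make the dual role of the kernel function concrete, here is a minimal Python sketch, assuming a representative trigonometric kernel of the kind studied in this literature; the specific function ψ and its constants are illustrative, not the paper's exact kernel:

```python
import numpy as np

def psi(t):
    # Representative trigonometric kernel (illustrative): a quadratic growth
    # term plus a tangent-type barrier that tends to +inf as t -> 0+ and
    # vanishes at t = 1.
    return (t**2 - 1.0) / 2.0 + (2.0 / np.pi) * np.tan(np.pi * (1.0 - t) / (2.0 + 4.0 * t))

def proximity(x, s, mu):
    # Kernel-based proximity Psi(v) = sum_i psi(v_i) with v_i = sqrt(x_i s_i / mu);
    # Psi(v) = 0 exactly at the mu-center and grows with the distance from it.
    v = np.sqrt(x * s / mu)
    return np.sum(psi(v))

# Example: an iterate on the central path (x_i * s_i = mu for all i) has
# zero proximity.
x = np.array([1.0, 2.0, 0.5])
s = np.array([1.0, 0.5, 2.0])
print(proximity(x, s, mu=1.0))   # 0.0
```

In kernel-based interior-point methods the same Ψ drives the algorithm: its gradient defines the scaled search direction, and the barrier parameter μ is reduced once Ψ drops below a threshold.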


2009 ◽  
Vol 51 (2) ◽  
pp. 286-301
Author(s):  
M. SALAHI

In this paper, using the framework of self-regularity, we propose a hybrid adaptive algorithm for the linear optimization problem. If the current iterates are far from the central path, the algorithm employs a self-regular search direction; otherwise the classical Newton search direction is employed. This feature of the algorithm allows us to prove a worst-case iteration bound. Our result matches the best iteration bound obtained by the pure self-regular approach and improves on the worst-case iteration bound of the classical algorithm.


2009 ◽  
Vol 26 (02) ◽  
pp. 235-256
Author(s):  
MAZIAR SALAHI ◽  
TAMÁS TERLAKY

Recently, using the framework of self-regularity, Salahi in his Ph.D. thesis proposed an adaptive single-step algorithm which takes advantage of the current iterate information to find an appropriate barrier parameter, rather than using a fixed fraction of the current duality gap. However, his algorithm might take at most one bad step after each good step in order to keep the iterate in a certain neighborhood of the central path. In this paper, using the same framework, we propose a hybrid adaptive algorithm. Depending on the position of the current iterate, our new algorithm uses either the classical Newton search direction or a self-regular search direction: the larger the distance from the central path, the larger the barrier degree of the self-regular search direction. Unlike the classical approach, here we control the iterates by guaranteeing a certain reduction of the proximity measure. This leads to a one-dimensional equation which determines the target barrier parameter at each iteration, allowing a large-update algorithm without any need for safeguards or special steps. Finally, we prove that our hybrid adaptive algorithm has an O(√n log n log(n/ε)) worst-case iteration complexity.
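The adaptive choice of direction can be illustrated in a few lines. The Python sketch below works in the scaled space v_i = √(x_i s_i / μ), where the classical Newton system has right-hand side v⁻¹ − v and a self-regular direction of barrier degree q ≥ 1 uses v^(−q) − v; the threshold and the rule mapping distance to barrier degree are hypothetical placeholders, not the paper's actual parameters:

```python
import numpy as np

def scaled_search_rhs(v, threshold=1.0, q_max=4.0):
    # Proximity delta(v): zero on the central path (v = 1), grows off the path.
    delta = 0.5 * np.linalg.norm(1.0 / v - v)
    if delta <= threshold:
        q = 1.0                      # near the path: classical Newton direction
    else:
        # Hypothetical rule: barrier degree grows with distance from the path.
        q = min(1.0 + np.log1p(delta - threshold), q_max)
    return v**(-q) - v, q            # scaled right-hand side of the Newton system

# Example: far from the central path, the self-regular direction (q > 1) kicks in.
v = np.array([0.2, 1.0, 3.0])
rhs, q = scaled_search_rhs(v)
print(q, rhs)
```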


Author(s):  
Hezhi Luo ◽  
Xiaodong Ding ◽  
Jiming Peng ◽  
Rujun Jiang ◽  
Duan Li

In this paper, we consider the so-called worst-case linear optimization (WCLO) problem with uncertainties on the right-hand side of the constraints. Such a problem often arises in applications such as systemic risk estimation in finance and stochastic optimization. We first show that the WCLO problem with the uncertainty set defined by the ℓp-norm (WCLOp) is NP-hard for p ∈ (1, ∞). Second, we combine several simple optimization techniques, such as the successive convex optimization method, quadratic convex relaxation, initialization, and branch-and-bound (B&B), to develop an algorithm for (WCLO2) that can find a globally optimal solution to (WCLO2) within a prespecified ε-tolerance. We establish the global convergence of the algorithm and estimate its complexity. We also develop a finite B&B algorithm for (WCLO∞) that identifies a globally optimal solution to the underlying problem, and establish the finite convergence of the algorithm. Numerical experiments are reported to illustrate the effectiveness of our proposed algorithms in finding globally optimal solutions to medium- and large-scale WCLO instances.
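As a concrete illustration of the successive convex optimization step, the sketch below assumes a WCLO formulation of the form max over ‖u‖₂ ≤ 1 of max_x {cᵀx : Ax ≤ b + Qu, x ≥ 0} (the paper's exact model may differ). For fixed u the inner problem is an LP whose duals give the gradient of the optimal value with respect to the right-hand side, and maximizing that linearization over the ℓ2 ball has a closed-form solution; alternating the two yields a local solution:

```python
import numpy as np
from scipy.optimize import linprog

def wclo2_local(c, A, b, Q, iters=50, tol=1e-8):
    # Alternating (successive convex) local scheme for the assumed WCLO_2 model.
    u = np.zeros(Q.shape[1])                  # start from the nominal scenario u = 0
    val = -np.inf
    for _ in range(iters):
        # Inner LP for fixed u: max c^T x  s.t.  A x <= b + Q u, x >= 0
        # (linprog minimizes, so negate the objective).
        res = linprog(-c, A_ub=A, b_ub=b + Q @ u,
                      bounds=[(0, None)] * A.shape[1], method="highs")
        if not res.success or -res.fun <= val + tol:
            break                             # LP failed or no further improvement
        val = -res.fun
        # HiGHS marginals of A_ub rows are <= 0; their negation is the gradient
        # of the LP optimal value with respect to the right-hand side b + Q u.
        g = Q.T @ (-res.ineqlin.marginals)
        if np.linalg.norm(g) <= tol:
            break
        u = g / np.linalg.norm(g)             # conditional-gradient step on the l2 ball
    return val, u

# Tiny deterministic instance: max x1 + x2 with x <= 1 componentwise and a
# coupling row x1 + x2 <= 1.5 + 0.2*u that the worst case relaxes via u = 1.
c = np.array([1.0, 1.0])
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 1.0, 1.5])
Q = np.array([[0.0], [0.0], [0.2]])
print(wclo2_local(c, A, b, Q))                # -> (1.7, array([1.]))
```

Such a local scheme only certifies a stationary point of a convex maximization; in the paper, local solutions of this kind are combined with quadratic convex relaxation and B&B to close the gap to a global optimum.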


Author(s):  
J.D. Geller ◽  
C.R. Herrington

The minimum magnification for which an image can be acquired is determined by the design and implementation of the electron optical column and the scanning and display electronics. It is also a function of the working distance and, possibly, the accelerating voltage. For secondary and backscattered electron images there are usually no other limiting factors. However, for x-ray maps there are further considerations. Energy-dispersive x-ray spectrometers (EDS) have a much larger solid angle of detection than WDS. They also do not suffer from the Bragg's-law focusing effects that limit the angular range and focusing distance from the diffracting crystal. In practical terms, EDS maps can be acquired at the lowest magnification of the SEM, assuming the collimator does not cut off the x-ray signal. For WDS, the focusing properties of the crystal limit the angular range of acceptance of the incident x-radiation. The range depends on the 2d spacing of the crystal, with the acceptance angle increasing with 2d spacing. The natural line width of the x-ray also plays a role. For the layered metal crystals used to diffract soft x-rays, such as those for Be–O, the minimum magnification is approximately 100×. In the worst case, for the LiF crystal, which diffracts Ti–Zn, the minimum is roughly 1000×.


2013 ◽  
Vol 221 (3) ◽  
pp. 190-200 ◽  
Author(s):  
Jörg-Tobias Kuhn ◽  
Thomas Kiefer

Several techniques have been developed in recent years to generate optimal large-scale assessments (LSAs) of student achievement. These techniques often represent a blend of procedures from such diverse fields as experimental design, combinatorial optimization, particle physics, and neural networks. However, despite the theoretical advances in the field, there still exists a surprising scarcity of well-documented test designs in which all factors that have guided design decisions are explicitly and clearly communicated. This paper therefore has two goals. First, a brief summary of relevant key terms, as well as experimental designs and automated test assembly routines in LSA, is given. Second, conceptual and methodological steps in designing the assessment of the Austrian educational standards in mathematics are described in detail. The test design was generated using a two-step procedure, starting at the item block level and continuing at the item level. Initially, a partially balanced incomplete item block design was generated using simulated annealing, whereas in a second step, items were assigned to the item blocks using mixed-integer linear optimization in combination with a shadow-test approach.
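A minimal Python sketch of the first stage, simulated annealing over a block-to-booklet design, is given below; the balance criterion, neighborhood move, and cooling schedule are illustrative placeholders rather than the specification used for the Austrian assessment:

```python
import numpy as np

rng = np.random.default_rng(0)
n_blocks, n_booklets, per_booklet = 12, 8, 3   # illustrative design sizes

def random_design():
    # 0/1 matrix: row r lists the item blocks assigned to booklet r.
    d = np.zeros((n_booklets, n_blocks), dtype=int)
    for r in range(n_booklets):
        d[r, rng.choice(n_blocks, per_booklet, replace=False)] = 1
    return d

def cost(d):
    # Balance criterion: variance of pairwise block co-occurrence counts
    # (a balanced incomplete block design makes these as equal as possible).
    co = d.T @ d
    return co[~np.eye(n_blocks, dtype=bool)].var()

def neighbor(d):
    # Swap one assigned block for an unassigned one in a random booklet.
    d = d.copy()
    r = rng.integers(n_booklets)
    inside, outside = np.flatnonzero(d[r]), np.flatnonzero(d[r] == 0)
    d[r, rng.choice(inside)] = 0
    d[r, rng.choice(outside)] = 1
    return d

design = random_design()
best, temp = design, 1.0
for _ in range(5000):
    cand = neighbor(design)
    dc = cost(cand) - cost(design)
    # Accept improvements always; accept uphill moves with Boltzmann probability.
    if dc < 0 or rng.random() < np.exp(-dc / temp):
        design = cand
        if cost(design) < cost(best):
            best = design
    temp *= 0.999                              # geometric cooling schedule

print(cost(best), best.sum(axis=0))            # balance score and per-block usage
```

The second stage, assigning items to the resulting blocks, is a mixed-integer linear program and is typically handled by an off-the-shelf MILP solver together with the shadow-test approach mentioned above.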


2008 ◽  
Author(s):  
Sonia Savelli ◽  
Susan Joslyn ◽  
Limor Nadav-Greenberg ◽  
Queena Chen
