Study on Robustness of Optimum Solution by Assuming Worst Case

2016 ◽  
Vol 2016.12 (0) ◽  
pp. 2113
Author(s):  
Masao ARAKAWA ◽  
Hiroshi YAMAKAWA
2021 ◽  
Vol 17 (3) ◽  
pp. 1-38
Author(s):  
Ali Bibak ◽  
Charles Carlson ◽  
Karthekeyan Chandrasekaran

Finding locally optimal solutions for MAX-CUT and MAX-k-CUT is a well-known PLS-complete problem. A natural approach to finding such a locally optimal solution is the FLIP method. Even though FLIP requires exponential time on worst-case instances, it tends to terminate quickly on practical instances. To explain this discrepancy, the run-time of FLIP has been studied in the smoothed complexity framework. Etscheid and Röglin (ACM Transactions on Algorithms, 2017) showed that the smoothed complexity of FLIP for max-cut in arbitrary graphs is quasi-polynomial. Angel, Bubeck, Peres, and Wei (STOC, 2017) showed that the smoothed complexity of FLIP for max-cut in complete graphs is O(Φ^5 n^15.1), where Φ is an upper bound on the random edge-weight density and n is the number of vertices in the input graph. While Angel, Bubeck, Peres, and Wei's result gave the first polynomial smoothed complexity, they also conjectured that their run-time bound is far from optimal. In this work, we make substantial progress toward improving the run-time bound. We prove that the smoothed complexity of FLIP for max-cut in complete graphs is O(Φ n^7.83). Our results are based on a carefully chosen matrix whose rank captures the run-time of the method, along with improved rank bounds for this matrix and an improved union bound based on it. In addition, our techniques provide a general framework for analyzing FLIP in the smoothed setting. We illustrate this general framework by showing that the smoothed complexity of FLIP for MAX-3-CUT in complete graphs is polynomial and for MAX-k-CUT in arbitrary graphs is quasi-polynomial. We believe that our techniques should also be of interest toward showing smoothed polynomial complexity of FLIP for MAX-k-CUT in complete graphs for larger constants k.
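The FLIP method analysed above is plain single-vertex local search: repeatedly move any vertex whose flip strictly improves the cut until no improving move exists. A minimal sketch (names and structure are my own, not the authors' code; this is the unsmoothed algorithm, not their analysis):

```python
import random

def flip_local_search(weights, seed=0):
    """FLIP local search for max-cut (a minimal sketch).

    weights: symmetric n x n matrix (list of lists) of edge weights.
    Starts from a random cut and repeatedly flips any vertex whose
    move strictly improves the cut value, until no such vertex
    exists, i.e. until a local optimum is reached.
    """
    rng = random.Random(seed)
    n = len(weights)
    side = [rng.randint(0, 1) for _ in range(n)]

    def gain(v):
        # Gain from moving v across the cut: edges to same-side
        # neighbours become cut (+w); currently-cut edges become uncut (-w).
        return sum(weights[v][u] * (1 if side[u] == side[v] else -1)
                   for u in range(n) if u != v)

    improved = True
    while improved:
        improved = False
        for v in range(n):
            if gain(v) > 0:
                side[v] = 1 - side[v]
                improved = True
    return side

def cut_value(weights, side):
    """Total weight of edges crossing the cut."""
    n = len(weights)
    return sum(weights[u][v] for u in range(n) for v in range(u + 1, n)
               if side[u] != side[v])
```

On a unit-weight triangle this terminates at a local (here also global) optimum of value 2; the smoothed-analysis question is how many flips loops like this take when the weights are randomly perturbed.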


Author(s):  
A. H. Milburn

One of the most technically challenging reactor decommissioning projects in the UK, if not the world, is being tackled in a new way, managed by a team led by the United Kingdom Atomic Energy Authority. Windscale Pile 1, a graphite-moderated, air-cooled, horizontal, natural-uranium-fuelled reactor, was damaged by fire in October 1957. De-fuelling, initial clean-up and isolation operations were carried out in the 1960s. During the 1980s and 90s a successful Phase 1 decommissioning campaign resulted in the plant being cleared of all accessible fuel and graphite debris, sealed and isolated from associated facilities, and put on a monitoring and surveillance regime while plans for dismantling were developed. For years, intrusive inspection of the fire-damaged region has been precluded on safety grounds. Consequently, early plans for dismantling were constructed using pessimistic assumptions and worst-case predictions. This in turn led to technical, financial and regulatory hurdles which were found to be too high to overcome. The new approach utilises the best from several areas:
• The design process incorporates principles of the US DoE safety analysis process to address safety, and adds further key stages of design concept and detail to generate concurrent development of a technical solution and a safety case.
• A staged and gated project management process provides for stakeholder involvement and consensus at key stages.
• Targeted knowledge acquisition is used to minimise uncertainty.
• A stepwise approach to intrusive surveys is employed to systematically increase confidence.
The result is a process which yields the optimum solution in terms of safety, environmental impact, technical feasibility, political acceptability and affordability.
The change from previous approaches is that the project starts from the hazards and associated hazard-management strategies, through engineering concept, to design, manufacture and testing of the resulting solution, rather than starting with the engineer's "good idea" and then trying to make it work safely and at an affordable price. Progress has been made in making the intrusive survey work a reality. This is a significant step in building a realistic picture of the physical and radiological state of the core and in building confidence in the process.


2021 ◽  
Author(s):  
KESHAVA PRASAD HALEMANE

The Symmetric Primal-Dual Symplex Pivot Decision Strategy (spdspds) is a novel iterative algorithm to solve linear programming problems. Here, a symplex pivoting operation is considered simply as an exchange between a basic (dependent) variable and a non-basic (independent) variable in Tucker's Compact Symmetric Tableau (CST), a unique symmetric representation common to both the primal and the dual of a linear programming problem in its standard canonical form. From this viewpoint, the classical simplex pivoting operation of Dantzig may be considered a restricted special case. The infeasibility index associated with a symplex tableau is defined as the sum of the number of primal variables and the number of dual variables that are infeasible. A measure of goodness, serving as a global effectiveness measure of a pivot selection, is defined as the decrease in the infeasibility index resulting from that selection. At each iteration, the symplex pivot element is selected to achieve the best possible decrease in the infeasibility index from among a wide range of candidate choices with non-zero values, limited only by considerations of potential numerical instability. The algorithm terminates when no further reduction in the infeasibility index is possible; the tableau is then checked against the terminal tableau types to classify the problem: a termination with an infeasibility index of zero indicates an optimum solution. The worst-case computational complexity of spdspds is shown to be O(L^1.5).
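The infeasibility index defined above is just a count of sign-condition violations on both sides of the duality. A minimal sketch under a standard-form reading (min c'x, Ax = b, x ≥ 0; the function name and the tolerance are my own, and this omits all of the CST pivoting machinery):

```python
def infeasibility_index(primal_values, reduced_costs, tol=1e-9):
    """Infeasibility index of a tableau, per the abstract's definition:
    the number of primal variables plus the number of dual variables
    that violate their sign conditions (a minimal sketch).

    primal_values: current basic primal variable values (infeasible if < 0).
    reduced_costs: dual slacks / reduced costs (infeasible if < 0).
    """
    primal_bad = sum(1 for v in primal_values if v < -tol)
    dual_bad = sum(1 for r in reduced_costs if r < -tol)
    return primal_bad + dual_bad
```

A tableau with index 0 is simultaneously primal- and dual-feasible, which is exactly the optimality termination the abstract describes; the algorithm's pivot rule greedily drives this count down.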


2013 ◽  
Vol 2013 ◽  
pp. 1-12 ◽  
Author(s):  
Phinit Tontragunrat ◽  
Sujin Bureerat

Practical optimum design of structures often involves parameters with uncertainties. There are several ways to deal with such optimisation problems, and one of the approaches is an antioptimisation process. The task is to find the optimum solution for the common design variables while simultaneously searching for the worst-case scenario of the uncertain parameters. This paper proposes a metaheuristic based on population-based incremental learning (PBIL) for solving antioptimisation of trusses. The new algorithm, called two-level PBIL, consists of an outer and an inner loop. Five antioptimisation problems are posed to test the optimiser's performance. The numerical results show that using PBIL probability vectors to handle the antioptimisation of trusses is effective, and the two-level PBIL can be considered a powerful optimiser for this class of problems.
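For orientation, plain single-level PBIL maintains a probability vector over binary design strings and nudges it toward the best sampled individual each generation; the paper's two-level variant nests one such loop (worst-case uncertain parameters) inside another (design variables). A minimal single-level sketch (parameter values and names are illustrative, not the paper's):

```python
import random

def pbil(fitness, n_bits, pop_size=20, lr=0.1, iters=50, seed=1):
    """Population-based incremental learning (minimal single-level sketch).

    Maintains a probability vector p over n_bits binary genes; each
    generation samples pop_size bit strings from p, then moves p toward
    the best (elite) sample by learning rate lr.
    """
    rng = random.Random(seed)
    p = [0.5] * n_bits
    best, best_fit = None, float("-inf")
    for _ in range(iters):
        pop = [[1 if rng.random() < p[i] else 0 for i in range(n_bits)]
               for _ in range(pop_size)]
        elite = max(pop, key=fitness)
        f = fitness(elite)
        if f > best_fit:
            best, best_fit = elite, f
        # Shift the probability vector toward the elite sample.
        p = [(1 - lr) * p[i] + lr * elite[i] for i in range(n_bits)]
    return best, best_fit
```

On the toy "onemax" problem (fitness = number of ones) this converges quickly; in the two-level setting the inner loop would instead maximise the constraint violation over the uncertain parameters for each candidate design.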


Author(s):  
J.D. Geller ◽  
C.R. Herrington

The minimum magnification at which an image can be acquired is determined by the design and implementation of the electron optical column and the scanning and display electronics. It is also a function of the working distance and, possibly, the accelerating voltage. For secondary and backscattered electron images there are usually no other limiting factors. However, for x-ray maps there are further considerations. Energy-dispersive x-ray spectrometers (EDS) have a much larger solid angle of detection than WDS. They also do not suffer from the Bragg's-law focusing effects which limit the angular range and focusing distance from the diffracting crystal. In practical terms, EDS maps can be acquired at the lowest magnification of the SEM, assuming the collimator does not cut off the x-ray signal. For WDS, the focusing properties of the crystal limit the angular range of acceptance of the incident x-radiation. The range depends on the 2d spacing of the crystal, with the acceptance angle increasing with 2d spacing. The natural line width of the x-ray also plays a role. For the metal-layered crystals used to diffract soft x-rays, such as Be–O, the minimum magnification is approximately 100X. In the worst case, for the LiF crystal, which diffracts Ti–Zn, ~1000X is the minimum.
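The Bragg's-law constraint mentioned above can be sketched numerically: n·λ = 2d·sin(θ). The sample values in the note below (LiF 2d ≈ 0.4027 nm, Ti Kα λ ≈ 0.275 nm) are common textbook figures, not values from this abstract:

```python
import math

def bragg_angle_deg(wavelength_nm, two_d_nm, order=1):
    """Bragg diffraction angle theta from n*lambda = 2d*sin(theta)
    (a minimal sketch).

    Returns theta in degrees, or None when the wavelength is too long
    to be diffracted by a crystal with the given 2d spacing.
    """
    s = order * wavelength_nm / two_d_nm
    if s > 1.0:
        return None  # sin(theta) cannot exceed 1: line is out of range
    return math.degrees(math.asin(s))
```

For LiF and Ti Kα this gives roughly 43°; because each wavelength must arrive within a narrow band around its Bragg angle, large low-magnification scan fields defocus the spectrometer, which is why the WDS minimum magnification quoted above is so much higher than for EDS.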


2008 ◽  
Author(s):  
Sonia Savelli ◽  
Susan Joslyn ◽  
Limor Nadav-Greenberg ◽  
Queena Chen

Author(s):  
Akira YAMAWAKI ◽  
Hiroshi KAMABE ◽  
Shan LU
