Improving the Smoothed Complexity of FLIP for Max Cut Problems

2021 ◽  
Vol 17 (3) ◽  
pp. 1-38
Author(s):  
Ali Bibak ◽  
Charles Carlson ◽  
Karthekeyan Chandrasekaran

Finding locally optimal solutions to MAX-CUT and MAX-k-CUT is a well-known PLS-complete problem. An instinctive approach to finding such a locally optimal solution is the FLIP method. Even though FLIP requires exponential time on worst-case instances, it tends to terminate quickly on practical instances. To explain this discrepancy, the run-time of FLIP has been studied in the smoothed complexity framework. Etscheid and Röglin (ACM Transactions on Algorithms, 2017) showed that the smoothed complexity of FLIP for max-cut in arbitrary graphs is quasi-polynomial. Angel, Bubeck, Peres, and Wei (STOC, 2017) showed that the smoothed complexity of FLIP for max-cut in complete graphs is O(Φ^5 n^15.1), where Φ is an upper bound on the random edge-weight density and n is the number of vertices in the input graph. While Angel, Bubeck, Peres, and Wei's result established the first polynomial smoothed complexity, they also conjectured that their run-time bound is far from optimal. In this work, we make substantial progress toward improving the run-time bound. We prove that the smoothed complexity of FLIP for max-cut in complete graphs is O(Φ n^7.83). Our results are based on a carefully chosen matrix whose rank captures the run-time of the method, along with improved rank bounds for this matrix and an improved union bound based on it. In addition, our techniques provide a general framework for analyzing FLIP in the smoothed setting. We illustrate this general framework by showing that the smoothed complexity of FLIP is polynomial for MAX-3-CUT in complete graphs and quasi-polynomial for MAX-k-CUT in arbitrary graphs. We believe that our techniques should also be of interest toward showing smoothed polynomial complexity of FLIP for MAX-k-CUT in complete graphs for larger constants k.
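For intuition, here is a minimal sketch of the FLIP method for max-cut, assuming the graph is given as a symmetric weight matrix with zero diagonal; the function and variable names are illustrative, not the authors' implementation:

```python
import numpy as np

def flip_local_search(W, side=None, rng=None):
    """FLIP local search for max-cut. W is a symmetric (n x n) weight
    matrix with zero diagonal; `side` is a 0/1 vector assigning each
    vertex to one side of the cut. Repeatedly flips a single vertex
    whenever doing so strictly increases the cut weight, and returns
    a locally optimal assignment."""
    rng = np.random.default_rng() if rng is None else rng
    n = W.shape[0]
    side = rng.integers(0, 2, size=n) if side is None else side
    improved = True
    while improved:
        improved = False
        for v in range(n):
            same = side == side[v]              # vertices on v's current side
            gain = W[v, same].sum() - W[v, ~same].sum()
            if gain > 0:                        # flipping v enlarges the cut
                side[v] ^= 1
                improved = True
    return side
```

On worst-case weights this loop can take exponentially many flips; the smoothed analysis above bounds its expected length when the weights are randomly perturbed.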

2013 ◽  
Vol 48 ◽  
pp. 231-252 ◽  
Author(s):  
I. P. Gent

I prove that an implementation technique for scanning lists in backtracking search algorithms is optimal. The result applies to a simple general framework, which I present: applications include watched-literal unit propagation in SAT and a number of examples in constraint satisfaction. Techniques like watched literals are known to be highly space-efficient and effective in practice. When implemented in the "circular" approach described here, these techniques also have optimal run time per branch in big-O terms when amortized across a search tree. This also applies when multiple list elements must be found. The constant-factor overhead in the worst case is only 2. Replacing the existing non-optimal implementation of unit propagation in MiniSat speeds up propagation by 29%, though this is not enough to improve overall run time significantly.
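As an illustration of the circular approach, the following hedged sketch scans a list for an acceptable element, resuming from where the previous scan stopped and wrapping around at the end; the names and the generic predicate are assumptions, not the paper's code:

```python
def circular_scan(lst, start, ok):
    """Scan `lst` for an element satisfying predicate `ok`, starting at
    index `start` (where the previous scan stopped) and wrapping around.
    Returns (index, next_start) on success and (None, start) on failure.
    Because each scan resumes rather than restarts, the inspections
    amortize to O(1) per element per branch of the search tree."""
    n = len(lst)
    for step in range(n):
        i = (start + step) % n
        if ok(lst[i]):
            return i, i          # the next scan resumes at the found element
    return None, start           # no acceptable element exists in the list
```

In a watched-literal setting, `ok` would test whether a literal is non-false and therefore eligible to be watched.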


Author(s):  
Satyakiran Munaga ◽  
Francky Catthoor

Advanced technologies such as sub-45 nm CMOS and 3D integration are known to exhibit a larger number of reliability failure mechanisms, which also progress more rapidly. Classical reliability assessment methodology, which assumes ad-hoc failure criteria and worst-case conditions for all influencing dynamic aspects, is no longer viable in these technologies. In this paper, the authors advocate that managing temperature and reliability at run-time is necessary to overcome this reliability wall without incurring a significant cost penalty. The nonlinear nature of modern systems, however, makes run-time control very challenging. The authors argue that full cost-consciousness requires a truly proactive controller that can efficiently manage system slack with the future in perspective. This paper introduces the concept of a “gas-pedal,” which enhances the effectiveness of the proactive controller in minimizing cost without sacrificing the hard guarantees required by the constraints. Reliability-aware dynamic energy management of a processor running an AVC motion-compensation task is used as a motivational case study to illustrate the proposed concepts.


2019 ◽  
Vol 9 (1) ◽  
pp. 5
Author(s):  
Mini Jayakrishnan ◽  
Alan Chang ◽  
Tony Tae-Hyoung Kim

Energy-efficient semiconductor chips are in high demand to meet the needs of today’s smart products. Advanced technology nodes add large design margins to cope with rising variations, at the cost of power, area, and performance. Existing run-time resilience techniques are not cost-effective because of the additional circuits they require. In this paper, we propose a design-time resilience technique that uses a clock-stretched flip-flop to redistribute the available slack in the processor pipeline to the critical paths. We use the opportunistic slack to redesign the critical fan-in logic using logic reshaping, better-than-worst-case sigma-corner libraries, and multi-bit flip-flops to achieve power and area savings. Experimental results show that we can tune the logic and the library to obtain significant power and area savings of 69% and 15%, respectively, in the execute pipeline stage of the processor compared to the traditional worst-case design. In contrast, existing run-time resilience hardware incurs 36% power and 2% area overhead, respectively.


Author(s):  
Jia Xu

In most embedded, real-time applications, processes need to satisfy various important constraints and dependencies, such as release times, offsets, precedence relations, and exclusion relations. Embedded, real-time systems with high assurance requirements often must execute many different types of processes with such constraints and dependencies. Some of the processes may be periodic and some may be asynchronous. Some may have hard deadlines and some may have soft deadlines. For some of the processes, especially the hard real-time processes, complete knowledge of their characteristics can and must be acquired before run-time. For other processes, prior knowledge of their worst-case computation time and their data requirements may not be available. It is important for many embedded real-time systems to be able to simultaneously satisfy as many important constraints and dependencies as possible, for as many different types of processes as possible. In this paper, we discuss which types of important constraints and dependencies can be satisfied among which types of processes. We also present a method which guarantees that every process, whether periodic or asynchronous and whether its deadline is hard or soft, will be completed before predetermined time limits, provided its characteristics are known before run-time, while simultaneously satisfying many important constraints and dependencies with other processes.
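As a rough illustration of the constraint types involved, here is a hypothetical process descriptor; all field names are assumptions chosen for exposition, not the author's notation:

```python
from dataclasses import dataclass, field
from typing import Optional, Set

@dataclass
class Process:
    """One real-time process with the constraint and dependency types
    discussed above. Times are integer ticks; fields are illustrative."""
    name: str
    wcet: int                          # estimated worst-case computation time
    release_time: int = 0              # earliest time the process may start
    deadline: Optional[int] = None     # None models a soft deadline
    period: Optional[int] = None       # set only for periodic processes
    offset: int = 0                    # offset relative to the period start
    precedes: Set[str] = field(default_factory=set)  # must finish before these start
    excludes: Set[str] = field(default_factory=set)  # may not overlap with these
```

A pre-run-time scheduler of the kind described would take a set of such descriptors and produce a timetable satisfying every listed relation.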


2014 ◽  
Vol 513-517 ◽  
pp. 539-544
Author(s):  
Chun Yang Zhang ◽  
Jun Fu Li ◽  
Qian Xu

A variety of routing approaches are employed by global routers in VLSI circuit design. Rip-up and reroute, a conveniently implemented method, is widely used in most modern global routers, and maze routing is typically performed iteratively as the final technique to eliminate overflow. The maze algorithm and its ramifications can obtain an optimal solution, but they cost considerable CPU time when used indiscriminately. In this work, we present a global router called Bottom-Up Router (BU-Router), built on an optimized multi-source multi-sink maze algorithm. BU-Router processes not whole nets but the segments of nets, in a sequence ordered by length. As routing progresses, segments are fixed onto edges of the global routing graph; once an edge is saturated, it serves as a foundation (the "bottom" of the bottom-up scheme) and is set as a blockage that accepts no further paths, pushing prospective congestion away from that edge. In addition, BU-Router optimizes the cost function in two ways: it makes the function adaptive, and it steers paths away from congestion centers. Finally, a specifically optimized maze algorithm is proposed for routing long-distance segments, so as to reduce run-time.
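The sketch below illustrates the kind of congestion-adaptive, blockage-respecting maze search described above, assuming a Dijkstra-style expansion over the global routing graph; the cost form and all names are illustrative assumptions, not the BU-Router implementation:

```python
import heapq

def maze_route(adj, capacity, usage, sources, sinks, alpha=2.0):
    """Multi-source multi-sink maze search. `adj[c]` lists (neighbor,
    edge) pairs of cell c; saturated edges (usage >= capacity) act as
    blockages, and edge cost grows with congestion so paths steer away
    from crowded regions. Returns (path cost, parent map) for the
    cheapest source-to-sink connection, or (None, parents) if blocked."""
    dist = {c: 0.0 for c in sources}
    heap = [(0.0, c) for c in sources]
    parent = {}
    sinks = set(sinks)
    while heap:
        d, c = heapq.heappop(heap)
        if c in sinks:
            return d, parent                    # cheapest connection found
        if d > dist[c]:
            continue                            # stale heap entry
        for nbr, e in adj[c]:
            if usage[e] >= capacity[e]:
                continue                        # saturated edge is a blockage
            nd = d + 1.0 + alpha * usage[e] / capacity[e]  # congestion-adaptive cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                parent[nbr] = c
                heapq.heappush(heap, (nd, nbr))
    return None, parent
```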


2021 ◽  
Author(s):  
Dimple Sharma ◽  
Lev Kirischian ◽  
Valeri Kirischian

Systems for application domains like robotics, aerospace, defense, and autonomous vehicles are usually developed on System-on-Programmable-Chip (SoPC) platforms, capable of supporting several multi-modal computation-intensive tasks on their FPGAs. Since such systems are mostly autonomous and mobile, they have rechargeable power sources and, therefore, varying power budgets. They may also develop hardware faults due to radiation, thermal cycling, aging, etc. Systems must be able to sustain the performance requirements of their multi-task, multi-modal workload in the presence of variations in available power or the occurrence of hardware faults. This paper presents an approach for mitigating power-budget variations and hardware faults (transient and permanent) by run-time structural adaptation of the SoPC. The proposed method is based on dynamically allocating, relocating, and re-integrating task-specific processing circuits inside the partially reconfigurable FPGA to accommodate the available power budget, satisfy tasks’ performance and hardware resource constraints, and/or restore task functionality affected by hardware faults. The proposed method has been experimentally implemented on the ARM Cortex-A9 processor of the Xilinx Zynq XC7Z020 FPGA. Results show that structural adaptation can be performed within milliseconds, since the worst-case decision-making process does not exceed the reconfiguration time of a partial bit-stream.
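As a loose illustration of the run-time decision problem, the sketch below greedily downgrades task implementation variants until the configuration fits the available power budget; it is an assumed simplification of resource-constrained variant selection, not the authors' adaptation algorithm:

```python
def select_variants(tasks, power_budget):
    """tasks: {name: [(power, perf), ...]} with each task's circuit
    variants sorted from fastest (most power) to slowest (least power).
    Start every task on its fastest variant, then repeatedly downgrade
    the task whose next-cheaper variant saves the most power, until the
    total power fits the budget. Returns {name: variant index} or None
    if no configuration is feasible."""
    choice = {name: 0 for name in tasks}
    total = sum(variants[0][0] for variants in tasks.values())
    while total > power_budget:
        best, saving = None, 0.0
        for name, i in choice.items():
            variants = tasks[name]
            if i + 1 < len(variants):
                s = variants[i][0] - variants[i + 1][0]  # power saved by downgrading
                if s > saving:
                    best, saving = name, s
        if best is None:
            return None            # even the cheapest variants exceed the budget
        choice[best] += 1
        total -= saving
    return choice
```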


Author(s):  
A. H. Milburn

One of the most technically challenging reactor decommissioning projects in the UK, if not the world, is being tackled in a new way, managed by a team led by the United Kingdom Atomic Energy Authority. Windscale Pile 1, a graphite-moderated, air-cooled, horizontal, natural-uranium-fuelled reactor, was damaged by fire in October 1957. De-fuelling, initial clean-up, and isolation operations were carried out in the 1960s. During the 1980s and 90s, a successful Phase 1 decommissioning campaign resulted in the plant being cleared of all accessible fuel and graphite debris, sealed and isolated from associated facilities, and put on a monitoring and surveillance regime while plans for dismantling were developed. For years, intrusive inspection of the fire-damaged region was precluded on safety grounds. Consequently, early plans for dismantling were constructed on pessimistic assumptions and worst-case predictions. This in turn led to technical, financial, and regulatory hurdles which proved too high to overcome. The new approach draws on the best from several areas: • The design process incorporates principles of the US DoE safety analysis process to address safety, and adds further key stages of design concept and detail to generate concurrent development of a technical solution and a safety case. • A staged and gated project management process provides for stakeholder involvement and consensus at key stages. • Targeted knowledge acquisition is used to minimise uncertainty. • A stepwise approach to intrusive surveys is employed to systematically increase confidence. The result is a process which yields the optimum solution in terms of safety, environmental impact, technical feasibility, political acceptability, and affordability. The change from previous approaches is that the project starts from the hazards and their associated hazard-management strategies, through engineering concept, to design, manufacture, and testing of the resulting solution, rather than starting with the engineer’s “good idea” and then trying to make it work safely and at an affordable price. Progress has been made in making the intrusive survey work a reality. This is a significant step in building a realistic picture of the physical and radiological state of the core and in building confidence in the process.


2009 ◽  
pp. 550-569
Author(s):  
Praveen Madiraju ◽  
Rajshekhar Sunderraman ◽  
Shamkant B. Navathe ◽  
Haibin Wang

Global semantic integrity constraints ensure integrity and consistency of data spanning multiple databases. In this paper, we take initial steps toward representing global semantic integrity constraints for XML databases. We also provide a general framework for checking global semantic integrity constraints for XML databases. Furthermore, we set forth an efficient algorithm for checking global semantic integrity constraints across multiple XML databases. Our algorithm is efficient for three reasons: (1) it does not require the update statement to be executed before the constraint check is carried out, so we avoid any potential problems associated with rollbacks; (2) sub-constraint checks are executed in parallel; and (3) most of the processing of the algorithm can happen at compile time, saving time at run-time. As a proof of concept, we present a prototype of a system implementing the ideas discussed in this paper.
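A minimal sketch of point (2): running the sub-constraint checks of one global constraint in parallel before the update is executed, so that the update proceeds only if every check passes. The callable-based interface is an assumption for illustration, not the paper's API:

```python
from concurrent.futures import ThreadPoolExecutor

def constraint_holds(sub_checks):
    """Evaluate the sub-constraint checks of one global constraint in
    parallel, each against its own XML database, *before* the update
    statement is executed. `sub_checks` is a list of zero-argument
    callables, each returning True if its database would still satisfy
    the constraint after the update. No rollback is ever needed because
    the update is only applied when all checks succeed."""
    with ThreadPoolExecutor() as pool:
        return all(pool.map(lambda check: check(), sub_checks))
```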


2016 ◽  
Vol 2016.12 (0) ◽  
pp. 2113
Author(s):  
Masao ARAKAWA ◽  
Hiroshi YAMAKAWA

Author(s):  
Jia Xu

Many embedded systems applications have hard timing requirements, where real-time processes with precedence and exclusion relations must be completed before specified deadlines. This requires that the worst-case computation times of the real-time processes be estimated with sufficient precision during system design, which can be difficult in practice. If the actual computation time of a real-time process during run-time exceeds the estimated worst-case computation time, an overrun will occur, which may cause the process not only to miss its own deadline but also to trigger a cascade of other real-time processes missing their deadlines, possibly resulting in total system failure. Conversely, if the actual computation time during run-time is less than the estimated worst-case computation time, an underrun will occur, which may result in under-utilization of system resources. This paper describes a method for handling underruns and overruns when scheduling a set of real-time processes with precedence and exclusion relations using a pre-run-time schedule. The technique tracks and utilizes unused processor time to reduce the chances of missing real-time process deadlines, thereby providing the capability to significantly increase both system utilization and system robustness in the presence of inaccurate estimates of the worst-case computation times of real-time processes.
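In the spirit of the described technique, here is a hypothetical slack tracker that banks the processor time freed by underruns and spends it to absorb later overruns; the class, field, and method names are illustrative assumptions, not the author's notation:

```python
class SlackTracker:
    """Bank unused processor time left by underruns so that later
    overruns can draw on it instead of pushing other processes past
    their deadlines. Times are integer ticks."""

    def __init__(self):
        self.spare = 0                               # accumulated unused time

    def complete(self, estimated_wcet, actual):
        """Record one process completion and return the residual overrun
        (time that could not be absorbed and may threaten deadlines)."""
        if actual <= estimated_wcet:
            self.spare += estimated_wcet - actual    # underrun: bank the slack
            return 0
        overrun = actual - estimated_wcet
        absorbed = min(overrun, self.spare)          # overrun: spend banked slack
        self.spare -= absorbed
        return overrun - absorbed
```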

