quadratic program
Recently Published Documents


TOTAL DOCUMENTS: 197 (FIVE YEARS: 70)

H-INDEX: 20 (FIVE YEARS: 2)

2022, Vol 2022, pp. 1-8
Author(s):  
Qingsong Tang

A proper cluster is usually defined as a maximally coherent group extracted from a set of objects using pairwise or more complicated similarities. In general hypergraphs, the clustering problem refers to the extraction of subhypergraphs with a higher internal density, for instance, maximal cliques in hypergraphs. Determining the clustering structure within hypergraphs is a significant problem in the area of data mining. Various works on detecting clusters in graphs and uniform hypergraphs have been published in the past decades. Recently, it has been shown that the maximum {1, 2}-clique size in {1, 2}-hypergraphs is related to the global maxima of a certain quadratic program based on the structure of the given nonuniform hypergraphs. In this paper, we first extend this result to relate strict local maxima of this program to certain maximal cliques, including 2-cliques or {1, 2}-cliques. We also explore the connection between edge-weighted clusters and strict local optimum solutions of a class of polynomials resulting from nonuniform {1, 2}-hypergraphs.
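The clique/quadratic-program connection above generalizes the classical Motzkin–Straus theorem for ordinary (2-uniform) graphs: the maximum of x^T A x over the standard simplex equals 1 − 1/ω(G), where ω(G) is the clique number. A minimal sketch of that base case (the graph below is an assumed example, not from the paper):

```python
import numpy as np

def motzkin_straus_value(A, clique):
    # Evaluate f(x) = x^T A x at the uniform distribution supported on a clique;
    # for a maximum clique this attains the global maximum 1 - 1/|clique|.
    x = np.zeros(A.shape[0])
    x[list(clique)] = 1.0 / len(clique)
    return x @ A @ x

# Assumed example graph: edges 0-1, 0-2, 0-3, 1-2, 2-3; maximum clique {0, 1, 2}.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

val = motzkin_straus_value(A, {0, 1, 2})
print(val)  # 1 - 1/3 ≈ 0.6667
```

The paper's contribution concerns the analogous polynomials arising from nonuniform {1, 2}-hypergraphs, where the objective is no longer a plain quadratic form.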


2021
Author(s):  
Vincent Graber,
Eugenio Schuster

Abstract ITER will be the first tokamak to sustain a fusion-producing, or burning, plasma. If the plasma temperature were to inadvertently rise in this burning regime, the positive correlation between temperature and the fusion reaction rate would establish a destabilizing positive feedback loop. Careful regulation of the plasma’s temperature and density, or burn control, is required to prevent these potentially reactor-damaging thermal excursions, neutralize disturbances and improve performance. In this work, a Lyapunov-based burn controller is designed using a full zero-dimensional nonlinear model. An adaptive estimator manages destabilizing uncertainties in the plasma confinement properties and the particle recycling conditions (caused by plasma-wall interactions). The controller regulates the plasma density with requests for deuterium and tritium particle injections. In ITER-like plasmas, the fusion-born alpha particles will primarily heat the plasma electrons, resulting in different electron and ion temperatures in the core. By considering separate response models for the electron and ion energies, the proposed controller can independently regulate the electron and ion temperatures by requesting that different amounts of auxiliary power be delivered to the electrons and ions. These two commands for a specific control effort (electron and ion heating) are sent to an actuator allocation module that optimally maps them to the heating actuators available to ITER: an electron cyclotron heating system (20 MW), an ion cyclotron heating system (20 MW), and two neutral beam injectors (16.5 MW each). Two different actuator allocators are presented in this work. The first actuator allocator finds the optimal mapping by solving a convex quadratic program that includes actuator saturation and rate limits. It is nonadaptive and assumes that the mapping between the commanded control efforts and the allocated actuators (i.e., the effector model) contains no uncertainties. 
The second actuator allocation module has an adaptive estimator to handle uncertainties in the effector model. This uncertainty includes actuator efficiencies, the fractions of neutral beam heating that are deposited into the plasma electrons and ions, and the tritium concentration of the fueling pellets. Furthermore, the adaptive allocator considers actuator dynamics (actuation lag) that contain uncertainty. This adaptive allocation algorithm is more computationally efficient than the aforementioned nonadaptive allocator because it is computed using dynamic update laws so that finding the solution to a static optimization problem is not required at every time step. A simulation study assesses the performance of the proposed adaptive burn controller augmented with each of the actuator allocation modules.
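The nonadaptive allocation step can be sketched as a bounded least-squares problem: minimize the mismatch between the commanded electron/ion heating and what the actuators deliver, subject to saturation and rate limits folded into box bounds. The effector matrix, rate limit, and heating requests below are illustrative assumptions, not ITER parameters:

```python
import numpy as np
from scipy.optimize import lsq_linear

# Assumed effector matrix: rows give the fraction of each actuator's power
# deposited into electrons (row 0) and ions (row 1). Columns: EC, IC, NBI1, NBI2.
B = np.array([[1.0, 0.2, 0.5, 0.5],
              [0.0, 0.8, 0.5, 0.5]])
p_max = np.array([20.0, 20.0, 16.5, 16.5])  # saturation limits (MW), from the abstract
rate = 2.0                                  # assumed max change per step (MW)
u_prev = np.zeros(4)                        # previous actuator commands

def allocate(d, u_prev):
    # Rate limits become box bounds around the previous command, intersected
    # with the saturation limits; lsq_linear then solves the convex QP
    # min ||B u - d||^2 subject to lb <= u <= ub.
    lb = np.maximum(0.0, u_prev - rate)
    ub = np.minimum(p_max, u_prev + rate)
    return lsq_linear(B, d, bounds=(lb, ub)).x

u = allocate(np.array([3.0, 1.0]), u_prev)  # request 3 MW to electrons, 1 MW to ions
print(np.round(B @ u, 3))
```

The adaptive allocator in the paper instead propagates dynamic update laws, avoiding a static optimization at every time step.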


Author(s):  
Ali Adibi,
Ehsan Salari

It has been recently shown that an additional therapeutic gain may be achieved if a radiotherapy plan is altered over the treatment course using a new treatment paradigm referred to in the literature as spatiotemporal fractionation. Because of the nonconvex and large-scale nature of the corresponding treatment plan optimization problem, the extent of the potential therapeutic gain that may be achieved from spatiotemporal fractionation has been investigated using stylized cancer cases to circumvent the arising computational challenges. This research aims at developing scalable optimization methods to obtain high-quality spatiotemporally fractionated plans with optimality bounds for clinical cancer cases. In particular, the treatment-planning problem is formulated as a quadratically constrained quadratic program and is solved to local optimality using a constraint-generation approach, in which each subproblem is solved using sequential linear/quadratic programming methods. To obtain optimality bounds, cutting-plane and column-generation methods are combined to solve the Lagrangian relaxation of the formulation. The performance of the developed methods is tested on deidentified clinical liver and prostate cancer cases. Results show that the proposed method is capable of achieving local-optimal spatiotemporally fractionated plans with an optimality gap of around 10%–12% for cancer cases tested in this study. Summary of Contribution: The design of spatiotemporally fractionated radiotherapy plans for clinical cancer cases gives rise to a class of nonconvex and large-scale quadratically constrained quadratic programming (QCQP) problems, the solution of which requires the development of efficient models and solution methods. To address the computational challenges posed by the large-scale and nonconvex nature of the problem, we employ large-scale optimization techniques to develop scalable solution methods that find local-optimal solutions along with optimality bounds.
We test the performance of the proposed methods on deidentified clinical cancer cases. The proposed methods in this study can, in principle, be applied to solve other QCQP formulations, which commonly arise in several application domains, including graph theory, power systems, and signal processing.
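A toy illustration of solving a nonconvex QCQP to local optimality with a sequential quadratic programming method, echoing the subproblem solver named above. The two-variable instance (an indefinite quadratic minimized over the unit ball) is an assumption for illustration, orders of magnitude smaller than a treatment-planning problem:

```python
import numpy as np
from scipy.optimize import minimize

# Indefinite Q makes the objective nonconvex; the ball constraint is quadratic,
# so this is a (tiny) QCQP. SLSQP finds a local optimum.
Q = np.diag([1.0, -2.0])

def obj(x):
    return x @ Q @ x

res = minimize(obj, x0=np.array([0.1, 0.5]), method="SLSQP",
               constraints=[{"type": "ineq", "fun": lambda x: 1.0 - x @ x}])
print(np.round(res.x, 3), round(res.fun, 3))  # local optimum on the ball boundary
```

Here a trivial Lagrangian-style bound is available for sanity-checking: the objective is at least the smallest eigenvalue of Q (here −2) on the unit ball, so the local solution found is in fact globally optimal for this toy instance.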


Author(s):  
Youssef Hami,
Chakir Loqman

This research addresses the optimal allocation of tasks to processors so as to minimize the total costs of execution and communication, a problem known as the Task Assignment Problem (TAP) with nonuniform communication costs. To solve it, the first step formulates the problem as an equivalent zero-one quadratic program with a convex objective function, using a convexification technique based on the smallest eigenvalue. The second step applies the Continuous Hopfield Network (CHN) to solve the resulting program. Computational results are presented for instances from the literature and compared to solutions obtained by both the CPLEX solver and a heuristic genetic algorithm; applying the CHN algorithm alone improves on these results. The proposed approach thus confirms the theoretical results in practice and reaches optimal solutions in a short computation time.
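The smallest-eigenvalue convexification rests on the identity x_i² = x_i for binary variables: for any λ, x^T Q x + c^T x equals x^T (Q − λI) x + (c + λ1)^T x on binary points, and choosing λ = λ_min(Q) makes the quadratic term positive semidefinite. A minimal sketch with assumed random data (not a TAP instance):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
Q = (M + M.T) / 2                # symmetric and generally indefinite
c = rng.standard_normal(4)

lam = np.linalg.eigvalsh(Q)[0]   # smallest eigenvalue (eigvalsh sorts ascending)
Qc = Q - lam * np.eye(4)         # positive semidefinite shifted matrix
cc = c + lam * np.ones(4)        # compensating linear term

x = rng.integers(0, 2, size=4).astype(float)  # an arbitrary binary point
f = x @ Q @ x + c @ x            # original objective
g = x @ Qc @ x + cc @ x          # convexified objective: identical on {0,1}^n
print(abs(f - g), np.linalg.eigvalsh(Qc)[0])
```

The two objectives agree on every binary point while the convexified one has a convex continuous relaxation, which is what the CHN operates on.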


Author(s):  
Max Klimm,
Philipp Warode

We develop algorithms solving parametric flow problems with separable, continuous, piecewise quadratic, and strictly convex cost functions. The parameter considered is a common multiplier on the demand of all nodes. Our algorithms compute a family of flows that are each feasible for the respective demand and minimize the costs among the feasible flows for that demand. For single-commodity networks with homogeneous cost functions, our algorithm requires one matrix multiplication for the initialization, a rank-1 update for each nondegenerate step, and the solution of a convex quadratic program for each degenerate step. For nonhomogeneous cost functions, the initialization requires the solution of a convex quadratic program instead. For multi-commodity networks, both the initialization and every step of the algorithm require the solution of a convex program. As each step is mirrored by a breakpoint in the output, this yields output-polynomial algorithms in every case.
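A rank-1 update per step can be realized with the Sherman–Morrison formula: given L⁻¹, the inverse of L + uvᵀ is L⁻¹ − (L⁻¹u vᵀL⁻¹)/(1 + vᵀL⁻¹u), costing O(n²) instead of a fresh O(n³) factorization. A sketch on an assumed diagonal matrix (not one of the paper's network matrices):

```python
import numpy as np

def sherman_morrison(L_inv, u, v):
    # Update the known inverse L_inv after the rank-1 change L -> L + u v^T.
    Lu = L_inv @ u
    vL = v @ L_inv
    return L_inv - np.outer(Lu, vL) / (1.0 + v @ Lu)

# Assumed well-conditioned example so the update denominator stays away from 0.
L = np.diag([5.0, 6.0, 7.0, 8.0, 9.0])
u = np.ones(5)
v = np.ones(5)

updated = sherman_morrison(np.linalg.inv(L), u, v)
direct = np.linalg.inv(L + np.outer(u, v))
print(np.allclose(updated, direct))
```

In practice one would maintain a factorization rather than an explicit inverse, but the per-step cost argument is the same.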


2021
Author(s):  
Ernesto Hernandez-Hinojosa,
Aykut Satici,
Pranav A. Bhounsule

Abstract To walk over constrained environments, bipedal robots must meet concise control objectives of speed and foot placement. The decisions made at the current step need to factor in their effects over a time horizon. Such step-to-step control is formulated as a two-point boundary value problem (2-BVP). As the dimensionality of the biped increases, it becomes increasingly difficult to solve this 2-BVP in real time. The common approach of using a simple linearized model for real-time planning, followed by mapping onto the high-dimensional model, cannot capture the nonlinearities and leads to potentially poor performance at fast walking speeds. In this paper, we present a framework for real-time control based on using partial feedback linearization (PFL) for model reduction, followed by a data-driven approach to find a quadratic polynomial model for the 2-BVP. This simple step-to-step model, along with constraints, is then used to formulate and solve a quadratically constrained quadratic program that generates real-time control commands. We demonstrate the efficacy of the approach in simulation on a 5-link biped following a reference velocity profile and on a terrain with ditches. A video is available here: https://youtu.be/-UL-wkv4XF8.
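The data-driven step can be sketched as an ordinary least-squares fit of a quadratic polynomial to sampled step-to-step data. The scalar "true" map below is an assumed stand-in for rollouts of the PFL-reduced biped, not the authors' model:

```python
import numpy as np

def true_map(x):
    # Assumed stand-in for the sampled step-to-step dynamics.
    return 0.8 * x + 0.1 * x**2

# Sample the map, then fit q(x) = c0 + c1*x + c2*x^2 by least squares.
x = np.linspace(-1.0, 1.0, 25)
y = true_map(x)
Phi = np.vstack([np.ones_like(x), x, x**2]).T   # design matrix
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(np.round(coef, 3))  # recovers [0, 0.8, 0.1]
```

The fitted quadratic surrogate is what makes the downstream step-to-step optimization a quadratically constrained quadratic program rather than a generic nonlinear program.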


Author(s):  
Mike Gimelfarb,
Scott Sanner,
Chi-Guhn Lee

Learning from Demonstrations (LfD) is a powerful approach for incorporating advice from experts in the form of demonstrations. However, demonstrations often come from multiple suboptimal experts with conflicting goals, rendering them difficult to incorporate effectively in online settings. To address this, we formulate a quadratic program whose solution yields an adaptive weighting over experts that can be used to sample experts with relevant goals. In order to compare different source and target task goals safely, we model their uncertainty using normal-inverse-gamma priors, whose posteriors are learned from demonstrations using Bayesian neural networks with a shared encoder. Our resulting approach, which we call Bayesian Experience Reuse, can be applied to LfD in static and dynamic decision-making settings. We demonstrate its effectiveness for minimizing multi-modal functions and for optimizing a high-dimensional supply chain with cost uncertainty, where it is also shown to improve upon the performance of the demonstrators' policies.
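The expert-weighting step is a quadratic program over the probability simplex. A minimal sketch: minimize a quadratic disagreement measure subject to the weights being nonnegative and summing to one. The diagonal "discrepancy" matrix below is an assumption for illustration; the paper derives the objective from learned posteriors:

```python
import numpy as np
from scipy.optimize import minimize

# Assumed per-expert discrepancy with the target task (expert 0 matches best).
Q = np.diag([1.0, 4.0, 9.0])

res = minimize(lambda w: w @ Q @ w,
               x0=np.full(3, 1.0 / 3.0),
               method="SLSQP",
               bounds=[(0.0, 1.0)] * 3,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
print(np.round(res.x, 3))  # weights favor the best-matching expert
```

For a diagonal objective the optimum has a closed form, w_i ∝ 1/Q_ii, which makes the solver easy to sanity-check.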


Author(s):  
E. Alper Yıldırım

Abstract We study convex relaxations of nonconvex quadratic programs. We identify a family of so-called feasibility preserving convex relaxations, which includes the well-known copositive and doubly nonnegative relaxations, with the property that the convex relaxation is feasible if and only if the nonconvex quadratic program is feasible. We observe that each convex relaxation in this family implicitly induces a convex underestimator of the objective function on the feasible region of the quadratic program. This alternative perspective on convex relaxations enables us to establish several useful properties of the corresponding convex underestimators. In particular, if the recession cone of the feasible region of the quadratic program does not contain any directions of negative curvature, we show that the convex underestimator arising from the copositive relaxation is precisely the convex envelope of the objective function of the quadratic program, strengthening Burer’s well-known result on the exactness of the copositive relaxation in the case of nonconvex quadratic programs. We also present an algorithmic recipe for constructing instances of quadratic programs with a finite optimal value but an unbounded relaxation for a rather large family of convex relaxations including the doubly nonnegative relaxation.
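For concreteness, the copositive relaxation referenced above is usually stated via Burer's completely positive reformulation (a sketch in standard notation from the literature, not necessarily the paper's): for the quadratic program min { xᵀQx + 2cᵀx : aᵢᵀx = bᵢ (i = 1, …, m), x ≥ 0 },

```latex
\min_{x,\,X}\ \langle Q, X\rangle + 2c^{\top}x
\quad\text{s.t.}\quad
a_i^{\top}x = b_i,\;\; a_i^{\top} X a_i = b_i^{2}\ (i=1,\dots,m),\;\;
\begin{pmatrix} 1 & x^{\top}\\ x & X \end{pmatrix} \in \mathcal{CP}_{n+1},
```

where CP_{n+1} is the cone of completely positive matrices (whose dual is the copositive cone). The doubly nonnegative relaxation replaces CP_{n+1} by the tractable cone of positive semidefinite matrices with nonnegative entries, which is where the unboundedness phenomenon constructed in the abstract can arise.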


2021
Author(s):  
Junsi Zhang

In this thesis, we formulate a new problem based on Max-Cut called Generalized Max-Cut. This problem takes as input a graph and two real numbers (a, b), where a > 0 and −a < b < a, and outputs a number. The restriction on the pair (a, b) avoids trivializing the problem. We formulate a quadratic program for Generalized Max-Cut and relax it to a semi-definite program; most algorithms in this thesis require solving this semi-definite program. The main algorithm in this thesis is the 2-Dimensional Rounding algorithm, designed by Avidor and Zwick, with the restriction that the semi-definite program of the input graph must have 2-dimensional solutions. This algorithm uses a factor of randomness, β ∈ [0, 1], that depends on the input to Generalized Max-Cut. We improve the performance of this algorithm by numerically finding a better β.
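The classical quadratic program that Generalized Max-Cut builds on assigns a spin x_i ∈ {−1, +1} to each vertex, with an edge (i, j) contributing (1 − x_i x_j)/2, so Max-Cut is the maximum of this sum over all spin vectors; the thesis's (a, b) parameters generalize the per-edge contribution. A brute-force sketch of the standard special case on an assumed 5-cycle:

```python
import itertools
import numpy as np

# Assumed example graph: the 5-cycle.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
n = 5

def cut_value(x):
    # Quadratic objective: each cut edge (x_i != x_j) contributes 1.
    return sum((1 - x[i] * x[j]) / 2 for i, j in edges)

best = max(cut_value(np.array(x))
           for x in itertools.product([-1, 1], repeat=n))
print(best)  # an odd cycle cannot be fully cut: max cut is 4
```

Replacing the exhaustive search with the semi-definite relaxation and a rounding scheme (such as 2-Dimensional Rounding) is what makes the problem tractable on large graphs.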

