On the minimum-norm solution of convex quadratic programming

Author(s):  
Saeed Ketabchi ◽  
Hossein Moosaei ◽  
Milan Hladik

We discuss some basic concepts and present a numerical procedure for finding the minimum-norm solution of convex quadratic programs (QPs) subject to linear equality and inequality constraints. Our approach is based on a theorem of the alternative and on a convenient characterization of the solution set of convex QPs. We show that this problem can be reduced to a simple constrained minimization problem with a once-differentiable convex objective function. We use the finite termination of an appropriate Newton's method to solve this problem. Numerical results show that the proposed method is efficient.
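The notion of a minimum-norm solution can be illustrated on the simplest convex QP with a non-unique minimizer. The sketch below is a toy example, not the authors' Newton-based procedure: for the unconstrained QP min 0.5*||Ax - b||^2 with a rank-deficient A, the solution set is an affine subspace, and the Moore-Penrose pseudoinverse picks out its minimum-norm element (all matrices here are made up for illustration).

```python
import numpy as np

# Toy QP: min 0.5*||A x - b||^2. A has a nontrivial null space,
# so the minimizer is non-unique; the solution set is x* + null(A).
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])   # rank 2; null space spanned by (2, -1, 0)
b = np.array([3.0, 4.0])

# The pseudoinverse gives the minimum-norm element of the solution set.
x_min_norm = np.linalg.pinv(A) @ b

# Any other minimizer differs by a null-space component and has larger norm.
n = np.array([2.0, -1.0, 0.0]) / np.sqrt(5.0)   # unit null vector of A
x_other = x_min_norm + 1.5 * n

# Both points attain the same (optimal) residual...
residual_gap = np.linalg.norm(A @ x_other - b) - np.linalg.norm(A @ x_min_norm - b)
# ...but x_min_norm is orthogonal to null(A), so it has the smaller norm.
```

The paper's contribution is handling general convex QPs with equality and inequality constraints, where no closed-form pseudoinverse expression exists and the reduction to a once-differentiable convex problem is what makes a Newton method applicable.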

1987 ◽  
Vol 24 (4) ◽  
pp. 396-403 ◽  
Author(s):  
Ajith Kumar ◽  
William R. Dillon

The authors demonstrate a general, flexible constrained discrimination method for testing hypotheses about the segmentability of a target population using categorical descriptors when additional information is available. The method applies the principle of minimum discrimination information (MDI) to the estimation of multinomial probabilities under linear equality and inequality constraints.
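The MDI principle described above can be sketched numerically: estimate multinomial probabilities p by minimizing the discrimination information sum_i p_i*log(p_i/q_i) against a reference distribution q, subject to linear equality and inequality constraints. This is a generic illustration with made-up constraints, not the authors' segmentation test; it uses scipy's SLSQP solver rather than the iterative MDI fitting algorithms of that literature.

```python
import numpy as np
from scipy.optimize import minimize

q = np.array([0.25, 0.25, 0.25, 0.25])      # reference (uniform) distribution

def discrimination_info(p):
    # MDI objective: Kullback's directed divergence of p from q.
    p = np.clip(p, 1e-12, None)             # guard the log at the boundary
    return np.sum(p * np.log(p / q))

constraints = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},   # linear equality: probabilities sum to 1
    {"type": "ineq", "fun": lambda p: p[0] - 0.4},    # linear inequality (illustrative): p_0 >= 0.4
]
bounds = [(0.0, 1.0)] * 4

res = minimize(discrimination_info, q, bounds=bounds, constraints=constraints)
p_hat = res.x
```

By symmetry the inequality binds at p_0 = 0.4 and the remaining mass is spread uniformly (0.2 each), which is the characteristic behavior of MDI: stay as close to the reference distribution as the constraints allow.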


1994 ◽  
Vol 6 (1) ◽  
pp. 161-180 ◽  
Author(s):  
Andrew H. Gee ◽  
Richard W. Prager

The often disappointing performance of optimizing neural networks can be partly attributed to the rather ad hoc manner in which problems are mapped onto them for solution. In this paper a rigorous mapping is described for quadratic 0-1 programming problems with linear equality and inequality constraints, this being the most general class of problem such networks can solve. The problem's constraints define a polyhedron P containing all the valid solution points, and the mapping guarantees strict confinement of the network's state vector to P. However, forcing convergence to a 0-1 point within P is shown to be generally intractable, rendering the Hopfield and similar models inapplicable to the vast majority of problems. A modification of the tabu learning technique is presented as a more coherent approach to general problem solving with neural networks. When tested on a collection of knapsack problems, the modified dynamics produced some very encouraging results.
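For scale, the problem class the mapping targets, quadratic 0-1 programming with linear constraints, can be stated and brute-forced on a toy instance. The data below are invented; the paper's contribution is the network mapping and its confinement guarantee, not enumeration, which is only viable at this tiny size.

```python
import itertools
import numpy as np

# Toy instance of  min x'Qx + c'x  s.t.  A x <= b,  x in {0,1}^n.
# The constraint polyhedron P = {x : A x <= b, 0 <= x <= 1} is what the
# paper's mapping confines the network state vector to.
Q = np.array([[0.0, 1.0, -2.0],
              [1.0, 0.0,  0.5],
              [-2.0, 0.5, 0.0]])
c = np.array([-1.0, -1.0, -1.0])
A = np.array([[1.0, 1.0, 1.0]])             # linear inequality: select at most two items
b = np.array([2.0])

best_x, best_val = None, np.inf
for bits in itertools.product([0, 1], repeat=3):
    x = np.array(bits, dtype=float)
    if np.all(A @ x <= b):                  # keep only points inside P
        val = x @ Q @ x + c @ x
        if val < best_val:
            best_x, best_val = x, val
```

Enumeration costs 2^n feasibility and objective evaluations, which is exactly why heuristics such as Hopfield dynamics or the modified tabu learning studied here are of interest, and why the intractability of forcing convergence to a 0-1 vertex of P is a meaningful negative result.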


2020 ◽  
Vol 2020 ◽  
pp. 1-14 ◽  
Author(s):  
Yuefang Lian ◽  
Jinchuan Zhou ◽  
Jingyong Tang ◽  
Zhongfeng Sun

1-bit compressed sensing (CS) is an important class of sparse optimization problems. This paper focuses on the stability theory for 1-bit CS with a quadratic constraint. The model is rebuilt by reformulating the sign measurements as linear equality and inequality constraints, and the quadratic constraint with noise is approximated by polytopes to any level of accuracy. A new concept, the restricted weak RSP of a transposed sensing matrix with respect to the measurement vector, is introduced. Our results show that this concept is a necessary and sufficient condition for the stability of noiseless 1-bit CS, and a sufficient condition when noise is present.
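The reformulation step mentioned in the abstract, turning sign measurements into linear constraints, can be sketched directly: observing y_i = sign(a_i . x) is equivalent to the linear inequality y_i * (a_i . x) >= 0 (with an equality when y_i = 0). The snippet below is illustrative only, with a randomly generated sensing matrix and sparse signal; it shows the consistency constraints, not the paper's stability analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 8))            # sensing matrix (rows a_i)
x_true = np.zeros(8)
x_true[[1, 5]] = [1.0, -0.5]                # a sparse signal

# 1-bit measurements: only the signs of the linear measurements are kept.
y = np.sign(A @ x_true)

# Each sign observation y_i = sign(a_i . x) is equivalent to the linear
# inequality y_i * (a_i . x) >= 0; stacking them gives the constraint set
# any consistent reconstruction must satisfy.
consistent = bool(np.all(y * (A @ x_true) >= 0))   # x_true satisfies them by construction
```

Note that these constraints are scale-invariant (x and 2x produce the same signs), which is why 1-bit CS models add a normalization or, as in this paper, a quadratic constraint.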

