penalty parameter
Recently Published Documents


TOTAL DOCUMENTS: 101 (FIVE YEARS: 32)

H-INDEX: 13 (FIVE YEARS: 2)

2021 ◽  
Author(s):  
Pengwen Chen

Abstract: Phase retrieval can be expressed as a non-convex constrained optimization problem that identifies a phase minimizer on a torus. Many iterative transform techniques have been proposed to find this minimizer, e.g., the relaxed averaged alternating reflections (RAAR) algorithm. In this paper, we present an optimization viewpoint on the RAAR algorithm: RAAR is an alternating direction method of multipliers (ADMM) with a penalty parameter. Paired with multipliers (dual vectors), phase vectors in the primal space are lifted to higher-dimensional vectors, and the RAAR algorithm becomes a continuation algorithm that searches for local saddles in the primal-dual space. The dual iteration approximates a gradient ascent flow, which drives the corresponding local minimizers into a positive-definite Hessian region. By altering the penalty parameter, RAAR eliminates the stagnation of these local minimizers in the primal space and thus screens out many stationary points that are not local minimizers.
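
For orientation, the sketch below writes out one common form of the RAAR iteration for phase retrieval (reflectors built from a Fourier-modulus projection and a support projection), with the relaxation parameter beta that the abstract interprets through an ADMM penalty parameter. The 1-D toy signal, the support constraint, and the parameter values are illustrative assumptions, not the paper's setup.

```python
# Minimal RAAR sketch (illustrative, not the paper's code).
# Assumptions: 1-D signal, measured Fourier magnitudes `mags`, known support mask.
import numpy as np

def project_modulus(x, mags, eps=1e-12):
    """Replace the Fourier magnitudes of x by the measured ones, keep the phases."""
    X = np.fft.fft(x)
    return np.fft.ifft(mags * X / (np.abs(X) + eps))

def project_support(x, support):
    """Zero the signal outside the known support."""
    return x * support

def raar_step(x, mags, support, beta=0.8):
    """One RAAR iteration: x <- beta/2 (R_S R_M + I) x + (1 - beta) P_M x,
    with reflectors R = 2P - I."""
    Pm = project_modulus(x, mags)
    Rm = 2.0 * Pm - x
    Rs = 2.0 * project_support(Rm, support) - Rm
    return 0.5 * beta * (Rs + x) + (1.0 - beta) * Pm

# Toy usage: iterate from a random start given magnitudes of a supported signal.
rng = np.random.default_rng(0)
true = np.zeros(64); true[:16] = rng.standard_normal(16)
mags = np.abs(np.fft.fft(true))
support = np.zeros(64); support[:16] = 1.0
x = rng.standard_normal(64).astype(complex)
for _ in range(500):
    x = raar_step(x, mags, support, beta=0.8)
```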


Geophysics ◽  
2021 ◽  
pp. 1-57
Author(s):  
Ali Gholami ◽  
Hossein S. Aghamiry ◽  
Stéphane Operto

The search space of Full Waveform Inversion (FWI) can be extended via a relaxation of the wave equation to increase the linear regime of the inversion. This wave-equation relaxation is implemented by solving jointly (in a least-squares sense) the wave equation weighted by a penalty parameter and the observation equation such that the reconstructed wavefields closely match the data, hence preventing cycle skipping at receivers. Then, the subsurface parameters are updated by minimizing the temporal and spatial source extension generated by the wave-equation relaxation to push back the data-assimilated wavefields toward the physics. This extended formulation of FWI has been efficiently implemented in the frequency domain with the augmented Lagrangian method, where the overdetermined systems of the data-assimilated wavefields can be solved separately for each frequency with linear algebra methods and the sensitivity of the optimization to the penalty parameter is mitigated through the action of the Lagrange multipliers. Applying this method in the time domain is, however, hampered by two main issues: the computation of data-assimilated wavefields with explicit time-stepping schemes and the storage of the Lagrange multipliers capturing the history of the source residuals in the state space. These two issues are solved by recognizing that the source residuals on the right-hand side of the extended wave equation, when formulated in a form suitable for explicit time stepping, are related to the extended data residuals through an adjoint equation. This relationship first allows us to relate the extended data residuals to the reduced data residuals through a normal equation in the data space. Once the extended data residuals have been estimated by solving (exactly or approximately) this normal equation, the data-assimilated wavefields are computed with explicit time-stepping schemes by cascading an adjoint and a forward simulation.
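
As a schematic illustration of the joint (penalized) least-squares solve described above, the sketch below reconstructs a data-assimilated wavefield from a generic wave-equation operator A, source b, sampling operator P, and recorded data d. The random toy operators and the variable names are assumptions for exposition, not the authors' implementation.

```python
# Schematic data-assimilated wavefield reconstruction for extended FWI (sketch only).
import numpy as np

def reconstruct_wavefield(A, b, P, d, lam):
    """Solve min_u  lam**2 * ||A u - b||^2 + ||P u - d||^2  via its normal equations.
    A: discretized wave-equation operator (n x n), b: source term,
    P: observation/sampling operator (r x n), d: recorded data, lam: penalty weight."""
    lhs = lam**2 * A.T @ A + P.T @ P
    rhs = lam**2 * A.T @ b + P.T @ d
    return np.linalg.solve(lhs, rhs)

# Toy usage with stand-in operators, just to exercise the shapes.
rng = np.random.default_rng(1)
n, r = 50, 5
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # stand-in for a Helmholtz-type matrix
b = rng.standard_normal(n)
P = np.zeros((r, n)); P[np.arange(r), rng.choice(n, r, replace=False)] = 1.0  # receivers
d = rng.standard_normal(r)
u = reconstruct_wavefield(A, b, P, d, lam=1.0)
# The source residual A @ u - b is what the model update then tries to minimize.
```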


Author(s):  
Petro Stetsyuk ◽  
Andreas Fischer ◽  
Olha Khomiak

A linear program can be equivalently reformulated as an unconstrained nonsmooth minimization problem whose objective is the sum of the original objective and a penalty function with a sufficiently large penalty parameter. The article presents two methods for choosing this parameter. The first one applies to linear programs with usual linear inequality constraints. For this case, we use a corresponding theorem by N.Z. Shor on the equivalence of a convex program to an unconstrained nonsmooth minimization problem. The second method is for linear programs of a special type, in which every inequality requires a linear expression on the left-hand side to be less than or equal to a positive constant on the right-hand side. For this special type, we use a corresponding theorem of B.N. Pshenichny on establishing a penalty parameter for convex programs. For differently sized linear programs of the special type, we demonstrate that suitable penalty parameters can be computed by a procedure in GNU Octave based on GLPK software.
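
The sketch below spells out the exact-penalty reformulation in question for a tiny example: the constrained LP is replaced by an unconstrained nonsmooth objective, and for a penalty parameter larger than the largest dual multiplier the two problems share their minimizers. The specific LP, the value M = 50, and the use of SciPy solvers are illustrative assumptions, not the paper's Octave/GLPK procedure.

```python
# Exact-penalty reformulation of an LP: min c^T x s.t. A x <= b  becomes
# min c^T x + M * sum_i max(0, (A x - b)_i)  for a sufficiently large M.
import numpy as np
from scipy.optimize import linprog, minimize

c = np.array([-1.0, -2.0])
A = np.array([[1.0, 1.0], [2.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([4.0, 6.0, 0.0, 0.0])

def penalized(x, M):
    # Nonsmooth exact penalty: objective plus M times the total constraint violation.
    return c @ x + M * np.maximum(A @ x - b, 0.0).sum()

ref = linprog(c, A_ub=A, b_ub=b)                      # constrained reference solution
pen = minimize(penalized, x0=np.zeros(2), args=(50.0,),
               method="Nelder-Mead")                  # derivative-free, tolerates the kinks
print(ref.x, pen.x)  # with M large enough, both should land at the same vertex
```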


Author(s):  
Zhengxin Huang ◽  
Yuren Zhou ◽  
Chuan Luo ◽  
Qingwei Lin

The decomposition approach is an important component of the multi-objective evolutionary algorithm based on decomposition (MOEA/D), which is a popular method for handling many-objective optimization problems (MaOPs). This paper presents a theoretical analysis of the convergence ability of the typical weighted sum (WS), Tchebycheff (TCH), and penalty-based boundary intersection (PBI) approaches in a basic MOEA/D for solving two benchmark MaOPs. The results show that, using WS, the algorithm can always find an optimal solution for any subproblem in polynomial expected runtime. In contrast, the algorithm needs at least exponential expected runtime for some subproblems if using TCH or PBI. Moreover, our analyses reveal an obvious shortcoming of WS: the optimal solutions of different subproblems easily correspond to the same solution. In addition, the analysis indicates that, when using PBI, a small value of the penalty parameter is a good choice for faster convergence to the Pareto front, but it may lose diversity. This study reveals some optimization behaviors of the three typical decomposition approaches in the well-known MOEA/D framework for solving MaOPs.
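
For reference, the three scalarizations compared in this analysis can be written as plain functions, as in the sketch below (standard MOEA/D definitions, not code from the paper); `f` is the objective vector F(x), `w` the weight vector, and `z` the ideal reference point.

```python
# Weighted sum, Tchebycheff, and PBI scalarizations used by MOEA/D.
import numpy as np

def weighted_sum(f, w):
    return np.dot(w, f)

def tchebycheff(f, w, z):
    return np.max(w * np.abs(f - z))

def pbi(f, w, z, theta=5.0):
    # d1: distance along the weight direction; d2: perpendicular distance from it.
    wn = w / np.linalg.norm(w)
    d1 = np.dot(f - z, wn)
    d2 = np.linalg.norm(f - z - d1 * wn)
    return d1 + theta * d2   # small theta favors fast convergence, possibly losing diversity

f = np.array([0.3, 0.7]); w = np.array([0.5, 0.5]); z = np.zeros(2)
print(weighted_sum(f, w), tchebycheff(f, w, z), pbi(f, w, z, theta=0.5))
```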


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Zhengshan Dong ◽  
Geng Lin ◽  
Niandong Chen

The penalty decomposition method is an effective and versatile method for sparse optimization and has been successfully applied to compressed sensing, sparse logistic regression, sparse inverse covariance selection, low-rank minimization, image restoration, and so on. As the penalty parameter increases, solving the sequence of penalty subproblems required by the penalty decomposition method may be time consuming. In this paper, an acceleration of the penalty decomposition method is proposed for the sparse optimization problem: for each penalty parameter, the method finds only inexact solutions to the subproblems. Computational experiments on a number of test instances demonstrate the effectiveness and efficiency of the proposed method in accurately generating sparse and redundant representations of one-dimensional random signals.
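
To make the increasing-penalty structure concrete, the sketch below applies the basic penalty decomposition idea to sparse least squares, alternating a smooth block update with a hard-thresholding step and then growing the penalty parameter. It is a hedged illustration of the plain method, not the paper's accelerated (inexact) variant, and the parameter values are arbitrary.

```python
# Penalty decomposition sketch for  min ||A x - b||^2  s.t.  ||x||_0 <= k.
import numpy as np

def hard_threshold(v, k):
    """Keep the k largest-magnitude entries of v, zero out the rest."""
    y = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    y[idx] = v[idx]
    return y

def penalty_decomposition(A, b, k, rho=1.0, rho_growth=2.0, outer=20, inner=10):
    n = A.shape[1]
    x = np.zeros(n); y = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(outer):                      # increasing-penalty loop
        for _ in range(inner):                  # block-coordinate minimization
            x = np.linalg.solve(AtA + rho * np.eye(n), Atb + rho * y)   # smooth x-block
            y = hard_threshold(x, k)                                    # sparse y-block
        rho *= rho_growth
    return y

# Toy compressed-sensing usage: recover a k-sparse signal from random measurements.
rng = np.random.default_rng(2)
n, m, k = 100, 40, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n); x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true
x_hat = penalty_decomposition(A, b, k)
```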


Author(s):  
Mitsuhiro Nishijima ◽  
Kazuhide Nakata

Abstract: The problem of sensor network localization (SNL) can be formulated as a semidefinite programming problem with a rank constraint. We propose a new method for solving such SNL problems. We factorize the semidefinite matrix with the rank constraint into a product of two matrices via the Burer–Monteiro factorization. Then, we add the difference of the two matrices, weighted by a penalty parameter, to the objective function, thereby reformulating SNL as an unconstrained multiconvex optimization problem, to which we apply the block coordinate descent method. We also provide theoretical analyses of the proposed method: each subproblem solved sequentially by the block coordinate descent method can be solved analytically, and the sequence generated by the proposed algorithm converges to a stationary point of the objective function. We also give a range of the penalty parameter for which the two matrices used in the factorization agree at any accumulation point. Numerical experiments confirm that the proposed method does inherit the rank constraint and that it estimates sensor positions faster than other methods without sacrificing the estimation accuracy, especially when the measured distances contain errors.
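
The sketch below illustrates the penalized-factorization idea in generic form: the matrix variable is factored as Z = V Wᵀ, the penalty term rho·||V − W||²_F is added to the objective, and block coordinate descent alternates two closed-form least-squares updates. The simple quadratic objective ||V Wᵀ − T||²_F is an assumed stand-in for the actual SNL data-fit term, so this is a toy of the mechanism, not the paper's algorithm.

```python
# Block coordinate descent on a penalized Burer-Monteiro factorization (toy sketch).
import numpy as np

def bcd_penalized_factorization(T, r, rho=1.0, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    n = T.shape[0]
    V = rng.standard_normal((n, r)); W = V.copy()
    for _ in range(iters):
        # V-block: minimize ||V W^T - T||^2 + rho ||V - W||^2  (linear least squares in V)
        V = (T @ W + rho * W) @ np.linalg.inv(W.T @ W + rho * np.eye(r))
        # W-block: the same problem with the roles of V and W exchanged
        W = (T.T @ V + rho * V) @ np.linalg.inv(V.T @ V + rho * np.eye(r))
    return V, W

# For a sufficiently large rho, V and W (approximately) agree at convergence, so
# V @ W.T behaves like a rank-r positive semidefinite approximation of T.
T = np.diag([3.0, 2.0, 1.0, 0.2, 0.1])
V, W = bcd_penalized_factorization(T, r=2, rho=10.0)
```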


2021 ◽  
Vol 13 (7) ◽  
pp. 1349
Author(s):  
Laleh Ghayour ◽  
Aminreza Neshat ◽  
Sina Paryani ◽  
Himan Shahabi ◽  
Ataollah Shirzadi ◽  
...  

With the development of remote sensing algorithms and increased access to satellite data, generating up-to-date, accurate land use/land cover (LULC) maps has become increasingly feasible for evaluating and managing changes in land cover created by changes to ecosystems and land use. The main objective of our study is to evaluate and compare the performance of Support Vector Machine (SVM), Artificial Neural Network (ANN), Maximum Likelihood Classification (MLC), Minimum Distance (MD), and Mahalanobis (MH) algorithms for generating a LULC map from Sentinel 2 and Landsat 8 data. Further, we also investigate the effect of a penalty parameter on the SVM results. Our study uses different kernel functions and hidden layers for the SVM and ANN algorithms, respectively. We generated the training and validation datasets from Google Earth images and GPS data prior to pre-processing the satellite data. In the next phase, we classified the images using the training data and the algorithms. Finally, to evaluate the outcomes, we used the validation data to generate a confusion matrix of the classified images. Our results showed that, with optimal tuning parameters, the SVM classifier yielded the highest overall accuracy (OA) of 94%, performing better than the other methods on both satellite datasets. In addition, for our scenes, the Sentinel 2 data were slightly more accurate than Landsat 8. The parametric algorithms MD and MLC provided the lowest accuracies of 80.85% and 74.68% for the Sentinel 2 and Landsat 8 data, respectively. Our evaluation of the SVM tuning parameters showed that the linear kernel with a penalty parameter of 150 for Sentinel 2 and of 200 for Landsat 8 yielded the highest accuracies. Further, the ANN classification showed that increasing the number of hidden layers drastically reduces classification accuracy for both datasets, reducing to zero for three hidden layers.
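
As a minimal illustration of the tuning described above, the sketch below fits a linear-kernel SVM with the penalty parameter (called C in scikit-learn) set to 150, the best Sentinel 2 setting reported here, and evaluates it with a confusion matrix. The synthetic "pixels" with six spectral bands and three land-cover classes are stand-in data, not the study's imagery.

```python
# Linear SVM with an explicit penalty parameter, evaluated by a confusion matrix.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, accuracy_score

rng = np.random.default_rng(3)
# Fake pixels: 6 spectral bands, 3 land-cover classes with shifted means.
X = rng.standard_normal((600, 6)) + np.repeat(np.arange(3)[:, None] * 1.5, 200, axis=0)
y = np.repeat(np.arange(3), 200)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = SVC(kernel="linear", C=150)           # C is the SVM penalty parameter
clf.fit(X_train, y_train)
pred = clf.predict(X_test)
print(confusion_matrix(y_test, pred))       # per-class confusion, as in the study's evaluation
print("overall accuracy:", accuracy_score(y_test, pred))
```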

