surrogate function
Recently Published Documents

TOTAL DOCUMENTS: 21 (five years: 11)
H-INDEX: 5 (five years: 1)

2020 ◽ pp. 002029402096423
Author(s): Xu Zhang, Xu Zhang, He Wang, Pengyu Guo, Bing Xiao

A Laplace ℓ1 robust Student's t-filter is presented for satellite attitude state estimation under severe measurement noise and modeling error. Although the Student's t-filter (STF) can handle heavy-tailed measurement noise, it cannot address unknown modeling error and is sensitive to its degree-of-freedom (DOF) parameter. Hence, the measurement covariance is updated using the maximum correntropy criterion so that the robust filter accommodates the covariance of the unknown modeling error. Moreover, a Laplace distribution is introduced to reduce the influence of the DOF parameter by forming a surrogate function for the resulting optimization problem, and a majorization-minimization approach is formulated to solve this problem, yielding the proposed filtering algorithm within the STF framework. Numerical simulations of the proposed attitude estimation scheme are performed and compared against third- and fifth-order cubature Kalman filters.
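As a rough illustration of the majorization-minimization step described above, the sketch below majorizes a Laplace (ℓ1) measurement cost by a quadratic at the current iterate, so each iteration reduces to a reweighted least-squares solve. It is a generic sketch rather than the authors' filter; `H`, `y`, `x0`, and the iteration settings are hypothetical placeholders.

```python
# Minimal sketch (not the paper's filter): MM for a Laplace (ell-1) cost.
# |r| is majorized at r_k by r^2 / (2|r_k|) + |r_k|/2, so each MM step is a
# reweighted least-squares update.
import numpy as np

def mm_laplace_least_squares(H, y, x0, n_iter=50, eps=1e-8):
    """Minimize sum_i |y_i - H_i x| by iteratively minimizing a quadratic majorizer."""
    x = x0.copy()
    for _ in range(n_iter):
        r = y - H @ x                          # current residuals
        w = 1.0 / np.maximum(np.abs(r), eps)   # majorizer weights 1/|r_k| (constant factor absorbed)
        W = np.diag(w)
        # Weighted least squares: minimize sum_i w_i * (y_i - H_i x)^2
        x = np.linalg.solve(H.T @ W @ H, H.T @ W @ y)
    return x

# Tiny usage example with synthetic data
rng = np.random.default_rng(0)
H = rng.normal(size=(30, 3))
x_true = np.array([1.0, -2.0, 0.5])
y = H @ x_true + rng.laplace(scale=0.1, size=30)
print(mm_laplace_least_squares(H, y, np.zeros(3)))
```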


2020
Author(s): Tomohiro Harada, Misaki Kaidan, Ruck Thawonmas

This paper investigates the integration of a surrogate-assisted multi-objective evolutionary algorithm (MOEA) with a parallel computation scheme to reduce the computing time needed to obtain optimal solutions with evolutionary algorithms (EAs). A surrogate-assisted MOEA solves multi-objective optimization problems while estimating the quality of candidate solutions with a surrogate function produced by a machine learning model. This paper uses the extreme-learning surrogate-assisted MOEA/D (ELMOEA/D), which combines the well-known MOEA/D algorithm with the extreme learning machine (ELM). Parallelization of an MOEA, on the other hand, evaluates solutions concurrently on multiple computing nodes to accelerate the optimization process. We consider synchronous and asynchronous parallel MOEAs as master-slave parallelization schemes for ELMOEA/D and compare the two variants experimentally on multi-objective optimization problems. The experiment simulates two settings for the evaluation time of solutions: in the first, evaluation times are drawn from normal distributions with different variances; in the second, the evaluation time is correlated with the objective function value. We compare the quality of the solutions obtained by the parallel ELMOEA/D variants within a fixed computing budget. The experimental results show that parallelizing ELMOEA/D significantly reduces the computation time and that the asynchronous parallelization scheme reaches high-quality solutions faster than the synchronous parallel ELMOEA/D.
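To make the contrast between the two schemes concrete, here is a schematic master-slave sketch, assuming hypothetical `evaluate`, `propose_solution`, and `update_population` stand-ins rather than the actual ELMOEA/D components: the synchronous master blocks until a whole batch of evaluations finishes, while the asynchronous master reacts to each completed evaluation immediately.

```python
# Schematic master-slave sketch (not the ELMOEA/D code) of synchronous vs.
# asynchronous parallel evaluation.
import random, time
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

def evaluate(x):                      # stand-in for an expensive objective evaluation
    time.sleep(random.uniform(0.01, 0.05))
    return sum(v * v for v in x)

def propose_solution():               # stand-in for surrogate-assisted offspring generation
    return [random.uniform(-1, 1) for _ in range(3)]

def update_population(pop, x, f):     # stand-in for an MOEA/D-style population update
    pop.append((x, f))

def synchronous_master(n_workers=4, n_generations=5):
    pop = []
    with ThreadPoolExecutor(n_workers) as ex:
        for _ in range(n_generations):
            xs = [propose_solution() for _ in range(n_workers)]
            for x, f in zip(xs, ex.map(evaluate, xs)):   # blocks until the whole batch is done
                update_population(pop, x, f)
    return pop

def asynchronous_master(n_workers=4, n_evaluations=20):
    pop = []
    with ThreadPoolExecutor(n_workers) as ex:
        pending = {ex.submit(evaluate, x): x
                   for x in (propose_solution() for _ in range(n_workers))}
        done_count = 0
        while done_count < n_evaluations:
            done, _ = wait(pending, return_when=FIRST_COMPLETED)
            for fut in done:
                update_population(pop, pending.pop(fut), fut.result())
                done_count += 1
                if done_count + len(pending) < n_evaluations:   # keep the workers busy
                    x = propose_solution()
                    pending[ex.submit(evaluate, x)] = x
    return pop
```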


2020
Author(s): Yunyi Li, Li Liu, Yu Zhao, Xiefeng Cheng, Guan Gui

Group sparse representation (GSR) based methods have achieved great success in various image recovery tasks, where recovery can be cast as a low-rank matrix minimization problem. As the most widely used convex surrogate for the rank function, the nuclear norm usually leads to an over-shrinking problem, since the standard soft-thresholding operator shrinks all singular values equally. To improve the performance of traditional sparse-representation-based image compressive sensing (CS), we propose a generalized CS framework based on the GSR model, leading to a nonconvex, nonsmooth low-rank minimization problem. The popular ℓ2-norm and an M-estimator are employed as data-fitting terms for the standard and robust image CS problems, respectively. For a better approximation of the rank of the group matrix, a family of nuclear norms is employed to address the over-shrinking problem. Moreover, we propose a flexible and effective iterative weighting strategy to control the shrinkage applied to each singular value. We then develop an iteratively reweighted nuclear norm algorithm for the generalized framework within an alternating direction method of multipliers (ADMM) scheme, termed GSR-ADMM-IRNN. Experimental results demonstrate that the proposed CS framework achieves favorable reconstruction performance compared with current state-of-the-art methods and that the robust CS (RCS) framework suppresses outliers effectively.
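The reweighting idea behind the over-shrinking fix can be pictured with a weighted singular value thresholding step, in which larger singular values receive smaller weights and are shrunk less than under the standard operator. The sketch below is a generic illustration under assumed parameters (`lam`, `eps`), not the GSR-ADMM-IRNN algorithm itself.

```python
# Minimal sketch of weighted singular value thresholding, the building block of
# iteratively reweighted nuclear norm minimization.
import numpy as np

def weighted_svt(X, lam=1.0, eps=1e-3):
    """One reweighted proximal step: argmin_Z 0.5*||Z - X||_F^2 + lam * sum_i w_i * sigma_i(Z),
    with weights w_i = 1 / (sigma_i(X) + eps) fixed at the current iterate."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    w = 1.0 / (s + eps)                      # small singular values -> large weights -> shrunk more
    s_shrunk = np.maximum(s - lam * w, 0.0)  # weighted soft-thresholding of the spectrum
    return U @ np.diag(s_shrunk) @ Vt

def svt(X, lam=1.0):
    """Standard operator for comparison: every singular value is shrunk by the same amount."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt
```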



Optimization ◽ 2019 ◽ Vol. 69 (5) ◽ pp. 1117-1149
Author(s): Zisheng Liu, Jicheng Li, Xuenian Liu

2019
Author(s): Andrey Babkin

Matrix factorization is a widely used technique for modeling pairwise, matrix-like data and is frequently applied in pattern recognition, topic analysis, and other areas. Side information is often available, but utilizing it is problematic within a pure matrix factorization framework. This article proposes a novel method for incorporating side information by combining an arbitrary nonlinear quantile regression model with matrix factorization under a Bayesian framework. A gradient-free optimization procedure with a novel surrogate function is used to solve the resulting MAP estimation problem. The model's performance is evaluated on real datasets.
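A hedged sketch of the kind of MAP objective described above, combining a matrix factorization fit, Gaussian priors on the factors, and a quantile-regression (pinball) loss on side information, is given below; the paper's concrete model, priors, and gradient-free surrogate-based optimizer are not reproduced, and all names are illustrative placeholders.

```python
# Illustrative MAP objective only (not the paper's model).
import numpy as np

def pinball_loss(y_true, y_pred, tau=0.5):
    """Quantile-regression loss: tau*max(e,0) + (1-tau)*max(-e,0), with e = y_true - y_pred."""
    e = y_true - y_pred
    return np.mean(np.maximum(tau * e, (tau - 1.0) * e))

def map_objective(U, V, R, mask, side_targets, side_model, lam=0.1, gamma=1.0, tau=0.5):
    """Negative log-posterior (up to constants): squared reconstruction error on observed
    entries + Gaussian priors on the factors + a pinball loss tying the factors to side
    information through a user-supplied (possibly nonlinear) model."""
    recon = U @ V.T
    fit = np.sum(mask * (R - recon) ** 2)              # likelihood term on observed entries
    prior = lam * (np.sum(U ** 2) + np.sum(V ** 2))    # Gaussian priors on the factors
    side = gamma * pinball_loss(side_targets, side_model(U), tau)
    return fit + prior + side
```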


Author(s): Hu Zhang, Pan Zhou, Yi Yang, Jiashi Feng

Majorization-minimization (MM) algorithms optimize an objective function by iteratively minimizing a majorizing surrogate and offer an attractively fast convergence rate for convex problems. However, their convergence behavior for non-convex problems remains unclear. In this paper, we generalize the MM surrogate function from one that strictly upper-bounds the objective to one that upper-bounds it in expectation. With this generalized surrogate, we develop a new optimization algorithm, termed SPI-MM, that leverages the recently proposed SPIDER estimator for more efficient non-convex optimization. We prove that for finite-sum problems the SPI-MM algorithm converges to a stationary point with deterministic and lower stochastic gradient complexity. To the best of our knowledge, this work gives the first non-asymptotic convergence analysis for MM-like algorithms in general non-convex optimization. Extensive empirical studies on non-convex logistic regression and sparse PCA demonstrate the efficiency of the proposed algorithm and validate our theoretical results.
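The ingredients combined here can be pictured with a short sketch: a finite-sum objective, a SPIDER-style variance-reduced gradient estimator (a periodic full gradient followed by incremental mini-batch corrections), and an MM-style step that minimizes a simple quadratic surrogate around the current iterate. This illustrates the recursion only, not the SPI-MM algorithm or its complexity guarantees; step sizes, batch sizes, and the `grad_i` interface are arbitrary placeholders.

```python
# Sketch of MM steps driven by a SPIDER-style variance-reduced gradient estimate.
import numpy as np

def spider_mm(grad_i, n, x0, n_iters=100, refresh=10, batch=8, L=1.0, seed=0):
    """grad_i(i, x) returns the gradient of the i-th component at x; n is the number of components."""
    rng = np.random.default_rng(seed)
    x_prev, x = x0.copy(), x0.copy()
    v = np.zeros_like(x0)
    for k in range(n_iters):
        if k % refresh == 0:
            # Periodic full-gradient refresh
            v = np.mean([grad_i(i, x) for i in range(n)], axis=0)
        else:
            # SPIDER recursion: correct the previous estimate with a mini-batch difference
            idx = rng.integers(0, n, size=batch)
            v = v + np.mean([grad_i(i, x) - grad_i(i, x_prev) for i in idx], axis=0)
        # Minimizing the quadratic surrogate g(y) = f(x) + <v, y - x> + (L/2)||y - x||^2
        # gives the gradient-style update y = x - v / L.
        x_prev, x = x, x - v / L
    return x
```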


2019 ◽ Vol 2019 ◽ pp. 1-13
Author(s): Ming-Ming Liu, Chun-Xi Dong, Yang-Yang Dong, Guo-Qing Zhao

This paper proposes a super-resolution two-dimensional (2D) direction-of-arrival (DOA) estimation algorithm for a rectangular array based on optimization of the atomic ℓ0 norm and a series of relaxations. The atomic ℓ0 norm of the array response counts the minimum number of sources and is derived from the atomic norm minimization (ANM) problem. However, using ANM for 2D angle estimation limits the resolution and incurs high computational complexity. Although the improved decoupled atomic norm minimization (DAM) algorithm reduces the computational burden, its angular resolution remains relatively low. To overcome these limitations, we minimize the atomic ℓ0 norm directly, which we show to be equivalent to a decoupled rank-minimization problem in positive semidefinite (PSD) form. Our goal is to solve this rank-minimization problem and recover two decoupled Toeplitz matrices in which the azimuth and elevation angles of interest are encoded. Since rank minimization is NP-hard, a novel sparse surrogate function is proposed to effectively approximate the two decoupled rank functions, and the resulting relaxed optimization problem is solved with the majorization-minimization (MM) method. The proposed algorithm offers greatly improved resolution while maintaining the same computational complexity as DAM. Moreover, angle estimation is possible from a single snapshot without prior knowledge of the number of sources, and the iterative nature of the algorithm makes it robust to noise. In addition, the proposed surrogate function achieves local convergence faster than existing surrogates.
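As a generic illustration of how a smooth surrogate for the rank of a PSD matrix is handled by MM, the sketch below uses the classical log-det surrogate: it is concave, so its linearization tr(W_k X) with W_k = (X_k + εI)^{-1} majorizes it up to a constant, and each MM step reduces to a convex weighted-trace problem over the feasible set. This is not the specific sparse surrogate or the decoupled Toeplitz recovery proposed in the paper.

```python
# Generic MM relaxation pattern for rank minimization on the PSD cone
# (log-det surrogate used as a stand-in; not the paper's surrogate).
import numpy as np

def logdet_surrogate(X, eps=1e-2):
    """Smooth surrogate for rank(X) on the PSD cone."""
    return np.linalg.slogdet(X + eps * np.eye(X.shape[0]))[1]

def mm_weight(X_k, eps=1e-2):
    """Weight matrix for the next MM step: gradient of the concave surrogate at X_k,
    so trace(mm_weight(X_k) @ X) upper-bounds the surrogate up to a constant."""
    return np.linalg.inv(X_k + eps * np.eye(X_k.shape[0]))

# Each MM iteration then minimizes trace(mm_weight(X_k) @ X) over the feasible set
# (here, the PSD/Toeplitz constraints of the DOA problem), which is a convex problem.
```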

