Sparsity-Aware Noise Subspace Fitting for DOA Estimation

Sensors ◽  
2019 ◽  
Vol 20 (1) ◽  
pp. 81
Author(s):  
Chundi Zheng ◽  
Huihui Chen ◽  
Aiguo Wang

We propose a sparsity-aware noise subspace fitting (SANSF) algorithm for direction-of-arrival (DOA) estimation using an array of sensors. The proposed SANSF algorithm is developed from the optimally weighted noise subspace fitting criterion. Our formulation leads to a convex linearly constrained quadratic programming (LCQP) problem that enjoys global convergence without the need for accurate initialization and can be easily solved by existing LCQP solvers. By combining the weighted quadratic objective function, the ℓ1 norm, and non-negativity constraints, the proposed SANSF algorithm enhances the sparsity of the solution. Numerical results based on simulations and on real measured ultrasonic and radar data show that, compared to existing sparsity-aware methods, SANSF provides enhanced resolution with a lower computational burden.
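The abstract's core computational object is a non-negative, ℓ1-regularized quadratic program. A minimal sketch of that problem class follows, using CVXPY; the matrix G, linear term c, and weight lam are placeholder data, not the paper's optimally weighted criterion.

```python
# Minimal sketch of a non-negative, l1-regularized LCQP of the kind the
# abstract describes; G, c, and lam are placeholders, not the paper's
# optimally weighted noise-subspace-fitting terms.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n = 60                                    # size of the candidate-DOA grid (assumed)
G = rng.standard_normal((20, n))          # stand-in for the fitting matrix
c = rng.standard_normal(n)                # stand-in linear term
lam = 0.5                                 # sparsity weight (assumed)

p = cp.Variable(n, nonneg=True)           # non-negative spatial spectrum
# With p >= 0, the l1 norm reduces to sum(p), so the problem stays a QP.
objective = cp.Minimize(cp.sum_squares(G @ p) + c @ p + lam * cp.sum(p))
cp.Problem(objective).solve()
print("non-zero entries:", int(np.sum(p.value > 1e-6)))
```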

2018 ◽  
Vol 2018 ◽  
pp. 1-8 ◽  
Author(s):  
Feng-Gang Yan ◽  
Shuai Liu ◽  
Jun Wang ◽  
Ming Jin

Most popular techniques for super-resolution direction of arrival (DOA) estimation rely on an eigen-decomposition (EVD) or a singular value decomposition (SVD) to determine the signal/noise subspace, which is computationally expensive for real-time applications. A two-step root multiple signal classification (TS-root-MUSIC) algorithm is proposed that avoids the complex EVD/SVD computation on a uniform linear array (ULA), under the mild assumption that the number of signals is less than half the number of sensors. The ULA is divided into two subarrays, and three noise-free cross-correlation matrices are constructed from data collected by the two subarrays. A low-complexity linear operation is derived to obtain a rough noise subspace for a first-step DOA estimate. The performance is further enhanced in the second step by using the first-step result to refine the previously estimated noise subspace at slightly increased complexity. The new technique provides root mean square error (RMSE) performance close to that of root-MUSIC with a reduced computational burden, as verified by numerical simulations.
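For context, the step that root-MUSIC and its two-step variant share is polynomial rooting on a noise subspace. The sketch below obtains that subspace from a plain EVD purely for illustration; the paper's contribution is precisely to replace the EVD/SVD with cross-correlation-based linear operations.

```python
# Sketch of the polynomial-rooting step shared by root-MUSIC variants.
# The noise subspace here comes from a plain EVD purely for illustration;
# the paper's point is to obtain it via cross-correlations instead.
import numpy as np

def root_music_doas(R, n_sources, d=0.5):
    """DOAs in degrees from covariance R of a ULA with spacing d (wavelengths)."""
    m = R.shape[0]
    _, V = np.linalg.eigh(R)               # eigenvectors, ascending eigenvalues
    Un = V[:, :m - n_sources]              # noise subspace (smallest eigenvalues)
    C = Un @ Un.conj().T
    # p(z) coefficients: sums of the diagonals of C, highest degree first
    coeffs = np.array([np.trace(C, offset=k) for k in range(m - 1, -m, -1)])
    roots = np.roots(coeffs)
    roots = roots[np.abs(roots) < 1]       # keep roots inside the unit circle
    close = roots[np.argsort(1 - np.abs(roots))[:n_sources]]  # nearest to circle
    return np.degrees(np.arcsin(np.angle(close) / (2 * np.pi * d)))

# Quick check on synthetic data from two sources at -10 and 25 degrees:
m, T = 8, 200
A = np.exp(2j * np.pi * 0.5 * np.outer(np.arange(m), np.sin(np.radians([-10, 25]))))
S = (np.random.randn(2, T) + 1j * np.random.randn(2, T)) / np.sqrt(2)
X = A @ S + 0.1 * (np.random.randn(m, T) + 1j * np.random.randn(m, T))
print(np.sort(root_music_doas(X @ X.conj().T / T, 2)))  # approx. [-10, 25]
```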


MATEMATIKA ◽  
2020 ◽  
Vol 36 (1) ◽  
pp. 43-49
Author(s):  
T Dwi Ary Widhianingsih ◽  
Heri Kuswanto ◽  
Dedy Dwi Prastyo

Logistic regression is one of the most commonly used classification methods. It has some advantages, specifically related to hypothesis testing and its objective function. However, it also has disadvantages for high-dimensional data, such as multicollinearity, over-fitting, and a high computational burden. Ensemble-based classification methods have been proposed to overcome these problems. The logistic regression ensemble (LORENS) method is expected to improve the classification performance of basic logistic regression. In this paper, we apply it to drug discovery, with the objective of finding candidate compounds that protect normal non-cancerous cells, a problem with a high-dimensional data set. The experimental results show that it performs well, with an accuracy of 69% and an AUC of 0.7306.
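As a rough illustration of the ensemble idea (not LORENS's exact partitioning scheme), the sketch below fits logistic regressions on random feature subsets of synthetic high-dimensional data and averages their predicted probabilities; all sizes and constants are assumptions.

```python
# Sketch of an ensemble of logistic regressions on random feature subsets;
# synthetic data and all constants are placeholders, not the paper's setup.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, roc_auc_score

X, y = make_classification(n_samples=400, n_features=500, n_informative=20,
                           random_state=0)      # stand-in high-dimensional data
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
n_models, subset = 25, 40                       # ensemble size / features per model
probs = np.zeros(len(yte))
for _ in range(n_models):
    idx = rng.choice(X.shape[1], size=subset, replace=False)
    clf = LogisticRegression(max_iter=1000).fit(Xtr[:, idx], ytr)
    probs += clf.predict_proba(Xte[:, idx])[:, 1]
probs /= n_models                               # average the member probabilities
print("accuracy:", accuracy_score(yte, probs > 0.5))
print("AUC:", roc_auc_score(yte, probs))
```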


Author(s):  
Ashok V. Kumar ◽  
David C. Gossard

Abstract A sequential approximation technique for non-linear programming is presented that is particularly suited to problems in engineering design and structural optimization, where the number of variables is very large and function and sensitivity evaluations are computationally expensive. A sequence of sub-problems is generated iteratively, using a linear approximation of the objective function and move limits on the variables enforced by a barrier method. These sub-problems are strictly convex. Computation per iteration is significantly reduced by not solving the sub-problems exactly; instead, a few Newton steps are taken on each sub-problem. A criterion for updating the move limits is described that reduces or eliminates step-size reduction during line search. The method was found to perform well on unconstrained and linearly constrained optimization problems. It requires very few function evaluations, does not require the Hessian of the objective function, and evaluates the gradient only once per iteration.
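A minimal sketch of one reading of this scheme: linearize the objective at the current point, impose box move limits through a log barrier (which makes the sub-problem strictly convex and separable), and take a few cheap Newton steps rather than solving the sub-problem exactly. The test objective, move-limit width, and barrier weight below are all placeholders.

```python
# Sketch of the sub-problem idea: linearize f at x0, add a log barrier on
# box move limits (strictly convex, separable), and take a few Newton steps
# instead of solving exactly. Objective, widths, and constants are placeholders.
import numpy as np

def subproblem_step(grad, x0, half_width=0.2, mu=0.1, newton_steps=3):
    lo, hi = x0 - half_width, x0 + half_width       # move limits around x0
    x = x0.copy()
    for _ in range(newton_steps):
        # model m(x) = grad.x - mu*sum(log(hi - x) + log(x - lo));
        # its Hessian is diagonal, so each Newton step costs O(n)
        g = grad + mu * (1.0 / (hi - x) - 1.0 / (x - lo))
        h = mu * (1.0 / (hi - x) ** 2 + 1.0 / (x - lo) ** 2)
        x = np.clip(x - g / h, lo + 1e-9, hi - 1e-9)
    return x

def rosenbrock_grad(x):                              # placeholder test objective
    g = np.zeros_like(x)
    g[:-1] = -400 * x[:-1] * (x[1:] - x[:-1] ** 2) - 2 * (1 - x[:-1])
    g[1:] += 200 * (x[1:] - x[:-1] ** 2)
    return g

x = np.zeros(10)
for _ in range(300):                                 # outer iterations
    x = subproblem_step(rosenbrock_grad(x), x)
print(np.round(x, 2))                                # should move toward all-ones
```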


Author(s):  
Ion Necoara ◽  
Martin Takáč

Abstract In this paper we consider large-scale smooth optimization problems with multiple linear coupled constraints. Due to the non-separability of the constraints, arbitrary random sketching is not guaranteed to work. Thus, we first investigate necessary and sufficient conditions on the sketch sampling for the algorithms to be well defined. Based on these sampling conditions we develop new sketch descent methods for solving general smooth linearly constrained problems: the random sketch descent (RSD) and accelerated random sketch descent (A-RSD) methods. To our knowledge, this is the first convergence analysis of RSD algorithms for optimization problems with multiple non-separable linear constraints. For the general case, when the objective function is smooth and non-convex, we prove a sublinear rate in expectation for the non-accelerated variant, with respect to an appropriate optimality measure. In the smooth convex case, we derive sublinear convergence rates in the expected value of the objective function for both algorithms, RSD and A-RSD. Additionally, if the objective function satisfies a strong-convexity-type condition, both algorithms converge linearly in expectation. In special cases where complexity bounds are known for particular sketching algorithms, such as coordinate descent methods for optimization problems with a single linear coupled constraint, our theory recovers the best known bounds. Finally, we present several numerical examples to illustrate the performance of our new algorithms.
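The special case the abstract mentions, coordinate descent under a single linear coupled constraint, admits a compact sketch: sampling two coordinates and moving along e_i − e_j keeps the constraint sum(x) = 1 satisfied exactly. The problem data and the smoothness constant below are placeholders.

```python
# Sketch of the single-constraint special case: random 2-coordinate descent
# for min f(x) subject to sum(x) = 1. Moving along e_i - e_j keeps the
# constraint exact. Problem data and the smoothness constant are placeholders.
import numpy as np

rng = np.random.default_rng(0)
m, n = 80, 50
A = rng.standard_normal((m, n))
y = rng.standard_normal(m)
L = 4 * np.linalg.norm(A, 2) ** 2        # bounds u'Hu for u = e_i - e_j, H = 2A'A

x = np.full(n, 1.0 / n)                  # feasible start: sum(x) = 1
r = A @ x - y                            # maintained residual for f(x) = ||Ax - y||^2
for _ in range(20000):
    i, j = rng.choice(n, size=2, replace=False)
    d = 2 * (A[:, i] - A[:, j]) @ r      # directional derivative along e_i - e_j
    t = -d / L                           # fixed step; the direction stays feasible
    x[i] += t
    x[j] -= t
    r += t * (A[:, i] - A[:, j])
print("constraint residual:", abs(x.sum() - 1.0))
print("objective:", r @ r)
```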


2019 ◽  
Vol 2019 ◽  
pp. 1-9
Author(s):  
Xiaolong Su ◽  
Zhen Liu ◽  
Tianpeng Liu ◽  
Bo Peng ◽  
Xin Chen ◽  
...  

Coherent source localization is a common problem in signal processing. In this paper, a sparse representation method is considered for two-dimensional (2D) direction of arrival (DOA) estimation of coherent sources with a uniform circular array (UCA). Since the objective function requires sparsity in the spatial dimension but not in time, singular value decomposition (SVD) is employed to reduce the computational complexity, and the ℓ2 norm is utilized to reformulate the objective function. After the new objective function is constructed to balance the residual and the sparsity, second-order cone (SOC) programming is employed to solve the convex optimization problem and obtain the 2D spatial spectrum. Simulations show that the proposed method can handle coherent source localization, offers higher resolution than the 2D MUSIC method, and does not need the number of coherent sources to be known in advance.
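A minimal sketch in the spirit of this pipeline (one-dimensional grid and ULA for brevity, since the paper's UCA setting is 2D): the SVD keeps K singular vectors, row-wise ℓ2 norms induce spatial sparsity, and CVXPY solves the resulting SOC program. The grid, array, and regularization weight are assumptions.

```python
# Sketch in the spirit of the abstract's pipeline (1-D grid for brevity; the
# paper's UCA setting is 2-D): SVD keeps K singular vectors, row-wise l2
# norms induce spatial sparsity, and CVXPY solves the resulting SOC program.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
m, T, K = 8, 100, 2                        # sensors, snapshots, sources (assumed)
grid = np.deg2rad(np.arange(-90, 91, 2))
A = np.exp(1j * np.pi * np.outer(np.arange(m), np.sin(grid)))  # ULA steering

# Two coherent sources (identical waveform) at -20 and 30 degrees
s = rng.standard_normal(T)
As = np.exp(1j * np.pi * np.outer(np.arange(m), np.sin(np.deg2rad([-20, 30]))))
noise = 0.1 * (rng.standard_normal((m, T)) + 1j * rng.standard_normal((m, T)))
X = As @ np.vstack([s, s]) + noise

_, _, Vh = np.linalg.svd(X, full_matrices=False)
Xsv = X @ Vh.conj().T[:, :K]               # reduce the time dimension: m x K

S = cp.Variable((len(grid), K), complex=True)
lam = 1.0                                  # regularization weight (assumed)
obj = cp.Minimize(cp.sum_squares(A @ S - Xsv) + lam * cp.sum(cp.norm(S, 2, axis=1)))
cp.Problem(obj).solve()
spectrum = np.linalg.norm(S.value, axis=1)
print("largest peaks (deg):", np.sort(np.rad2deg(grid[np.argsort(spectrum)[-K:]])))
```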


2019 ◽  
Vol 17 (05) ◽  
pp. 773-818 ◽  
Author(s):  
Yi Xu ◽  
Qihang Lin ◽  
Tianbao Yang

In this paper, a new theory is developed for first-order stochastic convex optimization, showing that the global convergence rate is sufficiently quantified by the local growth rate of the objective function in a neighborhood of the optimal solutions. In particular, if the objective function F(w) in the ε-sublevel set grows as fast as ‖w − w*‖^(1/θ), where w* represents the optimal solution closest to w and θ ∈ (0, 1] quantifies the local growth rate, then the iteration complexity of first-order stochastic optimization for achieving an ε-optimal solution can be Õ(1/ε^(2(1−θ))), which is optimal at most up to a logarithmic factor. To achieve this faster global convergence, we develop two different accelerated stochastic subgradient methods that iteratively solve the original problem approximately in a local region around a historical solution, with the size of the local region gradually decreasing as the solution approaches the optimal set. Besides the theoretical improvements, this work also includes new contributions toward making the proposed algorithms practical: (i) we present practical variants of the accelerated stochastic subgradient methods that can run without knowledge of the multiplicative growth constant or even the growth rate θ; (ii) we consider a broad family of problems in machine learning to demonstrate that the proposed algorithms enjoy faster convergence than the traditional stochastic subgradient method. We also characterize the complexity of the proposed algorithms for ensuring that the gradient is small, without the smoothness assumption.
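A sketch of the restarting scheme described here, under assumed constants and a placeholder sharp objective (θ = 1): each stage runs projected stochastic subgradient inside a ball around the previous stage's averaged iterate, then halves both the ball radius and the step size.

```python
# Sketch of the restarting scheme: each stage runs projected stochastic
# subgradient in a ball around the previous stage's averaged iterate, then
# halves the ball radius and the step size. Constants and the sharp test
# objective (theta = 1) are placeholders, not the paper's tuned values.
import numpy as np

rng = np.random.default_rng(0)
n, N = 100, 5000
A = rng.standard_normal((N, n))
b = A @ np.ones(n) + 0.1 * rng.standard_normal(N)

def subgrad(x):                            # stochastic subgradient of mean_i |a_i.x - b_i|
    i = rng.integers(N)
    return A[i] * np.sign(A[i] @ x - b[i])

x, R, eta, steps = np.zeros(n), 10.0, 0.1, 2000
for stage in range(8):
    center, avg = x.copy(), np.zeros(n)
    for _ in range(steps):
        x = x - eta * subgrad(x)
        dist = np.linalg.norm(x - center)  # project back into ||x - center|| <= R
        if dist > R:
            x = center + (x - center) * (R / dist)
        avg += x
    x = avg / steps                        # stage output: averaged iterate
    R, eta = R / 2, eta / 2                # shrink the local region and step size
print("distance to ground truth:", np.linalg.norm(x - np.ones(n)))
```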

