A Computational Approach to Log-Concave Density Estimation

2015 ◽  
Vol 23 (3) ◽  
pp. 151-166
Author(s):  
Fabian Rathke ◽  
Christoph Schnörr

Abstract: Non-parametric density estimation with shape restrictions has received a great deal of attention recently. We consider the maximum-likelihood problem of estimating a log-concave density from a given finite set of empirical data and present a computational approach to the resulting optimization problem. Our approach makes it possible to trade off computational cost against estimation accuracy in order to alleviate the curse of dimensionality of density estimation in higher dimensions.
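The authors' solver is not reproduced here; as a minimal sketch of the shape restriction itself, the snippet below computes the Gaussian maximum-likelihood fit (the log-concave MLE restricted to the Gaussian family) and verifies numerically that the fitted log-density is concave. All names and values are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=1000)

# Gaussian MLE has a closed form: sample mean and (biased) sample variance.
mu_hat = x.mean()
sigma2_hat = x.var()

def log_density(t):
    # log f(t) of the fitted Gaussian: a quadratic with negative leading
    # coefficient, hence concave by construction.
    return -0.5 * np.log(2 * np.pi * sigma2_hat) - (t - mu_hat) ** 2 / (2 * sigma2_hat)

# Numerical concavity check: second differences of log f are non-positive.
grid = np.linspace(-5.0, 9.0, 400)
assert np.all(np.diff(log_density(grid), 2) <= 1e-12)
```

The general problem replaces the Gaussian family by all densities with concave logarithm, which turns the MLE into the optimization problem the paper studies.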

2021 ◽  
Vol 11 (2) ◽  
pp. 673
Author(s):  
Guangli Ben ◽  
Xifeng Zheng ◽  
Yongcheng Wang ◽  
Ning Zhang ◽  
Xin Zhang

A local-search Maximum Likelihood (ML) parameter estimator for mono-component chirp signals under low Signal-to-Noise Ratio (SNR) conditions is proposed in this paper. The approach combines a deep-learning denoising method with a two-step parameter estimator. The denoiser uses a residual-learning Denoising Convolutional Neural Network (DnCNN) to recover the structured signal component, which is then used to denoise the original observations. Following the denoising step, we apply a coarse parameter estimator based on the Time-Frequency (TF) distribution to the denoised signal to obtain approximate parameter estimates. Around these coarse results, we then perform a local search using the ML technique to achieve fine estimation. Numerical results show that the proposed approach outperforms several existing methods in terms of parameter estimation accuracy and efficiency.
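The coarse-then-fine structure can be sketched without the DnCNN stage: a coarse grid search over the chirp parameters (standing in for the TF-distribution estimator) followed by a local ML search on a fine grid around the coarse result. The signal model, grids, and noise level below are assumptions for this toy, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 1000.0
t = np.arange(1024) / fs
f0_true, k_true = 50.0, 120.0          # start frequency (Hz), chirp rate (Hz/s)
clean = np.exp(2j * np.pi * (f0_true * t + 0.5 * k_true * t**2))
x = clean + 0.5 * (rng.standard_normal(t.size) + 1j * rng.standard_normal(t.size))

def ml_objective(f0, k):
    # ML objective for a mono-component chirp in white Gaussian noise:
    # magnitude of the correlation with the candidate chirp (dechirping).
    ref = np.exp(-2j * np.pi * (f0 * t + 0.5 * k * t**2))
    return np.abs(np.sum(x * ref))

# Step 1: coarse grid search (stand-in for the TF-based coarse estimator).
f0_grid = np.arange(0.0, 100.0, 5.0)
k_grid = np.arange(0.0, 300.0, 20.0)
coarse = max(((f0, k) for f0 in f0_grid for k in k_grid),
             key=lambda p: ml_objective(*p))

# Step 2: local ML search on a fine grid around the coarse estimate.
f0_fine = coarse[0] + np.linspace(-5.0, 5.0, 61)
k_fine = coarse[1] + np.linspace(-20.0, 20.0, 61)
fine = max(((f0, k) for f0 in f0_fine for k in k_fine),
           key=lambda p: ml_objective(*p))
print(fine)
```

The local search confines the expensive fine evaluation to a small neighborhood, which is the efficiency argument the abstract makes.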


2009 ◽  
Vol 12 (03) ◽  
pp. 297-317 ◽  
Author(s):  
ANOUAR BEN MABROUK ◽  
HEDI KORTAS ◽  
SAMIR BEN AMMOU

In this paper, fractional integration dynamics in the return and volatility series of stock market indices are investigated. The investigation is conducted using wavelet ordinary least squares, wavelet weighted least squares, and the approximate Maximum Likelihood estimator. It is shown that the long-memory property in stock returns is associated mainly with emerging markets rather than developed ones, while strong evidence of long-range dependence is found for all volatility series. The relevance of the wavelet-based estimators, especially the approximate Maximum Likelihood and weighted least squares techniques, is demonstrated in terms of stability and estimation accuracy.
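A toy version of the wavelet-OLS idea can be sketched as follows: simulate an ARFIMA(0, d, 0) series, compute Haar wavelet coefficient variances across scales, and regress their log2 on the scale index. Since the spectral density behaves like lambda^(-2d) near zero, the coefficient variance grows like 2^(2dj) across levels, so the slope estimates 2d. The simulation length, truncation, and levels are assumptions for this sketch, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(3)
d_true = 0.3

# Simulate ARFIMA(0, d, 0) by truncating its MA(infinity) representation:
# psi_0 = 1, psi_k = psi_{k-1} * (k - 1 + d) / k.
K = 4000
psi = np.ones(K)
for k in range(1, K):
    psi[k] = psi[k - 1] * (k - 1 + d_true) / k
eps = rng.standard_normal(2**15 + K)
x = np.convolve(eps, psi, mode="valid")[: 2**15]

# Orthonormal Haar DWT: at each level, detail coefficients are scaled
# pairwise differences; the approximation carries to the next level.
log2_var, levels = [], list(range(1, 8))
a = x
for j in levels:
    detail = (a[0::2] - a[1::2]) / np.sqrt(2.0)
    a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
    log2_var.append(np.log2(detail.var()))

# Wavelet OLS: slope of log2-variance against the level index estimates 2d.
slope = np.polyfit(levels, log2_var, 1)[0]
d_hat = slope / 2.0
print(round(d_hat, 3))
```

Replacing the plain OLS fit by weights inversely proportional to the per-level estimation variance gives the weighted least squares variant the abstract compares.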


2022 ◽  
Author(s):  
Yun Chen ◽  
Yao Lu ◽  
Xiangyuan Ma ◽  
Yuesheng Xu

Abstract: The goal of this study is to develop a new computed tomography (CT) image reconstruction method that improves the quality of the reconstructed images of existing methods while reducing computational costs. Existing CT reconstruction is modeled by pixel-based piecewise-constant approximations of the integral equation that describes the CT projection data acquisition process. These approximations impose a bottleneck model error and result in a discrete system of large size. We propose a content-adaptive unstructured grid (CAUG) based regularized CT reconstruction method to address these issues. Specifically, we design a CAUG of the image domain to sparsely represent the underlying image, and introduce a CAUG-based piecewise-linear approximation of the integral equation by employing a collocation method. We further apply a regularization defined on the CAUG to the resulting ill-posed linear system, which may lead to a sparse linear representation of the underlying solution. The regularized CT reconstruction is formulated as a convex optimization problem whose objective function consists of a weighted least-squares fidelity term, a regularization term, and a constraint term. The weighting matrix is derived from the simultaneous algebraic reconstruction technique (SART). We then develop a SART-type preconditioned fixed-point proximity algorithm to solve the optimization problem, and provide a convergence analysis for the resulting iterative algorithm. Numerical experiments demonstrate that the proposed method outperforms several existing methods in terms of both suppressing noise and reducing computational costs. These methods include SART without regularization and with quadratic regularization on the CAUG, the traditional total variation (TV) regularized reconstruction method, and the TV-superiorized conjugate gradient method on the pixel grid.
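The "weighted fidelity plus regularizer, solved by a fixed-point proximity scheme" structure can be illustrated on a toy linear system. The sketch below is not the paper's CAUG/SART algorithm: it uses a generic proximal-gradient (ISTA-style) iteration, an l1 regularizer, and an assumed SART-like row weighting (inverse row 1-norms), all chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 80, 50
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[3, 17, 30]] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(m)

# SART-like row weighting (an assumption for this toy): inverse row 1-norms.
w = 1.0 / np.abs(A).sum(axis=1)
AtWA = A.T @ (w[:, None] * A)
AtWb = A.T @ (w * b)

lam = 0.02
step = 1.0 / np.linalg.eigvalsh(AtWA).max()

# Proximal gradient: gradient step on the weighted least-squares fidelity
# 0.5*(Ax-b)^T W (Ax-b), then soft-thresholding, the proximity operator
# of lam*||x||_1.
x = np.zeros(n)
for _ in range(2000):
    g = x - step * (AtWA @ x - AtWb)
    x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)

print(np.flatnonzero(np.abs(x) > 0.5))
```

The paper's algorithm additionally preconditions the iteration with a SART-derived matrix and works on the unstructured grid; the convex two-term structure of the iteration is the same.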


Geophysics ◽  
2020 ◽  
Vol 85 (3) ◽  
pp. R195-R206 ◽  
Author(s):  
Chao Song ◽  
Tariq Alkhalifah

Conventional full-waveform inversion (FWI) aims at retrieving a high-resolution velocity model directly from the wavefields measured at the sensor locations, resulting in a highly nonlinear optimization problem. Due to the high nonlinearity of FWI (manifested in one form in the cycle-skipping problem), it is easy to fall into local minima. Considering that the earth is truly anisotropic, a multiparameter inversion imposes additional challenges by exacerbating the null-space problem and the parameter trade-off issue. We have formulated an optimization problem to reconstruct the wavefield in an efficient manner with background models by using an enhanced source function (which includes secondary sources) in combination with fitting the data. In this two-term optimization problem, fitting the wavefield to the data and to the background wave equation, the inversion for the wavefield is linear. Because we keep the modeling operator stationary within each frequency, we need only one matrix inversion per frequency. The inversion for the anisotropic parameters is handled in a separate optimization using the wavefield and the enhanced source function. Because the velocity is the dominant parameter controlling the wave propagation, it is updated first, which reduces undesired updates of the anisotropic parameters due to velocity update leakage. We demonstrate the effectiveness of this approach in reducing parameter trade-off with a distinct Gaussian anomaly model. We find that using the parameterization [Formula: see text], and [Formula: see text] to describe the transversely isotropic medium with a vertical axis of symmetry in the inversion yields high resolution and minimal trade-off compared to conventional parameterizations for the anisotropic Marmousi model. Application to 2D real data also indicates the validity of our method.
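The linearity of the wavefield subproblem can be shown on a 1D frequency-domain toy: minimizing a data-fit term plus a background-wave-equation term is quadratic in the wavefield, so a single linear solve per frequency reconstructs it. The grid, frequency, damping, sensor layout, and the very small trade-off weight alpha (chosen so the data term dominates) are all assumptions of this sketch, not the paper's configuration.

```python
import numpy as np

n, h = 200, 1.0 / 199
omega = 60.0
v_true = 2.0 * np.ones(n)
v_true[80:120] = 2.4                      # hidden anomaly
v0 = 2.0 * np.ones(n)                     # smooth background model

def helmholtz(v):
    # Second-order finite-difference Helmholtz operator with a small
    # complex damping term to avoid resonances of the toy domain.
    k2 = (omega / v) ** 2 * (1 + 0.05j)
    D2 = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
          + np.diag(np.ones(n - 1), -1)) / h**2
    return D2 + np.diag(k2)

f = np.zeros(n, dtype=complex)
f[10] = 1.0 / h                           # point source
u_true = np.linalg.solve(helmholtz(v_true), f)

S = np.zeros((8, n))                      # sampling operator: 8 sensors
S[np.arange(8), np.arange(20, 180, 20)] = 1.0
d = S @ u_true

# Two-term objective ||S u - d||^2 + alpha ||A0 u - f||^2 is quadratic in u:
# one linear solve per frequency reconstructs the wavefield.
A0 = helmholtz(v0)
alpha = 1e-10
lhs = S.T @ S + alpha * (A0.conj().T @ A0)
rhs = S.T @ d + alpha * (A0.conj().T @ f)
u = np.linalg.solve(lhs, rhs)

print(np.linalg.norm(S @ u - d) / np.linalg.norm(d))
```

In the paper's workflow, the misfit of this wavefield against the background wave equation defines the enhanced source, which then drives the separate parameter update.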


2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Kaikai Yang ◽  
Sheng Hong ◽  
Qi Zhu ◽  
Yanheng Ye

In this paper, we consider joint angle-range estimation in monostatic FDA-MIMO radar. Range ambiguity is a serious problem in monostatic FDA-MIMO radar, as it can reduce the detection range of targets. To extend the unambiguous range, we propose to divide the transmitting array into subarrays. Then, within the unambiguous range, a maximum likelihood (ML) algorithm is proposed to estimate the angle and range with high accuracy and high resolution. In the ML algorithm, the joint angle-range estimation problem becomes a high-dimensional search problem and is therefore computationally expensive. To reduce the computational load, the alternating projection ML (AP-ML) algorithm is proposed, which transforms the high-dimensional search into a series of iterative one-dimensional searches. With the proposed AP-ML algorithm, the angle and range are automatically paired. Simulation results show that transmit subarrays can extend the unambiguous range of monostatic FDA-MIMO radar and achieve a lower Cramér-Rao lower bound (CRLB) for range estimation. Moreover, the proposed AP-ML algorithm is superior to traditional estimation algorithms in terms of estimation accuracy and resolution.
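The alternating-projection idea — replacing a 2-D ML search with alternating 1-D searches — can be sketched on a deliberately simplified FDA-MIMO model in which the transmit FDA phase carries range only and the half-wavelength receive array carries angle only. That separable steering model, the element counts, and the noise level are assumptions of this sketch, not the paper's signal model.

```python
import numpy as np

rng = np.random.default_rng(6)
c, dfreq = 3e8, 5e3                       # speed of light, frequency offset (Hz)
M, N = 8, 8                               # transmit / receive elements
theta_true, r_true = 20.0, 3000.0         # degrees, meters

def steer(theta_deg, r):
    # Simplified steering vector (assumed for this sketch): transmit FDA
    # phase depends on range, receive phase depends on angle.
    b = np.exp(-1j * 4 * np.pi * np.arange(M) * dfreq * r / c)
    a = np.exp(1j * np.pi * np.arange(N) * np.sin(np.radians(theta_deg)))
    return np.kron(b, a)

y = steer(theta_true, r_true)
y = y + 0.2 * (rng.standard_normal(M * N) + 1j * rng.standard_normal(M * N))

def ml(theta_deg, r):
    # Single-target ML objective: correlation with the candidate steering.
    return np.abs(steer(theta_deg, r).conj() @ y)

# Alternating projection: hold one parameter fixed and grid-search the other.
theta_grid = np.arange(-90.0, 90.0, 0.5)
r_grid = np.arange(0.0, c / (2 * dfreq), 50.0)   # unambiguous range c/(2*df)
theta_hat, r_hat = 0.0, 0.0
for _ in range(3):
    theta_hat = theta_grid[np.argmax([ml(t, r_hat) for t in theta_grid])]
    r_hat = r_grid[np.argmax([ml(theta_hat, r) for r in r_grid])]
print(theta_hat, r_hat)
```

Each sweep costs two 1-D searches instead of one search over the full angle-range grid, and the final (theta_hat, r_hat) pair is produced jointly, so no separate pairing step is needed.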


Author(s):  
Deepak Sharma ◽  
Kalyanmoy Deb ◽  
N. N. Kishore

In this paper, an improved initial random population strategy using a binary (0-1) representation of continuum structures is developed for evolving the topologies of path-generating compliant mechanisms. It helps the evolutionary optimization procedure to start with structures that are free from impracticalities such as 'checker-board' patterns and disconnected 'floating' material. To generate an improved initial population, intermediate points are created randomly, and the support, loading and output regions of a structure are connected through these intermediate points by straight lines. Thereafter, material is assigned only to those grids through which these straight lines pass. In the present study, single- and two-objective optimization problems are solved using a local-search-based evolutionary optimization (NSGA-II) procedure. The single-objective problem minimizes the weight of the structure, and the two-objective problem simultaneously minimizes the weight and the input energy supplied to the structure. In both cases, the optimization problem is subject to constraints limiting the allowed deviation at each precision point of a prescribed path, so that the task of generating a user-defined path is accomplished, and limiting the maximum stress to remain within the allowable strength of the material. Non-dominated solutions obtained after the NSGA-II run are further improved by a local search procedure. The motivation behind the two-objective study is to find trade-off optimal solutions so that diverse non-dominated topologies of compliant mechanisms can be evolved in a single run of the optimization procedure. The results of the two-objective optimization study are compared with a conventional study in which material in each grid is assigned at random to create the initial population of continuum structures. Owing to the improved initial population, the obtained non-dominated solutions outperform those of the conventional study. 
Different shapes and connectivity patterns among the support, loading and output regions of the non-dominated solutions are evolved, which allows designers to understand the topological changes underlying the trade-off and helps in choosing a particular solution for practice.
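The seeding strategy can be sketched directly: pick a random intermediate point, connect the support, loading and output cells to it with straight material lines, and leave the rest void, so each seeded structure is connected and free of floating material by construction. The grid size, anchor positions, and line rasterization below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
shape = (40, 40)
# Support, loading and output cells of the design domain (illustrative).
anchors = [(0, 0), (0, 39), (39, 20)]

def line_cells(p, q, samples=400):
    # Cells touched by the segment p -> q, via dense sampling and rounding
    # (a simple stand-in for an exact rasterizer).
    t = np.linspace(0.0, 1.0, samples)
    pts = np.round(np.outer(1 - t, p) + np.outer(t, q)).astype(int)
    return {tuple(c) for c in pts}

def seeded_member(shape, anchors, rng):
    # One initial structure: join every anchor to a random intermediate
    # point with straight material lines; everything else stays void.
    grid = np.zeros(shape, dtype=np.uint8)
    mid = (int(rng.integers(shape[0])), int(rng.integers(shape[1])))
    for a in anchors:
        for cell in line_cells(a, mid):
            grid[cell] = 1
    return grid

member = seeded_member(shape, anchors, rng)
print(member.sum(), member.size)
```

Repeating `seeded_member` with fresh random intermediate points yields a whole initial population of distinct, feasible-by-construction topologies for NSGA-II to evolve.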


2014 ◽  
Vol 1662 ◽  
Author(s):  
Reza Lotfi ◽  
Seunghyun Ha ◽  
Josephine V. Carstensen ◽  
James K. Guest

Abstract: Topology optimization is a systematic, computational approach to the design of structures, defined as the layout of materials (and pores) across a domain. Typically employed at the component-level scale, topology optimization is increasingly being used to design the architecture of high-performance materials. The resulting design problem is posed as an optimization problem with the governing unit-cell and upscaling mechanics embedded in the formulation, and is solved with formal mathematical programming. This paper describes recent advances in topology optimization, including the incorporation of manufacturing processes and of objectives governed by nonlinear mechanics and multiple physics, and demonstrates their application to the design of cellular materials. Optimized material architectures are shown to (computationally) approach theoretical bounds when such bounds are available, and can be used to generate estimates of bounds when they are unknown.
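The theoretical bounds referred to can be made concrete with the classical Hashin-Shtrikman bounds on the effective bulk modulus of a two-phase isotropic composite, written here from the standard formulas with illustrative phase properties; this is background material, not the paper's computation.

```python
import numpy as np

# Phase properties (illustrative): bulk and shear moduli, phase 2 stiffer.
K1, G1 = 1.0, 0.5
K2, G2 = 10.0, 5.0
f2 = np.linspace(0.0, 1.0, 11)            # stiff-phase volume fraction
f1 = 1.0 - f2

# Classical Hashin-Shtrikman bounds on the effective bulk modulus of an
# isotropic two-phase composite (the stiffer phase gives the upper bound).
K_lower = K1 + f2 / (1.0 / (K2 - K1) + 3.0 * f1 / (3.0 * K1 + 4.0 * G1))
K_upper = K2 + f1 / (1.0 / (K1 - K2) + 3.0 * f2 / (3.0 * K2 + 4.0 * G2))

# Looser Voigt (arithmetic) and Reuss (harmonic) averages for comparison.
K_voigt = f1 * K1 + f2 * K2
K_reuss = 1.0 / (f1 / K1 + f2 / K2)

# The HS bounds sit strictly inside the Voigt-Reuss envelope.
assert np.all(K_reuss <= K_lower + 1e-9)
assert np.all(K_lower <= K_upper + 1e-9)
assert np.all(K_upper <= K_voigt + 1e-9)
```

A topology-optimized microstructure whose homogenized bulk modulus approaches `K_upper` at its volume fraction is, in this sense, approaching the theoretical limit.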

