Penalized wavelet estimation and robust denoising for irregular spaced data

Author(s):  
Umberto Amato ◽  
Anestis Antoniadis ◽  
Italia De Feis ◽  
Irène Gijbels

Abstract. Nonparametric univariate regression via wavelets is usually implemented under the assumptions of dyadic sample size, equally spaced fixed sample points, and i.i.d. normal errors. In this work, we propose, study and compare wavelet-based nonparametric estimation methods designed to recover a one-dimensional regression function for data that do not necessarily satisfy these requirements. These methods use appropriate regularizations, penalizing the decomposition of the unknown regression function on a wavelet basis of functions evaluated on the sampling design. Exploiting the sparsity of wavelet decompositions for signals belonging to homogeneous Besov spaces, we use efficient proximal gradient descent algorithms, available in the recent literature, to compute the estimates with fast computation times. Our wavelet-based procedures, in both the standard and the robust regression case, have favorable theoretical properties, thanks in large part to the separable nature of the (non-convex) regularization they are based on. We establish asymptotic global optimal rates of convergence under weak conditions. It is known that such rates are, in general, unattainable by smoothing splines or other linear nonparametric smoothers. Lastly, we present several experiments to examine the empirical performance of our procedures and compare them with other proposals available in the literature. A regression analysis of some real data applications using these procedures unambiguously demonstrates their effectiveness.
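
Because the penalty is separable, the proximal step acts coordinate-wise on the wavelet coefficients. Below is a minimal sketch of a proximal gradient (ISTA-type) iteration, assuming a hypothetical design matrix X of wavelet basis functions evaluated at the (possibly irregular) sample points; the l1 penalty and soft-thresholding are convex stand-ins for the non-convex separable penalties studied in the paper.

```python
# Minimal proximal-gradient (ISTA-type) sketch for penalized wavelet
# regression on an irregular design. X is a hypothetical n x p matrix of
# wavelet basis functions evaluated at the sample points; the l1 penalty
# is a convex stand-in for the paper's non-convex separable penalties.
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1, applied coordinate-wise."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(X, y, lam, n_iter=500):
    n, p = X.shape
    beta = np.zeros(p)
    # Step size 1/L, with L the Lipschitz constant of the LS gradient.
    L = np.linalg.norm(X, 2) ** 2 / n
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n
        beta = soft_threshold(beta - grad / L, lam / L)
    return beta
```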

Author(s):  
Parisa Torkaman

The generalized inverted exponential distribution is introduced as a lifetime model with good statistical properties. In this paper, estimation of the probability density function and the cumulative distribution function of this distribution is considered using five different estimation methods: the uniformly minimum variance unbiased (UMVU), maximum likelihood (ML), least squares (LS), weighted least squares (WLS) and percentile (PC) estimators. The performance of these estimation procedures is compared by numerical simulations, based on the mean squared error (MSE). The simulation studies show that the UMVU estimator performs better than the others, and that when the sample size is large enough the ML and UMVU estimators are almost equivalent and more efficient than LS, WLS and PC. Finally, results for a real data set are analyzed.
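
As a hedged illustration of the kind of Monte Carlo MSE comparison described above, the skeleton below compares an ML estimator with its unbiased (UMVU) correction for a simple exponential rate; the paper's actual UMVU/ML/LS/WLS/PC formulas for the generalized inverted exponential model are not reproduced, but they would slot into the same loop.

```python
# Generic Monte Carlo skeleton for ranking estimators by MSE. For brevity
# it compares the ML estimator of an exponential rate with its UMVU
# correction; other estimators would slot in the same way.
import numpy as np

rng = np.random.default_rng(0)

def mse_of(estimator, true_rate, n, n_rep=5000):
    errors = [(estimator(rng.exponential(1.0 / true_rate, n)) - true_rate) ** 2
              for _ in range(n_rep)]
    return float(np.mean(errors))

ml = lambda x: 1.0 / np.mean(x)            # ML estimator of the rate
umvu = lambda x: (len(x) - 1) / np.sum(x)  # its UMVU correction
for n in (10, 50, 200):
    print(n, mse_of(ml, 2.0, n), mse_of(umvu, 2.0, n))
```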


2020 ◽  
Vol 70 (4) ◽  
pp. 953-978
Author(s):  
Mustafa Ç. Korkmaz ◽  
G. G. Hamedani

Abstract. This paper proposes a new extended Lindley distribution, based on a mixture distribution structure, which has more flexible density and hazard rate shapes than the Lindley and Power Lindley distributions and can model real data phenomena with new distributional characteristics. Some of its distributional properties, such as shapes, moments, the quantile function, Bonferroni and Lorenz curves, mean deviations and order statistics, have been obtained. Characterizations based on two truncated moments, on conditional expectation, as well as in terms of the hazard function are presented. Different estimation procedures have been employed to estimate the unknown parameters and their performances are compared via Monte Carlo simulations. The flexibility and importance of the proposed model are illustrated by two real data sets.
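
For concreteness, here is a hedged sketch of numerical maximum-likelihood estimation for the plain (one-parameter) Lindley distribution, as a stand-in for the extended model proposed in the paper; the function and variable names are illustrative.

```python
# Hedged sketch: numerical ML fit of the plain Lindley distribution,
# f(x; theta) = theta^2 / (1 + theta) * (1 + x) * exp(-theta * x),
# as a stand-in for the extended model proposed in the paper.
import numpy as np
from scipy.optimize import minimize_scalar

def lindley_negloglik(theta, x):
    n = len(x)
    return -(2 * n * np.log(theta) - n * np.log1p(theta)
             + np.sum(np.log1p(x)) - theta * np.sum(x))

x = np.random.default_rng(1).exponential(size=200)  # placeholder data
fit = minimize_scalar(lindley_negloglik, bounds=(1e-6, 50.0),
                      args=(x,), method="bounded")
print("theta_hat =", fit.x)
```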


1984 ◽  
Vol 16 (3) ◽  
pp. 492-561 ◽  
Author(s):  
E. J. Hannan ◽  
L. Kavalieris

This paper is in three parts. The first deals with the algebraic and topological structure of spaces of rational transfer function linear systems—ARMAX systems, as they have been called. This structure theory is dominated by the concept of a space of systems of order, or McMillan degree, n, because this space, M(n), can be realised as a kind of high-dimensional algebraic surface of dimension n(2s + m), where s and m are the numbers of outputs and inputs. In principle, therefore, the fitting of a rational transfer model to data can be considered as the problem of determining n and then the appropriate element of M(n). However, the fact that M(n) appears to need a large number of coordinate neighbourhoods to cover it complicates the task. The problems associated with this program, as well as the theory necessary for the analysis of algorithms to carry out aspects of the program, are discussed in this first part of the paper, Sections 1 and 2. The second part, Sections 3 and 4, deals with algorithms to carry out the fitting of a model and exhibits these algorithms through simulations and the analysis of real data. The third part of the paper discusses the asymptotic properties of the algorithms. These properties depend on uniform rates of convergence being established for covariances up to some lag increasing indefinitely with the length of record, T. The necessary limit theorems and the analysis of the algorithms are given in Section 5. Many of these results are of interest independently of the algorithms being studied.
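
A heavily simplified univariate analogue of the order-determination step is selecting an ARMA order (p, q) by an information criterion. The sketch below uses statsmodels for illustration only; the paper's ARMAX program works over the multivariable manifold M(n), not this scalar setting.

```python
# Sketch: choosing a scalar ARMA order (p, q) by BIC, a simplified
# univariate analogue of determining the McMillan degree n.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

y = np.random.default_rng(2).standard_normal(300)  # placeholder series
candidates = [(p, q) for p in range(3) for q in range(3)]
best = min(candidates,
           key=lambda pq: ARIMA(y, order=(pq[0], 0, pq[1])).fit().bic)
print("selected (p, q):", best)
```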


2012 ◽  
Vol 5 (4) ◽  
pp. 789-808 ◽  
Author(s):  
L. B. Cornman ◽  
R. K. Goodrich ◽  
P. Axelrad ◽  
E. Barlow

Abstract. The increased availability of radio occultation (RO) data offers the ability to detect and study turbulence in the Earth's atmosphere. An analysis of how RO data can be used to determine the strength and location of turbulent regions is presented. This includes the derivation of a model for the power spectrum of the log-amplitude and phase fluctuations of the permittivity (or index of refraction) field. The bulk of the paper is then concerned with the estimation of the model parameters. Parameter estimators are introduced and some of their statistical properties are studied. These estimators are then applied to simulated log-amplitude RO signals. This includes the analysis of global statistics derived from a large number of realizations, as well as case studies that illustrate various specific aspects of the problem. Improvements to the basic estimation methods are discussed, and their beneficial properties are illustrated. The estimation techniques are then applied to real occultation data. Only two cases are presented, but they illustrate some of the salient features inherent in real data.
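
As a hedged illustration of the parameter-estimation step, the sketch below fits a simple power-law spectral model S(k) = c * k^(-p) to noisy spectrum samples by nonlinear least squares; the paper's model for log-amplitude and phase fluctuation spectra is considerably more elaborate, and the names c and p are illustrative.

```python
# Hedged sketch: fitting a power-law spectral model S(k) = c * k**(-p)
# to noisy spectrum samples by nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit

def power_law(k, c, p):
    return c * k ** (-p)

rng = np.random.default_rng(3)
k = np.linspace(0.1, 10.0, 200)
spectrum = power_law(k, 2.0, 5.0 / 3.0) * rng.lognormal(sigma=0.2, size=k.size)
(c_hat, p_hat), _ = curve_fit(power_law, k, spectrum, p0=(1.0, 1.0))
print("c_hat =", c_hat, "p_hat =", p_hat)
```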


2018 ◽  
Vol 8 (1) ◽  
pp. 44
Author(s):  
Lutfiah Ismail Al turk

In this paper, a Nonhomogeneous Poisson Process (NHPP) reliability model based on the two-parameter Log-Logistic (LL) distribution is considered. The model's essential characteristics are derived and represented graphically. The parameters of the model are estimated by the Maximum Likelihood (ML) and Non-linear Least Squares (NLS) estimation methods for the case of time domain data. An application showing the flexibility of the considered model is conducted based on five real data sets and using three evaluation criteria. We hope this model will serve as an alternative to other useful reliability models for describing real data in the reliability engineering area.
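
Below is a hedged sketch of time-domain ML estimation for an NHPP, assuming a mean value function of the form m(t) = a * F(t) with F the two-parameter log-logistic cdf F(t) = (b t)^k / (1 + (b t)^k); the parameter names a, b, k are illustrative and not necessarily the paper's parameterization.

```python
# Hedged sketch: time-domain ML for an NHPP with assumed mean value
# function m(t) = a * F(t), F the log-logistic cdf.
import numpy as np
from scipy.optimize import minimize

def negloglik(params, times, T):
    a, b, k = params
    if a <= 0 or b <= 0 or k <= 0:
        return np.inf
    u = (b * times) ** k
    lam = a * k * b * (b * times) ** (k - 1) / (1 + u) ** 2  # intensity a*F'(t)
    m_T = a * (b * T) ** k / (1 + (b * T) ** k)              # expected count by T
    return -(np.sum(np.log(lam)) - m_T)

times = np.sort(np.random.default_rng(4).uniform(0, 100, 30))  # placeholder data
fit = minimize(negloglik, x0=[40.0, 0.05, 1.5], args=(times, times[-1]),
               method="Nelder-Mead")
print("a, b, k =", fit.x)
```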


In this paper, we define a new two-parameter Lindley half-Cauchy (NLHC) distribution using the Lindley-G family of distributions, which accommodates increasing, decreasing and a variety of monotone failure rates. The statistical properties of the proposed distribution, such as the probability density function, cumulative distribution function, quantiles, and measures of skewness and kurtosis, are presented. We briefly describe three well-known estimation methods, namely the maximum likelihood (MLE), least-squares (LSE) and Cramér-von Mises (CVM) methods. All the computations are performed in the R software. Using the maximum likelihood method, we construct asymptotic confidence intervals for the model parameters. We empirically verify the potential of the new distribution in modeling a real data set.
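
As a hedged sketch of one of the three estimation methods, the code below minimizes the Cramér-von Mises criterion for a plain half-Cauchy scale parameter; the NLHC cdf itself is not reproduced, and Python is used here in place of the R implementation mentioned above.

```python
# Hedged sketch: Cramér-von Mises (CVM) estimation, applied to a plain
# half-Cauchy scale parameter for illustration only.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import halfcauchy

def cvm_criterion(scale, x_sorted):
    n = len(x_sorted)
    u = halfcauchy.cdf(x_sorted, scale=scale)
    i = np.arange(1, n + 1)
    return 1.0 / (12 * n) + np.sum((u - (2 * i - 1) / (2 * n)) ** 2)

x = np.sort(halfcauchy.rvs(scale=2.0, size=100, random_state=5))
fit = minimize_scalar(cvm_criterion, bounds=(0.01, 20.0), args=(x,),
                      method="bounded")
print("scale_hat =", fit.x)
```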


PLoS ONE ◽  
2021 ◽  
Vol 16 (1) ◽  
pp. e0244316
Author(s):  
Mukhtar M. Salah ◽  
Essam A. Ahmed ◽  
Ziyad A. Alhussain ◽  
Hanan Haj Ahmed ◽  
M. El-Morshedy ◽  
...  

This paper describes a method for computing estimates of the location parameter μ > 0 and scale parameter λ > 0, with fixed shape parameter α, of the alpha power exponential distribution (APED) under type-II hybrid censored (T-IIHC) samples. We compute the maximum likelihood estimates (MLEs) of (μ, λ) by applying the Newton-Raphson method (NRM) and the expectation-maximization algorithm (EMA). In addition, the hazard and reliability functions are estimated by applying the invariance property of MLEs. We calculate the Fisher information matrix (FIM) by applying the missing information rule, which is important in finding the asymptotic confidence intervals. Finally, the different proposed estimation methods are compared in simulation studies, and a simulated example and a real data example are analyzed to illustrate our estimation methods.
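
The sketch below shows a generic Newton-Raphson iteration for a one-parameter ML problem, illustrating the NRM step; the APED likelihood under T-IIHC censoring is considerably more involved and is not reproduced here.

```python
# Generic Newton-Raphson sketch for a one-parameter ML problem.
import numpy as np

def newton_raphson(score, hessian, theta0, tol=1e-8, max_iter=100):
    theta = theta0
    for _ in range(max_iter):
        step = score(theta) / hessian(theta)
        theta -= step
        if abs(step) < tol:
            break
    return theta

# Worked example: ML for an exponential rate, where the score and
# observed information are available in closed form.
x = np.random.default_rng(6).exponential(scale=0.5, size=50)
score = lambda t: len(x) / t - np.sum(x)  # d logL / d theta
hess = lambda t: -len(x) / t ** 2         # d^2 logL / d theta^2
print("theta_hat =", newton_raphson(score, hess, theta0=1.0))
```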


Mathematics ◽  
2020 ◽  
Vol 8 (10) ◽  
pp. 1648
Author(s):  
Mohamed Aboraya ◽  
Haitham M. Yousof ◽  
G.G. Hamedani ◽  
Mohamed Ibrahim

In this work, we propose and study a new family of discrete distributions. Many useful mathematical properties, such as ordinary moments, the moment generating function, the cumulant generating function, the probability generating function, central moments, and the dispersion index, are derived. Some special discrete versions are presented, and a certain special case is discussed graphically and numerically. The hazard rate function of the new class can be "decreasing", "upside down", "increasing", and "decreasing-constant-increasing (U-shape)". Some useful characterization results, based on the conditional expectation of a certain function of the random variable and in terms of the hazard function, are derived and presented. Bayesian and non-Bayesian methods of estimation are considered, and the Bayesian estimation procedure under the squared error loss function is discussed. Markov chain Monte Carlo simulation studies comparing the non-Bayesian and Bayesian estimates are performed using the Gibbs sampler and the Metropolis–Hastings algorithm. Four applications to real data sets are employed to compare the Bayesian and non-Bayesian methods and to illustrate the importance and flexibility of the new discrete class.
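
Below is a minimal random-walk Metropolis-Hastings sketch for posterior simulation, reporting the posterior mean as the Bayes estimate under squared error loss. A Poisson likelihood with an exponential(1) prior is used as a placeholder for the paper's discrete-family likelihood.

```python
# Minimal random-walk Metropolis-Hastings sketch; the target posterior is
# a placeholder (Poisson likelihood, exponential(1) prior), not the
# paper's discrete family.
import numpy as np

rng = np.random.default_rng(7)
data = rng.poisson(3.0, size=100)  # placeholder discrete data

def log_posterior(theta):
    if theta <= 0:
        return -np.inf
    return np.sum(data) * np.log(theta) - len(data) * theta - theta

theta, chain = 1.0, []
for _ in range(10000):
    proposal = theta + rng.normal(scale=0.2)
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(theta):
        theta = proposal
    chain.append(theta)
print("posterior mean:", np.mean(chain[2000:]))  # discard burn-in
```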


Sensors ◽  
2019 ◽  
Vol 19 (17) ◽  
pp. 3784 ◽  
Author(s):  
Jameel Malik ◽  
Ahmed Elhayek ◽  
Didier Stricker

Hand shape and pose recovery is essential for many computer vision applications, such as animation of a personalized hand mesh in a virtual environment. Although there are many hand pose estimation methods, only a few deep learning based algorithms target 3D hand shape and pose from a single RGB or depth image. Jointly estimating hand shape and pose is very challenging because none of the existing real benchmarks provides ground truth hand shape. For this reason, we propose a novel weakly-supervised approach for 3D hand shape and pose recovery (named WHSP-Net) from a single depth image by learning shapes from unlabeled real data and labeled synthetic data. To this end, we propose a novel framework which consists of three novel components. The first is a Convolutional Neural Network (CNN) based deep network which produces 3D joint positions from learned 3D bone vectors using a new layer. The second is a novel shape decoder that recovers a dense 3D hand mesh from sparse joints. The third is a novel depth synthesizer which reconstructs the 2D depth image from the 3D hand mesh. The whole pipeline is fine-tuned in an end-to-end manner. We demonstrate that our approach recovers reasonable hand shapes from real-world datasets as well as from a live depth camera stream in real time. Our algorithm outperforms state-of-the-art methods that output more than the joint positions and shows competitive performance on the 3D pose estimation task.
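
The skeleton below illustrates the three-stage structure of such a pipeline (depth to joints, joints to mesh, mesh to reconstructed depth) in PyTorch. All layer sizes and module names are placeholders, not the authors' architecture.

```python
# Illustrative PyTorch skeleton of a three-stage pipeline in the spirit of
# WHSP-Net: depth -> joints, joints -> mesh, mesh -> reconstructed depth.
import torch
import torch.nn as nn

class HandPipeline(nn.Module):
    def __init__(self, n_joints=21, n_verts=778):
        super().__init__()
        self.encoder = nn.Sequential(          # depth image -> 3D joints
            nn.Conv2d(1, 16, 3, stride=2), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(n_joints * 3))
        self.shape_decoder = nn.Sequential(    # sparse joints -> dense mesh
            nn.Linear(n_joints * 3, 256), nn.ReLU(),
            nn.Linear(256, n_verts * 3))
        self.depth_synthesizer = nn.Linear(n_verts * 3, 64 * 64)  # mesh -> depth

    def forward(self, depth):
        joints = self.encoder(depth)
        mesh = self.shape_decoder(joints)
        recon = self.depth_synthesizer(mesh).view(-1, 1, 64, 64)
        return joints, mesh, recon

joints, mesh, recon = HandPipeline()(torch.randn(2, 1, 64, 64))
```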


Stats ◽  
2020 ◽  
Vol 3 (2) ◽  
pp. 120-136
Author(s):  
Ersin Yılmaz ◽  
Syed Ejaz Ahmed ◽  
Dursun Aydın

This paper aims to solve the problem of fitting a nonparametric regression function with right-censored data. In general, issues of censorship in the response variable are solved by synthetic data transformation based on the Kaplan–Meier estimator in the literature. In the context of synthetic data, there have been different studies on the estimation of right-censored nonparametric regression models based on smoothing splines, regression splines, kernel smoothing, local polynomials, and so on. It should be emphasized that synthetic data transformation manipulates the observations because it assigns zero values to censored data points and increases the size of the observations. Thus, an irregularly distributed dataset is obtained. We claim that adaptive spline (A-spline) regression has the potential to deal with this irregular dataset more easily than the smoothing techniques mentioned here, due to the freedom to determine the degree of the spline, as well as the number and location of the knots. The theoretical properties of A-splines with synthetic data are detailed in this paper. Additionally, we support our claim with numerical studies, including a simulation study and a real-world data example.
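
For concreteness, here is a minimal NumPy sketch of the Kaplan–Meier-based synthetic data transformation described above (a Koul–Susarla–Van Ryzin type transformation is assumed, with naive tie handling; the function names are illustrative). Censored points become zero and uncensored ones are inflated, producing exactly the irregular dataset discussed in the abstract.

```python
# Hedged sketch: Y*_i = delta_i * Y_i / G_hat(Y_i), where G_hat is the
# Kaplan-Meier estimate of P(censoring time > y) and delta_i is the
# event indicator (1 = observed, 0 = censored).
import numpy as np

def km_censoring_survival(y, delta):
    """Kaplan-Meier survival of the censoring variable at each observation
    (the censoring indicator is 1 - delta)."""
    order = np.argsort(y)
    c_sorted = 1 - delta[order]                 # censoring events, time-ordered
    at_risk = len(y) - np.arange(len(y))
    surv_sorted = np.cumprod(1.0 - c_sorted / at_risk)
    surv = np.empty(len(y))
    surv[order] = surv_sorted
    return surv

def synthetic_response(y, delta):
    g = km_censoring_survival(y, delta)
    return np.where(delta == 1, y / np.maximum(g, 1e-12), 0.0)
```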

