A Factorization of Least-Squares Projection Schemes for Ill-Posed Problems

2020 ◽ Vol 20 (4) ◽ pp. 783-798
Author(s): Shukai Du, Nailin Du

Abstract: We give a factorization formula for least-squares projection schemes, from which new convergence conditions, together with formulas estimating the rate of convergence, can be derived. We prove that the convergence of the method (including the rate of convergence) is completely determined by the principal angles between $T^{\dagger}T(X_n)$ and $T^{*}T(X_n)$, and the principal angles between $X_n \cap (\mathcal{N}(T) \cap X_n)^{\perp}$ and $(\mathcal{N}(T) + X_n) \cap \mathcal{N}(T)^{\perp}$. At the end, we consider several specific cases and examples to further illustrate our theorems.
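The quantities driving this result can be computed numerically. The following is a minimal sketch, not the authors' factorization: it evaluates the principal angles between the column spans of two illustrative matrices A and B (arbitrary stand-ins for bases of the subspaces named above) with SciPy's subspace_angles.

```python
# A minimal numerical sketch (not the authors' factorization): principal angles
# between two subspaces can be computed from the singular values of Q1.T @ Q2,
# where Q1, Q2 are orthonormal bases; scipy.linalg.subspace_angles does this.
# A and B below are arbitrary stand-ins for bases of the subspaces named above.
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))   # columns span a 3-dimensional subspace of R^6
B = rng.standard_normal((6, 3))   # columns span another 3-dimensional subspace

angles = subspace_angles(A, B)    # principal angles in radians, largest first
print("principal angles:", angles)
print("largest angle   :", angles[0])   # convergence conditions keep this bounded away from pi/2
```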

2009 ◽ Vol 25 (6) ◽ pp. 1682-1715
Author(s): Peter C. B. Phillips, Tassos Magdalinos

It is well known that unit root limit distributions are sensitive to initial conditions in the distant past. If the distant past initialization is extended to the infinite past, the initial condition dominates the limit theory, producing a faster rate of convergence, a limiting Cauchy distribution for the least squares coefficient, and a limit normal distribution for the t-ratio. This amounts to the tail of the unit root process wagging the dog of the unit root limit theory. These simple results apply in the case of a univariate autoregression with no intercept. The limit theory for vector unit root regression and cointegrating regression is affected but is no longer dominated by infinite past initializations. The latter contribute to the limiting distribution of the least squares estimator and produce a singularity in the limit theory, but do not change the principal rate of convergence. Usual cointegrating regression theory and inference continue to hold in spite of the degeneracy in the limit theory and are therefore robust to initial conditions that extend to the infinite past.
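The effect of a dominant initial condition can be simulated. Below is a minimal Monte Carlo sketch under an assumed design, not the paper's exact setup: a driftless unit-root AR(1) whose initial value accumulates a long pre-sample of i.i.d. shocks, with the abstract's claim of an asymptotically normal t-ratio eyeballed from the simulated draws.

```python
# A minimal Monte Carlo sketch under an assumed design (not the paper's exact
# setup): a driftless unit-root AR(1) y_t = y_{t-1} + u_t whose initial value
# accumulates `pre` i.i.d. N(0,1) shocks from the distant past.
import numpy as np

rng = np.random.default_rng(1)
n, pre, reps = 400, 20_000, 2_000     # sample size, pre-sample length, replications

t_ratios = np.empty(reps)
for r in range(reps):
    # y_0 ~ N(0, pre): the distribution of a length-`pre` partial sum of N(0,1) shocks.
    y0 = np.sqrt(pre) * rng.standard_normal()
    y = np.concatenate(([y0], y0 + np.cumsum(rng.standard_normal(n))))
    ylag, ycur = y[:-1], y[1:]
    rho_hat = (ylag @ ycur) / (ylag @ ylag)              # least-squares coefficient
    resid = ycur - rho_hat * ylag
    se = np.sqrt((resid @ resid) / (n - 1) / (ylag @ ylag))
    t_ratios[r] = (rho_hat - 1.0) / se

print("mean/std of t-ratios:", t_ratios.mean(), t_ratios.std())  # roughly 0 and 1 if normal
```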


1993 ◽ Vol 9 (4) ◽ pp. 633-648
Author(s): Charles E. Bates, Halbert White

We give a straightforward condition sufficient for determining the minimum asymptotic variance estimator in certain classes of estimators relevant to econometrics. These classes are relatively broad, as they include extremum estimation with smooth or nonsmooth objective functions; also, the rate of convergence to the asymptotic distribution is not required to be $n^{-1/2}$. We present examples illustrating the content of our result. In particular, we apply our result to a class of weighted Huber estimators, and obtain, among other things, analogs of the generalized least-squares estimator for least $L_p$-estimation, $1 \le p < \infty$.
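As a point of reference for the least-$L_p$ objective mentioned above, here is a minimal sketch, with assumed names and synthetic data, of plain $L_p$ regression computed by iteratively reweighted least squares; it illustrates the objective only, not the paper's weighted Huber construction or its efficiency result.

```python
# A minimal sketch of plain L_p regression via iteratively reweighted least
# squares (IRLS); this illustrates the L_p objective only, not the paper's
# weighted Huber / minimum-asymptotic-variance construction.
import numpy as np

def lp_regression(X, y, p=1.5, iters=50, eps=1e-8):
    """Minimize sum_i |y_i - x_i' beta|^p by IRLS."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]       # start from the p = 2 solution
    for _ in range(iters):
        r = y - X @ beta
        w = np.maximum(np.abs(r), eps) ** (p - 2)     # IRLS weights |r|^(p-2), floored at eps
        WX = X * w[:, None]                           # rows of X scaled by the weights
        beta = np.linalg.solve(X.T @ WX, WX.T @ y)    # weighted least-squares step
    return beta

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(200), rng.standard_normal(200)])
y = X @ np.array([1.0, 2.0]) + rng.standard_t(df=3, size=200)   # heavy-tailed noise
print(lp_regression(X, y, p=1.5))
```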


2016 ◽ Vol 16 (2) ◽ pp. 257-276
Author(s): Stefan Kindermann

Abstract: We consider the discretization of least-squares problems for linear ill-posed operator equations in Hilbert spaces. The main subject of this article concerns conditions for convergence of the associated discretized minimum-norm least-squares solution to the exact solution using exact attainable data. The two cases of global convergence (convergence for all exact solutions) and local convergence (convergence for a specific exact solution) are investigated. We review the existing results and prove new equivalent conditions when the discretized solution always converges to the exact solution. An important tool is to recognize the discrete solution operator as an oblique projection. Hence, global convergence can be characterized by certain subspaces having uniformly bounded angles. We furthermore derive practically useful conditions when this holds and put them into the context of known results. For local convergence, we generalize results on the characterization of weak or strong convergence and state some new sufficient conditions. We furthermore provide an example of a bounded sequence of discretized solutions which does not converge at all, not even weakly.
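To make the discretized-solution viewpoint tangible, here is a minimal numerical sketch under assumed choices (a Hilbert-type matrix as the ill-conditioned "operator", coordinate subspaces as the discretization spaces $X_n$); it is not the article's construction. Whether the errors shrink as $n$ grows mirrors the boundedness and angle conditions the article analyzes, with floating-point round-off standing in for perturbations at the largest $n$.

```python
# A minimal sketch under assumed choices (Hilbert-type matrix as the
# ill-conditioned "operator", coordinate subspaces as X_n); not the article's
# construction. The discretized minimum-norm least-squares solution over
# X_n = span(V_n) is x_n = V_n @ pinv(T @ V_n) @ y, i.e. the discretized
# solution operator R_n applied to the (exact, attainable) data.
import numpy as np

m = 60
i = np.arange(m)
T = 1.0 / (i[:, None] + i[None, :] + 1.0)          # Hilbert matrix, severely ill-conditioned
x_true = np.sin(np.linspace(0.0, np.pi, m))
y = T @ x_true                                     # exact, attainable data

for n in (5, 10, 20, 40):
    Vn = np.eye(m)[:, :n]                          # discretization subspace X_n
    Rn = Vn @ np.linalg.pinv(T @ Vn)               # discretized solution operator
    xn = Rn @ y                                    # discretized minimum-norm LS solution
    print(n, np.linalg.norm(Rn, 2), np.linalg.norm(xn - x_true))
```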

