The Estimation of Nonparametric Functions in a Hilbert Space

1985 ◽  
Vol 1 (1) ◽  
pp. 7-26 ◽  
Author(s):  
A. R. Bergstrom

This paper is concerned with the estimation of a nonlinear regression function which is not assumed to belong to a prespecified parametric family of functions. An orthogonal series estimator is proposed, and Hilbert space methods are used in the derivation of its properties and the proof of several convergence theorems. One of the main objectives of the paper is to provide the theoretical basis for a practical stopping rule which can be used for determining the number of Fourier coefficients to be estimated from a given sample.
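The core construction — estimating each Fourier coefficient by a sample average and truncating the series — can be sketched as follows. The cosine basis, sample size, and fixed truncation point here are illustrative choices, not Bergstrom's; his paper derives a data-driven stopping rule for choosing the number of terms, which this sketch replaces with a fixed `n_terms`.

```python
import numpy as np

def orthogonal_series_estimate(x, y, n_terms):
    """Orthogonal series regression estimate of f in y = f(x) + noise,
    with x uniform on [0, 1] and the cosine basis
    phi_0(x) = 1, phi_j(x) = sqrt(2) * cos(j * pi * x).

    Each Fourier coefficient of f is estimated by the sample mean of
    y * phi_j(x); the truncation point n_terms controls the
    bias/variance trade-off that a stopping rule has to resolve.
    """
    def phi(j, t):
        return np.ones_like(t) if j == 0 else np.sqrt(2.0) * np.cos(j * np.pi * t)

    coeffs = [np.mean(y * phi(j, x)) for j in range(n_terms)]
    return lambda t: sum(c * phi(j, t) for j, c in enumerate(coeffs))

# Illustration on a regression function lying in the span of the basis.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 5000)
f = lambda t: 1.0 + np.cos(np.pi * t) - 0.5 * np.cos(3 * np.pi * t)
y = f(x) + 0.1 * rng.normal(size=x.shape)
f_hat = orthogonal_series_estimate(x, y, n_terms=8)
```

With a uniform design, the sample mean of y·φ_j(x) is a consistent estimate of the j-th Fourier coefficient of f, which is what makes the truncated series a reasonable estimator.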

Mathematics ◽  
2021 ◽  
Vol 9 (4) ◽  
pp. 389
Author(s):  
Jeong-Gyoo Kim

Fourier series are a well-established subject, widely applied in various fields. There is, however, much less work on double Fourier coefficients in relation to spaces of general double sequences. We treat the space of double Fourier coefficients as an abstract sequence space and examine its relationships to spaces of general double sequences: the p-power summable sequences for p = 1, 2, and the Hilbert space of double sequences. Using uniform convergence in the sense of a Cesàro mean, we verify the inclusion relationships between the four spaces of double sequences; they are nested as proper subsets. The completions of two of these spaces are found to be identical and equal to the largest one. We prove that the two-parameter Wiener space is isomorphic to the space of Cesàro means associated with double Fourier coefficients. Furthermore, we establish that the Hilbert space of double sequences is an abstract Wiener space. We believe that the relationships between sequence spaces verified at an intermediate stage of this paper provide a basis for understanding the structure of those spaces, and we expect them to be developed further, as has been done for spaces of single-indexed sequences.
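A minimal numerical illustration of the rectangular partial sums and the (C, 1, 1) Cesàro mean of a double sequence (the geometric sequence chosen here is an arbitrary summable example, not one from the paper):

```python
import numpy as np

def cesaro_mean_2d(a):
    """(C,1,1) Cesaro mean of a truncated double sequence a[j, k]:
    the average of the rectangular partial sums
    s[m, n] = sum over j <= m, k <= n of a[j, k], taken over all m, n."""
    s = np.cumsum(np.cumsum(a, axis=0), axis=1)
    return s.mean()

# For a summable double sequence the rectangular partial sums converge,
# so the Cesaro mean approaches the same limit (here the sum is 1 * 1 = 1).
x = 0.5 ** np.arange(1, 40)          # geometric sequence summing to ~1
sigma = cesaro_mean_2d(np.outer(x, x))
```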


Author(s):  
Fabio Sigrist

We introduce a novel boosting algorithm called ‘KTBoost’ which combines kernel boosting and tree boosting. In each boosting iteration, the algorithm adds either a regression tree or reproducing kernel Hilbert space (RKHS) regression function to the ensemble of base learners. Intuitively, the idea is that discontinuous trees and continuous RKHS regression functions complement each other, and that this combination allows for better learning of functions that have parts with varying degrees of regularity such as discontinuities and smooth parts. We empirically show that KTBoost significantly outperforms both tree and kernel boosting in terms of predictive accuracy in a comparison on a wide array of data sets.
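The idea can be sketched with a toy residual-fitting loop that, in each round, fits both a depth-one tree (stump) and a Gaussian-kernel ridge regressor to the current residuals and keeps whichever fits better. This is a simplified squared-loss caricature, not the actual KTBoost algorithm (which uses penalized RKHS regression and Newton as well as gradient updates); all parameter values below are illustrative.

```python
import numpy as np

def fit_stump(x, r):
    """Depth-1 regression tree (stump) on 1-d inputs: best single split."""
    best_sse, best_split = np.inf, None
    for t in np.quantile(x, np.linspace(0.1, 0.9, 17)):
        left, right = r[x <= t], r[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_sse, best_split = sse, (t, left.mean(), right.mean())
    t, lo, hi = best_split
    return lambda z: np.where(z <= t, lo, hi)

def fit_kernel(x, r, gamma=20.0, lam=1.0):
    """RKHS base learner: Gaussian-kernel ridge regression on residuals."""
    K = np.exp(-gamma * (x[:, None] - x[None, :]) ** 2)
    alpha = np.linalg.solve(K + lam * np.eye(len(x)), r)
    return lambda z: np.exp(-gamma * (z[:, None] - x[None, :]) ** 2) @ alpha

def ktboost_sketch(x, y, n_iter=30, nu=0.5):
    """Each round: fit both base learners to the residuals, greedily keep
    the one with the smaller training error, add it with shrinkage nu."""
    offset = y.mean()
    F = np.full_like(y, offset)
    learners = []
    for _ in range(n_iter):
        r = y - F
        candidates = [fit_stump(x, r), fit_kernel(x, r)]
        h = min(candidates, key=lambda g: ((r - g(x)) ** 2).sum())
        F = F + nu * h(x)
        learners.append(h)
    return lambda z: offset + nu * sum(g(z) for g in learners)
```

On a target with both a smooth part and a jump (e.g. sin(4x) plus a step), the stumps tend to pick up the discontinuity while the kernel learner handles the smooth part, which is the complementarity the abstract describes.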


2013 ◽  
Vol 2013 ◽  
pp. 1-7
Author(s):  
Bin-Chao Deng ◽  
Tong Chen

Let H be a real Hilbert space, let T1, T2 : H → H be k1- and k2-strictly pseudononspreading mappings, and let {αn} and {βn} be two real sequences in (0, 1). For a given x0 ∈ H, the sequence {xn} is generated iteratively by xn+1 = βn xn + (1 − βn) Tw1 [αn γ f(xn) + (I − μ αn B) Tw2 xn] for all n ∈ ℕ, where Twi = (1 − wi) I + wi Ti for i = 1, 2, and B : H → H is strongly monotone and Lipschitzian. Under some mild conditions on the parameters αn and βn, we prove that the sequence {xn} converges strongly to a point of Fix(T1) ∩ Fix(T2), the set of common fixed points of the pair of strictly pseudononspreading mappings T1 and T2.


2013 ◽  
Vol 2013 ◽  
pp. 1-10 ◽  
Author(s):  
C. E. Chidume ◽  
C. O. Chidume ◽  
N. Djitté ◽  
M. S. Minjibir

Let K be a nonempty, closed, and convex subset of a real Hilbert space H. Suppose that T : K → 2^K is a multivalued strictly pseudocontractive mapping such that F(T) ≠ ∅. A Krasnoselskii-type iteration sequence {xn} is constructed and shown to be an approximate fixed point sequence of T; that is, lim(n→∞) d(xn, T xn) = 0 holds. Convergence theorems are also proved under appropriate additional conditions.
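The Krasnoselskii iteration itself is simple to state. A single-valued numerical illustration (the paper's setting is multivalued strictly pseudocontractive, which this sketch does not capture) uses a nonexpansive map on R² and shows the gap ‖xn − T xn‖ tending to 0:

```python
import numpy as np

def krasnoselskii(T, x0, lam=0.5, n_iter=200):
    """Krasnoselskii iteration x_{n+1} = (1 - lam) x_n + lam T(x_n),
    recording the fixed-point gap ||x_n - T(x_n)|| at each step."""
    x = np.asarray(x0, dtype=float)
    gaps = []
    for _ in range(n_iter):
        Tx = T(x)
        gaps.append(np.linalg.norm(x - Tx))
        x = (1 - lam) * x + lam * Tx
    return x, gaps

# Nonexpansive map on R^2: rotation by 90 degrees (unique fixed point: 0).
# Plain Picard iteration x_{n+1} = T(x_n) would orbit forever here, while
# the averaged iterate converges and the gap goes to zero.
R = np.array([[0.0, -1.0], [1.0, 0.0]])
T = lambda v: R @ v
x_final, gaps = krasnoselskii(T, [1.0, 0.0])
```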


2014 ◽  
Vol 47 (2) ◽  
Author(s):  
P. Cholamjiak ◽  
W. Cholamjiak ◽  
S. Suantai

In this paper, strong convergence theorems obtained by the viscosity approximation method for nonexpansive multi-valued nonself mappings and equilibrium problems are established under suitable conditions in a Hilbert space. The obtained results extend and improve corresponding results in the literature.


2017 ◽  
Vol 34 (4) ◽  
pp. 754-789 ◽  
Author(s):  
Chaohua Dong ◽  
Jiti Gao

This paper proposes two simple and new specification tests based on the use of an orthogonal series for a considerable class of bivariate nonlinearly cointegrated time series models with endogeneity and nonstationarity. The first test is proposed for the case where the regression function is integrable, which fills a gap in the literature, and the second test, which nests the first one, deals with regression functions in a quite large function space that is sufficient for both theoretical and practical use. As a starting point of our asymptotic theory, the first test is studied initially and then the theory is extended to the second test. Endogeneity in two general forms is allowed in the models to be tested. The finite sample performance of the tests is examined through several simulated examples. Our experience generally shows that the proposed tests are easily implementable and also have stable sizes and good power properties even when the ‘distance’ between the null hypothesis and a sequence of local alternatives is asymptotically negligible.


Author(s):  
David Krieg ◽  
Mario Ullrich

We study the L2-approximation of functions from a Hilbert space and compare the sampling numbers with the approximation numbers. The sampling number e_n is the minimal worst-case error that can be achieved with n function values, whereas the approximation number a_n is the minimal worst-case error that can be achieved with n pieces of arbitrary linear information (like derivatives or Fourier coefficients). We show that

e_n ≲ √( (1/k_n) Σ_{j ≥ k_n} a_j² ),  where k_n ≍ n / log(n).

This proves that the sampling numbers decay with the same polynomial rate as the approximation numbers, and therefore that function values are basically as powerful as arbitrary linear information if the approximation numbers are square-summable. Our result applies, in particular, to Sobolev spaces H^s_mix(T^d) with dominating mixed smoothness s > 1/2 and dimension d ∈ ℕ, and we obtain

e_n ≲ n^(−s) log^(sd)(n).

For d > 2s + 1, this improves upon all previous bounds and disproves the prevalent conjecture that Smolyak's (sparse grid) algorithm is optimal.


2013 ◽  
Vol 29 (1) ◽  
pp. 9-18
Author(s):  
Vasile Berinde

The aim of this paper is to prove some convergence theorems for a general fixed point iterative method defined by means of the new concept of admissible perturbation of a nonlinear operator, introduced in [Rus, I. A., An abstract point of view on iterative approximation of fixed points, Fixed Point Theory 13 (2012), No. 1, 179–192]. The obtained convergence theorems extend and unify some fundamental results in the iterative approximation of fixed points due to Petryshyn [Petryshyn, W. V., Construction of fixed points of demicompact mappings in Hilbert space, J. Math. Anal. Appl. 14 (1966), 276–284] and Browder and Petryshyn [Browder, F. E. and Petryshyn, W. V., Construction of fixed points of nonlinear mappings in Hilbert space, J. Math. Anal. Appl. 20 (1967), No. 2, 197–228].


2017 ◽  
Vol 12 (12) ◽  
pp. 6845-6851
Author(s):  
Inaam Mohammed Ali Hadi ◽  
Salwa Salman Abd

In this paper, we give a type of iterative scheme for a sequence of nonexpansive mappings and study the strong convergence of these schemes in a real Hilbert space to a common fixed point, which is also a solution of a variational inequality. Some consequences of these results in convex analysis are also given.

