dominating mixed smoothness
Recently Published Documents


TOTAL DOCUMENTS: 22 (FIVE YEARS: 2)
H-INDEX: 6 (FIVE YEARS: 0)



2021 · Vol 15 (3) · Author(s): Felix Hummel

Abstract: The sample paths of white noise are proved to be elements of certain Besov spaces with dominating mixed smoothness. Unlike in isotropic spaces, here the regularity does not deteriorate as the space dimension increases. Consequently, white noise is actually much smoother than the known sharp regularity results in isotropic spaces suggest. An application of our techniques yields new results for the regularity of solutions of the Poisson and heat equations on the half-space with boundary noise. The main novelty is the flexible treatment of the interplay between the singularity at the boundary and the smoothness in the tangential, normal and time directions.
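For orientation, a schematic reading of this comparison (an illustration based on the standard isotropic result, not a formula quoted from the paper; the exact function spaces and fine indices are those given there): in the isotropic Besov scale the sharp regularity of white noise on a d-dimensional domain deteriorates linearly in the dimension, roughly $$W \in B^{s}_{p,q} \ \text{(locally)} \quad \text{only for } s < -\tfrac{d}{2},$$ whereas in the scale of dominating mixed smoothness the regularity is measured per coordinate direction and stays at the one-dimensional level, roughly $$W \in S^{r}_{p,q}B \ \text{(locally)} \quad \text{for } r < -\tfrac{1}{2}, \text{ independently of } d.$$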



Author(s): David Krieg, Mario Ullrich

Abstract: We study the $$L_2$$-approximation of functions from a Hilbert space and compare the sampling numbers with the approximation numbers. The sampling number $$e_n$$ is the minimal worst-case error that can be achieved with n function values, whereas the approximation number $$a_n$$ is the minimal worst-case error that can be achieved with n pieces of arbitrary linear information (like derivatives or Fourier coefficients). We show that $$e_n \,\lesssim\, \sqrt{\frac{1}{k_n} \sum_{j\ge k_n} a_j^2},$$ where $$k_n \asymp n/\log(n)$$. This proves that the sampling numbers decay with the same polynomial rate as the approximation numbers and therefore that function values are basically as powerful as arbitrary linear information if the approximation numbers are square-summable. Our result applies, in particular, to Sobolev spaces $$H^s_{\mathrm{mix}}(\mathbb{T}^d)$$ with dominating mixed smoothness $$s>1/2$$ and dimension $$d\in \mathbb{N}$$, and we obtain $$e_n \,\lesssim\, n^{-s} \log^{sd}(n).$$ For $$d>2s+1$$, this improves upon all previous bounds and disproves the prevalent conjecture that Smolyak's (sparse grid) algorithm is optimal.
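A brief sketch of how the Sobolev rate follows from the general bound, assuming the classical asymptotics of the approximation numbers for this embedding, $$a_j \asymp j^{-s} (\log j)^{s(d-1)} \quad \text{for } H^s_{\mathrm{mix}}(\mathbb{T}^d) \hookrightarrow L_2, \ s > 1/2$$ (a standard result; constants depending on s and d are suppressed): since $$\sum_{j \ge k_n} a_j^2 \asymp \sum_{j \ge k_n} j^{-2s} (\log j)^{2s(d-1)} \asymp k_n^{1-2s} (\log k_n)^{2s(d-1)},$$ the bound gives $$e_n \,\lesssim\, \sqrt{\frac{1}{k_n} \sum_{j \ge k_n} a_j^2} \,\asymp\, k_n^{-s} (\log k_n)^{s(d-1)} \,\asymp\, \Big(\frac{n}{\log n}\Big)^{-s} (\log n)^{s(d-1)} \,=\, n^{-s} (\log n)^{sd},$$ which is exactly the stated rate.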





2017 · Vol 60 (11) · pp. 2241-2262 · Author(s): Van Kien Nguyen, Winfried Sickel




Mathematika · 2017 · Vol 63 (3) · pp. 863-894 · Author(s): Josef Dick, Aicke Hinrichs, Lev Markhasin, Friedrich Pillichshammer

