Best m-Term Approximation and Sobolev–Besov Spaces of Dominating Mixed Smoothness—the Case of Compact Embeddings

2012 · Vol 36 (1) · pp. 1-51
Author(s): Markus Hansen, Winfried Sickel

2017 · Vol 60 (11) · pp. 2241-2262
Author(s): Van Kien Nguyen, Winfried Sickel

2011 · Vol 18 (3) · pp. 549-575
Author(s): Cornelia Schneider

Abstract First we compute the trace space of Besov spaces – characterized via atomic decompositions – on fractals Γ, for parameters 0 < p < ∞, 0 < q ≤ min(1, p) and s = (n − d)/p. New Besov spaces on fractals are defined via traces for 0 < p, q ≤ ∞, s ≥ (n − d)/p, and some embedding assertions are established. We conclude by studying the compactness of the trace operator TrΓ, giving sharp estimates for entropy and approximation numbers of compact embeddings between Besov spaces. Our results on Besov spaces remain valid for the classical spaces defined via differences. The trace results are also used to study traces in Triebel–Lizorkin spaces.


2009 · Vol 16 (4) · pp. 667-682
Author(s): Markus Hansen, Jan Vybíral

Abstract We give a proof of the Jawerth embedding for function spaces with dominating mixed smoothness of Besov and Triebel–Lizorkin type, where 0 < p0 < p1 ≤ ∞, 0 < q0, q1 ≤ ∞ and the smoothness parameters satisfy t0 − 1/p0 = t1 − 1/p1. If p1 < ∞, we also prove the Franke embedding. Our main tools are discretization by a wavelet isomorphism and multivariate rearrangements.


2002 · Vol 9 (3) · pp. 567-590
Author(s): Dachun Yang

Abstract The author first establishes frame characterizations of Besov and Triebel–Lizorkin spaces on spaces of homogeneous type. As applications, these characterizations yield estimates of entropy numbers for compact embeddings between Besov spaces and between Triebel–Lizorkin spaces. Moreover, some real interpolation theorems on these spaces are established using the frame characterizations and the abstract interpolation method.


Author(s): David Krieg, Mario Ullrich

Abstract We study the L_2-approximation of functions from a Hilbert space and compare the sampling numbers with the approximation numbers. The sampling number e_n is the minimal worst-case error that can be achieved with n function values, whereas the approximation number a_n is the minimal worst-case error that can be achieved with n pieces of arbitrary linear information (like derivatives or Fourier coefficients). We show that
$$e_n \,\lesssim\, \sqrt{\frac{1}{k_n} \sum_{j \ge k_n} a_j^2},$$
where k_n ≍ n/log(n). This proves that the sampling numbers decay with the same polynomial rate as the approximation numbers, and therefore that function values are basically as powerful as arbitrary linear information whenever the approximation numbers are square-summable. Our result applies, in particular, to Sobolev spaces H^s_mix(𝕋^d) with dominating mixed smoothness s > 1/2 and dimension d ∈ ℕ, and we obtain
$$e_n \,\lesssim\, n^{-s} \log^{sd}(n).$$
For d > 2s + 1, this improves upon all previous bounds and disproves the prevalent conjecture that Smolyak's (sparse grid) algorithm is optimal.
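The displayed bound can be illustrated numerically. The sketch below is an illustration under an assumed toy model, not code from the paper: it evaluates the right-hand side of the bound for the hypothetical sequence a_j = j^(−s), which is square-summable exactly when s > 1/2. For s = 1 the tail sum behaves like 1/k_n, so the bound decays like 1/k_n ≍ log(n)/n, i.e. with the same polynomial rate as the a_j themselves.

```python
import math

# Illustration (toy model, not from the paper): evaluate the bound
#   e_n <= C * sqrt( (1/k_n) * sum_{j >= k_n} a_j^2 ),  k_n ~ n / log(n),
# for the assumed model sequence a_j = j**(-s), square-summable for s > 1/2.

def bound(n, s, c=1.0, tail=10**6):
    """Right-hand side of the Krieg-Ullrich-type bound for a_j = j**(-s)."""
    k_n = max(1, int(c * n / math.log(n)))          # k_n ~ n / log(n)
    tail_sum = sum(j ** (-2 * s) for j in range(k_n, tail))
    return math.sqrt(tail_sum / k_n)

# The bound decreases in n and, for s = 1, tracks 1/k_n ~ log(n)/n.
for n in (10**2, 10**3, 10**4):
    print(n, bound(n, s=1.0))
```

For s = 1 and n = 100 one gets k_n = 21 and a value close to 1/21, consistent with the heuristic that the tail sum of j^(−2) from k_n on is about 1/k_n.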

