Function Values Are Enough for $$L_2$$-Approximation
Abstract

We study the $$L_2$$-approximation of functions from a Hilbert space and compare the sampling numbers with the approximation numbers. The sampling number $$e_n$$ is the minimal worst-case error that can be achieved with n function values, whereas the approximation number $$a_n$$ is the minimal worst-case error that can be achieved with n pieces of arbitrary linear information (like derivatives or Fourier coefficients). We show that
$$\begin{aligned} e_n \,\lesssim\, \sqrt{\frac{1}{k_n} \sum_{j\ge k_n} a_j^2}, \end{aligned}$$
where $$k_n \asymp n/\log(n)$$. This proves that the sampling numbers decay with the same polynomial rate as the approximation numbers and, therefore, that function values are basically as powerful as arbitrary linear information if the approximation numbers are square-summable. Our result applies, in particular, to Sobolev spaces $$H^s_{\mathrm{mix}}(\mathbb{T}^d)$$ with dominating mixed smoothness $$s>1/2$$ and dimension $$d\in\mathbb{N}$$, and we obtain
$$\begin{aligned} e_n \,\lesssim\, n^{-s} \log^{sd}(n). \end{aligned}$$
For $$d>2s+1$$, this improves upon all previous bounds and disproves the prevalent conjecture that Smolyak's (sparse grid) algorithm is optimal.
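To see how the second bound follows from the first, here is a short sketch assuming the classical asymptotic $$a_n \asymp n^{-s} \log^{s(d-1)}(n)$$ for the approximation numbers of $$H^s_{\mathrm{mix}}(\mathbb{T}^d)$$, known from hyperbolic-cross approximation. Since $$s>1/2$$, the tail sum converges and can be evaluated directly:
$$\begin{aligned} \frac{1}{k_n}\sum_{j\ge k_n} a_j^2 \;\asymp\; \frac{1}{k_n}\sum_{j\ge k_n} j^{-2s}\log^{2s(d-1)}(j) \;\asymp\; k_n^{-2s}\log^{2s(d-1)}(k_n). \end{aligned}$$
Taking the square root and inserting $$k_n \asymp n/\log(n)$$ gives
$$\begin{aligned} e_n \;\lesssim\; k_n^{-s}\log^{s(d-1)}(k_n) \;\asymp\; \left(\frac{n}{\log n}\right)^{-s}\log^{s(d-1)}(n) \;=\; n^{-s}\log^{sd}(n). \end{aligned}$$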