$$L_2$$-norm sampling discretization and recovery of functions from RKHS with finite trace

Author(s):  
Moritz Moeller ◽  
Tino Ullrich

Abstract In this paper we study $$L_2$$-norm sampling discretization and sampling recovery of complex-valued functions in RKHS on $$D \subset \mathbb{R}^d$$ based on random function samples. We only assume finite trace of the kernel (Hilbert–Schmidt embedding into $$L_2$$) and provide several concrete estimates with precise constants for the corresponding worst-case errors. In general, our analysis does not need any additional assumptions and covers the case of non-Mercer kernels as well as non-separable RKHS. The fail probability is controlled and decays polynomially in n, the number of samples. Under the mild additional assumption of separability we observe improved rates of convergence related to the decay of the singular values. Our main tool is a spectral norm concentration inequality for infinite complex random matrices with independent rows, complementing earlier results by Rudelson, Mendelson, Pajor, Oliveira and Rauhut.
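
As a hedged numerical illustration of the setting (not the paper's estimator), the sketch below recovers a function lying in the RKHS of a Gaussian kernel on [0, 1] from i.i.d. random samples via regularized kernel least squares and reports the $$L_2$$ error; the kernel width, target function, and regularization level are all assumptions made for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def kernel(x, y, sigma=0.2):
    # Gaussian kernel; its integral operator on [0, 1] has finite trace.
    return np.exp(-(x[:, None] - y[None, :])**2 / (2 * sigma**2))

# Target: a fixed element of the RKHS (a finite kernel combination).
centers = np.array([0.2, 0.5, 0.8])
coeffs = np.array([1.0, -0.7, 0.4])
f = lambda x: kernel(np.atleast_1d(x), centers) @ coeffs

for n in [10, 40, 160]:
    X = rng.uniform(0, 1, n)                    # i.i.d. random sample points
    K = kernel(X, X)
    alpha = np.linalg.solve(K + 1e-8 * np.eye(n), f(X))  # regularized LS fit
    fhat = lambda t: kernel(t, X) @ alpha
    t = np.linspace(0, 1, 2000)                 # grid for the L2 error
    err = np.sqrt(np.mean((f(t) - fhat(t))**2)) # |[0,1]| = 1, so mean = L2^2
    print(f"n = {n:4d}   L2 error ~ {err:.2e}")
```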

1997 ◽  
Vol 119 (2) ◽  
pp. 243-250 ◽  
Author(s):  
C. R. Knospe ◽  
S. M. Tamer ◽  
S. J. Fedigan

Recent experimental results have demonstrated the effectiveness of adaptive open-loop control algorithms for the suppression of unbalance response on rotors supported in active magnetic bearings. Herein, tools for the analysis of stability and performance robustness of this algorithm with respect to structured uncertainty are derived. The stability and performance robustness analysis problems are shown to be readily solved using a novel application of structured singular values. An example problem is presented which demonstrates the efficacy of this approach in obtaining tight bounds on the stability margin and worst-case performance.
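
For readers unfamiliar with structured singular values, the sketch below shows the classical D-scaling upper bound, mu(M) <= min over positive diagonal D of sigma_max(D M D^{-1}), for a structure of scalar complex uncertainty blocks. The matrix M is a made-up example, not the authors' rotor/bearing model.

```python
import numpy as np
from scipy.optimize import minimize

def mu_upper_bound(M):
    """D-scaling upper bound on mu(M) for scalar complex uncertainty blocks."""
    n = M.shape[0]
    def scaled_norm(logd):
        d = np.exp(np.concatenate(([0.0], logd)))  # fix d[0]=1: D has a scaling freedom
        return np.linalg.norm((M * d[:, None]) / d[None, :], 2)  # sigma_max(D M D^-1)
    res = minimize(scaled_norm, np.zeros(n - 1), method="Nelder-Mead")
    return res.fun

M = np.array([[0.5, 2.0],
              [0.05, 0.4]])
print("sigma_max(M)       =", np.linalg.norm(M, 2))   # unscaled norm, looser
print("D-scaled mu bound  =", mu_upper_bound(M))      # tighter robustness bound
```

The gap between the two printed values is exactly why structured (rather than unstructured) singular values give tight stability-margin bounds.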


2018 ◽  
Vol 16 (8) ◽  
Author(s):  
Jestin Nordin ◽  
Andrew Charleson ◽  
Morten Gjerde

This paper discusses the use of tsunami modelling to refine the strategies used in coastal architectural and planning design work, in an effort to minimize future tsunami impacts on coastal buildings. The ability to recreate the characteristics of the 2004 Sumatran Tsunami waves and their impacts is the reason for using computer simulation as the main tool of this research project. The Cornell Multi-Grid Coupled Tsunami Model (COMCOT) programme has been chosen to generate a series of tsunami events onto a one-kilometre-square area of the Kuala Muda (north-west Peninsular Malaysia) coast. COMCOT is expected to help practitioners and researchers produce the best possible designs for this tsunami-threatened near-beach area. It is capable of simulating the entire lifespan of a tsunami, including the characteristics and behaviour of the waves once they inundate the design area. This creates an opportunity to better understand and evaluate the performance of proposed designs in order to achieve the most tsunami-resistant design. The 2004 Sumatran Tsunami waves are considered the worst-case scenario this area will experience. Therefore, the generated waves act upon proposed settlement patterns and buildings, which are iteratively modified to achieve minimum tsunami damage. COMCOT outputs are used to propose coastal architectural design strategies for present and future near-beach developments, especially on the north-western coast of Malaysia. The final Tsunami Responsive Architecture (TRA) design is intended to be culturally acceptable, and to be extended, with or without modification, to suit other coastal areas at risk of tsunami.
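
COMCOT itself solves the (non)linear shallow-water equations on nested grids; purely as an illustration of the kind of wave propagation such models compute, the sketch below integrates a minimal 1D linear shallow-water system on a staggered forward-backward grid. The domain, depth profile, and initial wave hump are assumptions for the demo, not data from the study.

```python
import numpy as np

g = 9.81
L, N = 50_000.0, 500                    # 50 km domain, 100 m cells
dx = L / N
h = np.linspace(4000.0, 10.0, N)        # depth shoaling toward the coast
dt = 0.5 * dx / np.sqrt(g * h.max())    # CFL-stable time step

eta = np.exp(-((np.arange(N) * dx - 10_000.0) / 2_000.0)**2)  # 1 m offshore hump
u = np.zeros(N + 1)                     # velocities at cell faces (walls closed)

for step in range(2000):
    # momentum update at interior faces, then continuity update of elevation
    u[1:-1] -= g * dt / dx * (eta[1:] - eta[:-1])
    h_face = 0.5 * (np.r_[h[0], h] + np.r_[h, h[-1]])
    flux = h_face * u
    eta -= dt / dx * (flux[1:] - flux[:-1])

print("max nearshore elevation (m):", eta[-50:].max())  # shoaling amplification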


2020 ◽  
Vol 34 (03) ◽  
pp. 3088-3095
Author(s):  
Shufang Zhu ◽  
Giuseppe De Giacomo ◽  
Geguang Pu ◽  
Moshe Y. Vardi

In synthesis, assumptions are constraints on the environment that rule out certain environment behaviors. A key observation here is that even if we consider systems with LTLƒ goals on finite traces, environment assumptions need to be expressed over infinite traces, since accomplishing the agent goals may require an unbounded number of environment actions. To solve synthesis with respect to finite-trace LTLƒ goals under infinite-trace assumptions, we could reduce the problem to LTL synthesis. Unfortunately, while synthesis in LTLƒ and in LTL have the same worst-case complexity (both 2EXPTIME-complete), the algorithms available for LTL synthesis are much more difficult to apply in practice than those for LTLƒ synthesis. In this work we show that in interesting cases we can avoid such a detour to LTL synthesis and keep the simplicity of LTLƒ synthesis. Specifically, we develop a BDD-based fixpoint technique for handling basic forms of fairness and of stability assumptions. We show, empirically, that this technique performs much better than standard LTL synthesis.
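
The paper's technique is symbolic (BDD-based); as an explicit-state analogue, the sketch below computes the agent's winning region for a plain reachability goal by the usual attractor fixpoint, W_0 = Goal and W_{i+1} = W_i plus agent states with some move into W_i plus environment states with all moves into W_i. The tiny game graph is a made-up example; fairness and stability assumptions would add an outer fixpoint on top of this inner one.

```python
def attractor(states, edges, agent_states, goal):
    """Least-fixpoint attractor: who can force reaching `goal`?"""
    win = set(goal)
    changed = True
    while changed:
        changed = False
        for s in states - win:
            succ = edges[s]
            if (s in agent_states and succ & win) or \
               (s not in agent_states and succ and succ <= win):
                win.add(s)
                changed = True
    return win

states = {"a0", "e0", "a1", "e1", "goal"}
agent_states = {"a0", "a1"}             # remaining states are environment's
edges = {"a0": {"e0"}, "e0": {"a1"}, "a1": {"goal", "e1"},
         "e1": {"a0", "e1"}, "goal": set()}
print(attractor(states, edges, agent_states, {"goal"}))
# e1 is excluded: the environment can loop there forever, avoiding the goal.
```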


Author(s):  
Yurii Nesterov

Abstract In this paper we develop new tensor methods for unconstrained convex optimization, which solve at each iteration an auxiliary problem of minimizing a convex multivariate polynomial. We analyze the simplest scheme, based on minimization of a regularized local model of the objective function, and its accelerated version obtained in the framework of estimating sequences. Their rates of convergence are compared with the worst-case lower complexity bounds for the corresponding problem classes. Finally, for the third-order methods, we suggest an efficient technique for solving the auxiliary problem, which is based on the recently developed relative smoothness condition (Bauschke et al. in Math Oper Res 42:330–348, 2017; Lu et al. in SIOPT 28(1):333–354, 2018). With this elaboration, the third-order methods become implementable and very fast. The rate of convergence in terms of the function value for the accelerated third-order scheme reaches the level $$O\left(\frac{1}{k^4}\right)$$, where k is the number of iterations. This is very close to the lower bound of the order $$O\left(\frac{1}{k^5}\right)$$, which is also justified in this paper. At the same time, in many important cases the computational cost of one iteration of this method remains on the level typical for the second-order methods.
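
To make "minimizing a regularized local model" concrete, the sketch below implements the p = 2 instance of this family (cubic regularization of Newton's method) on a small convex test function; the third-order scheme of the paper replaces the quadratic Taylor model by a cubic one. The test function and the regularization constant M are assumptions for the demo.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):    return np.log(np.sum(np.exp(x))) + 0.05 * x @ x   # smooth, convex
def grad(x):
    p = np.exp(x); p /= p.sum()
    return p + 0.1 * x
def hess(x):
    p = np.exp(x); p /= p.sum()
    return np.diag(p) - np.outer(p, p) + 0.1 * np.eye(len(x))

M = 1.0                  # assumed bound on the Hessian's Lipschitz constant
x = np.array([3.0, -2.0, 1.0])
for it in range(20):
    g, H = grad(x), hess(x)
    # auxiliary problem: minimize the cubic-regularized local model over steps h
    model = lambda h: g @ h + 0.5 * h @ H @ h + M / 6 * np.linalg.norm(h)**3
    x = x + minimize(model, np.zeros_like(x)).x
print("minimizer ~", x, "  f(x) =", f(x))
```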


2017 ◽  
Vol 28 (03) ◽  
pp. 195-210 ◽  
Author(s):  
Alexandros Palioudakis ◽  
Kai Salomaa ◽  
Selim G. Akl

Many nondeterminism measures for finite automata have been studied in the literature. The tree width of an NFA (nondeterministic finite automaton) counts the number of leaves of computation trees as a function of input length. The trace of an NFA is defined in terms of the largest product of the degrees of nondeterministic choices in computations on inputs of given length. Branching is the corresponding best-case measure, based on the product of nondeterministic choices in the computation that minimizes this value. We establish upper and lower bounds for the trace of an NFA in terms of its tree width. We give a tight bound for the size blow-up of determinizing an NFA with finite trace. We also show that the trace of any NFA either is bounded by a constant or grows exponentially.
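
The trace measure as described can be computed by dynamic programming over states: the best score from state q with k steps left is the maximum, over input symbols, of the choice degree times the best score of a successor. A hedged sketch on a small made-up NFA (complete on {a, b}, so no dead configurations arise):

```python
# delta[state][symbol] = set of successor states
delta = {0: {"a": {0, 1}, "b": {0}},
         1: {"a": {2}, "b": {1}},
         2: {"a": {2}, "b": {1, 2}}}
start = 0

def trace(n):
    # best[q] = max product of choice degrees over length-k computations from q
    best = {q: 1 for q in delta}
    for _ in range(n):
        best = {q: max(len(succ) * max(best[r] for r in succ)
                       for succ in delta[q].values())
                for q in delta}
    return best[start]

for n in range(1, 7):
    print(n, trace(n))   # doubles each step: state 0 branches on 'a' forever
```

The output (2, 4, 8, ...) illustrates the dichotomy in the last claim: this NFA's trace grows exponentially because a branching choice lies on a cycle.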


2021 ◽  
pp. 191-200
Author(s):  
Khoi Minh Huynh ◽  
Wei-Tang Chang ◽  
Sang Hun Chung ◽  
Yong Chen ◽  
Yueh Lee ◽  
...  

2020 ◽  
Vol 26 (6) ◽  
Author(s):  
Felix Krahmer ◽  
Dominik Stöger

Abstract Phase retrieval refers to the problem of reconstructing an unknown vector $$x_0 \in \mathbb{C}^n$$ or $$x_0 \in \mathbb{R}^n$$ from m measurements of the form $$y_i = \big\vert \langle \xi^{(i)}, x_0 \rangle \big\vert^2$$, where $$\left\{ \xi^{(i)} \right\}^m_{i=1} \subset \mathbb{C}^n$$ are known measurement vectors. While Gaussian measurements allow for recovery of arbitrary signals provided the number of measurements scales at least linearly in the number of dimensions, it has been shown that ambiguities may arise for certain other classes of measurements $$\left\{ \xi^{(i)} \right\}^m_{i=1}$$, such as Bernoulli measurements or Fourier measurements. In this paper, we will prove that even when a subgaussian vector $$\xi^{(i)} \in \mathbb{C}^n$$ does not fulfill a small-ball probability assumption, the PhaseLift method is still able to reconstruct a large class of signals $$x_0 \in \mathbb{R}^n$$ from the measurements. This extends recent work by Krahmer and Liu from the real-valued to the complex-valued case. However, our proof strategy is quite different, and we expect some of the new proof ideas to be useful in several other measurement scenarios as well. We then extend our results to $$x_0 \in \mathbb{C}^n$$ up to an additional assumption which, as we show, is necessary.
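
For concreteness, the sketch below runs the PhaseLift relaxation: lift x0 to X = x0 x0^T and solve a trace-minimization SDP matching the phaseless measurements. It uses real Gaussian measurement vectors purely for the demo (the paper's point concerns more general subgaussian, and complex, vectors), and assumes cvxpy with its bundled conic solver.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, m = 6, 40
x0 = rng.standard_normal(n)
Xi = rng.standard_normal((m, n))        # rows are the measurement vectors
y = np.abs(Xi @ x0)**2                  # y_i = |<xi_i, x0>|^2

X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0] + [Xi[i] @ X @ Xi[i] == y[i] for i in range(m)]
cp.Problem(cp.Minimize(cp.trace(X)), constraints).solve()

# read off the signal (up to global sign) from the top eigenvector of X
w, V = np.linalg.eigh(X.value)
xhat = np.sqrt(max(w[-1], 0)) * V[:, -1]
print("recovery error:",
      min(np.linalg.norm(xhat - x0), np.linalg.norm(xhat + x0)))
```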


2001 ◽  
Vol 8 (2) ◽  
pp. 415-426
Author(s):  
Henryk Woźniakowski

Abstract We study tractability in the worst case setting of tensor product linear operators defined over weighted tensor product Hilbert spaces. Tractability means that the minimal number of evaluations needed to reduce the initial error by a factor of ε in the d-dimensional case has a polynomial bound in both $$\varepsilon^{-1}$$ and d. By one evaluation we mean the computation of an arbitrary continuous linear functional, and the initial error is the norm of the linear operator $$S_d$$ specifying the d-dimensional problem. We prove that nontrivial problems are tractable iff the dimension of the image under $$S_1$$ (the one-dimensional version of $$S_d$$) of the unweighted part of the Hilbert space is one, and the weights of the Hilbert spaces, as well as the singular values of the linear operator $$S_1$$, go to zero polynomially fast with their indices.
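
A small brute-force illustration of the tensor-product structure behind this result: the singular values of $$S_d$$ are d-fold products of those of $$S_1$$, so the minimal number of evaluations n(ε, d) is the count of such products exceeding ε times the initial error. The particular singular-value sequence (leading value 1, then polynomially decaying weighted values) is an assumption chosen to match the theorem's conditions.

```python
import numpy as np
from itertools import product

def n_eps_d(eps, d, J=20):
    # sigma(S_1): one unweighted value 1, then weighted, polynomially decaying
    lam = np.array([1.0] + [0.5 * j**-2.0 for j in range(1, J)])
    count = 0
    for idx in product(range(J), repeat=d):   # multi-indices of sigma(S_d)
        if np.prod(lam[list(idx)]) > eps:     # initial error ||S_d|| = 1
            count += 1
    return count

for d in (1, 2, 3):
    print("d =", d, [n_eps_d(eps, d) for eps in (0.5, 0.1, 0.02)])
```

Watching the counts grow slowly with d (rather than exponentially) is precisely what the polynomial bound in ε^{-1} and d formalizes.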


Author(s):  
J.D. Geller ◽  
C.R. Herrington

The minimum magnification at which an image can be acquired is determined by the design and implementation of the electron optical column and the scanning and display electronics. It is also a function of the working distance and, possibly, the accelerating voltage. For secondary and backscattered electron images there are usually no other limiting factors. However, for x-ray maps there are further considerations. Energy-dispersive x-ray spectrometers (EDS) have a much larger solid angle of detection than WDS. They also do not suffer from the Bragg's-law focusing effects which limit the angular range and focusing distance from the diffracting crystal. In practical terms, EDS maps can be acquired at the lowest magnification of the SEM, assuming the collimator does not cut off the x-ray signal. For WDS, the focusing properties of the crystal limit the angular range of acceptance of the incident x-radiation. The range depends on the 2d spacing of the crystal, with the acceptance angle increasing with 2d spacing. The natural line width of the x-ray also plays a role. For the metal layered crystals used to diffract soft x-rays, such as Be - O, the minimum magnification is approximately 100X. In the worst case, for the LiF crystal, which diffracts Ti - Zn, approximately 1000X is the minimum.
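
A hedged numerical aside on the Bragg's-law geometry just described: the diffraction angle follows n·λ = 2d·sin(θ), so a larger 2d spacing shifts the Bragg angle. The crystal spacings and line energy below are rough textbook-style values chosen only to illustrate the trend, not figures from this article.

```python
import numpy as np

def bragg_angle_deg(wavelength_nm, two_d_nm, order=1):
    # n * lambda = 2d * sin(theta)  ->  theta
    return np.degrees(np.arcsin(order * wavelength_nm / two_d_nm))

# Ti K-alpha: E ~ 4.51 keV; lambda(nm) = 1.2398 / E(keV)
lam = 1.2398 / 4.51
for name, two_d in [("LiF(200), 2d ~ 0.4027 nm", 0.4027),
                    ("PET,      2d ~ 0.874 nm", 0.874)]:
    print(name, "-> theta ~ %.1f deg" % bragg_angle_deg(lam, two_d))
```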


2008 ◽  
Author(s):  
Sonia Savelli ◽  
Susan Joslyn ◽  
Limor Nadav-Greenberg ◽  
Queena Chen
