Topology of some tiling spaces without finite local complexity

2009 ◽ Vol 23 (3) ◽ pp. 847-865
Author(s): Natalie Priebe Frank ◽ Lorenzo Sadun

2009 ◽ Vol 29 (3) ◽ pp. 997-1031
Author(s): JEAN SAVINIEN ◽ JEAN BELLISSARD

Abstract Let 𝒯 be an aperiodic and repetitive tiling of ℝ^d with finite local complexity. We present a spectral sequence that converges to the K-theory of 𝒯 with page 2 given by a new cohomology that will be called PV in reference to the Pimsner–Voiculescu exact sequence. It is a generalization of the Serre spectral sequence. The PV cohomology of 𝒯 generalizes the cohomology of the base space of a fibration with local coefficients in the K-theory of its fiber. We prove that it is isomorphic to the Čech cohomology of the hull of 𝒯 (a compactification of the family of its translates).


2021 ◽  
pp. 1-18
Author(s):  
YOTAM SMILANSKY ◽  
YAAR SOLOMON

Abstract We prove that in every compact space of Delone sets in ${\mathbb {R}}^d$ , which is minimal with respect to the action by translations, either all Delone sets are uniformly spread or continuum many distinct bounded displacement equivalence classes are represented, none of which contains a lattice. The implied limits are taken with respect to the Chabauty–Fell topology, which is the natural topology on the space of closed subsets of ${\mathbb {R}}^d$ . This topology coincides with the standard local topology in the finite local complexity setting, and it follows that the dichotomy holds for all minimal spaces of Delone sets associated with well-studied constructions such as cut-and-project sets and substitution tilings, whether or not finite local complexity is assumed.
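The objects in this abstract can be made concrete with a small sketch (not taken from the paper): a Beatty sequence with golden-ratio slope gives a one-dimensional aperiodic Delone set closely related to the Fibonacci cut-and-project construction, and its Delone and finite-local-complexity properties can be checked numerically.

```python
import numpy as np

# Beatty sequence floor(k*phi): a 1-d aperiodic point set related to the
# Fibonacci cut-and-project set (illustrative choice, not from the paper)
phi = (1 + 5 ** 0.5) / 2
k = np.arange(500)
points = np.floor(k * phi)

# consecutive gaps take only the values 1 and 2: the set is uniformly
# discrete (every gap >= 1) and relatively dense (every gap <= 2), hence Delone
gaps = np.diff(points)

# finite local complexity: only finitely many local patterns occur; of the
# four conceivable gap pairs, (1, 1) never appears (two gaps of 1 would force
# floor((k+2)*phi) - floor(k*phi) = 2, impossible since 2*phi > 3)
pairs = set(zip(gaps[:-1], gaps[1:]))
print(sorted(pairs))
```

The set is not a lattice (two distinct gap lengths), yet every local pattern recurs, which is the finite-local-complexity regime where the Chabauty–Fell and standard local topologies agree.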


Fractals ◽  
1994 ◽  
Vol 02 (02) ◽  
pp. 297-301
Author(s):  
B. DUBUC ◽  
S. W. ZUCKER ◽  
M. P. STRYKER

A central issue in characterizing neuronal growth patterns is whether their arbors form clusters. Formal definitions of clusters have been elusive, although intuitively they appear to be related to the complexity of branching. Standard notions of complexity have been developed for point sets, but neurons are specialized "curve-like" objects. Thus we consider the problem of characterizing the local complexity of a "curve-like" measurable set. We propose an index of complexity suitable for defining clusters in such objects, together with an algorithm that produces a complexity map which gives, at each point on the set, precisely this index of complexity. Our index is closely related to the classical notions of fractal dimension, since it consists in determining the rate of growth of the area of a dilated set at a given scale, but it differs in two significant ways. First, the dilation is done normal to the local structure of the set, instead of being done isotropically. Second, the rate of growth of the area of this new set, which we named "normal complexity", is taken at a fixed (given) scale instead of around zero. The results will be key in choosing the appropriate representation when integrating local information in low level computer vision. As an application, they lead to the quantification of axonal and dendritic tree growth in neurons.
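A simplified, isotropic variant of this index can be sketched as follows (the paper's construction dilates normal to the local structure; this sketch, with hypothetical helper names, dilates isotropically): approximate the area of the ε-dilation of a sampled set on a grid and take the logarithmic growth rate of that area at a fixed scale ε.

```python
import numpy as np

def dilated_area(points, eps, step=0.01, lo=-0.5, hi=1.5):
    """Approximate the area of the eps-dilation of a sampled set by
    counting grid cells whose centre lies within eps of a sample point."""
    xs = np.arange(lo, hi, step)
    grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
    count = 0
    for chunk in np.array_split(grid, 20):   # chunk to bound memory use
        d = np.linalg.norm(chunk[:, None, :] - points[None, :, :], axis=-1).min(axis=1)
        count += int((d <= eps).sum())
    return count * step ** 2

def complexity_index(points, eps, h=0.2):
    """Logarithmic growth rate of the dilated area at the fixed scale eps
    (an isotropic stand-in for the paper's normal dilation)."""
    a1 = dilated_area(points, eps * (1 - h))
    a2 = dilated_area(points, eps * (1 + h))
    return (np.log(a2) - np.log(a1)) / (np.log(1 + h) - np.log(1 - h))

# a smooth curve (unit segment): dilated area grows roughly linearly in eps,
# so the index is near 1 (the Minkowski codimension of a curve in the plane)
segment = np.stack([np.linspace(0, 1, 100), np.zeros(100)], axis=1)

# a filled cluster: dilation only adds a thin boundary ring, so the index is small
rng = np.random.default_rng(0)
ang = rng.uniform(0, 2 * np.pi, 300)
rad = 0.3 * np.sqrt(rng.uniform(0, 1, 300))
blob = 0.5 + np.stack([rad * np.cos(ang), rad * np.sin(ang)], axis=1)

i_curve = complexity_index(segment, 0.1)
i_blob = complexity_index(blob, 0.1)
```

The gap between the two indices is what makes the measure usable for cluster detection: points of a "curve-like" region score near 1, points inside a dense cluster score much lower.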


1995 ◽  
Vol 06 (04) ◽  
pp. 373-399 ◽  
Author(s):  
ANDREAS S. WEIGEND ◽  
MORGAN MANGEAS ◽  
ASHOK N. SRIVASTAVA

In the analysis and prediction of real-world systems, two of the key problems are nonstationarity (often in the form of switching between regimes), and overfitting (particularly serious for noisy processes). This article addresses these problems using gated experts, consisting of a (nonlinear) gating network, and several (also nonlinear) competing experts. Each expert learns to predict the conditional mean, and each expert adapts its width to match the noise level in its regime. The gating network learns to predict the probability of each expert, given the input. This article focuses on the case where the gating network bases its decision on information from the inputs. This can be contrasted to hidden Markov models where the decision is based on the previous state(s) (i.e. on the output of the gating network at the previous time step), as well as to averaging over several predictors. In contrast, gated experts soft-partition the input space, each expert only learning to model its own region. This article discusses the underlying statistical assumptions, derives the weight update rules, and compares the performance of gated experts to standard methods on three time series: (1) a computer-generated series, obtained by randomly switching between two nonlinear processes; (2) a time series from the Santa Fe Time Series Competition (the light intensity of a laser in a chaotic state); and (3) the daily electricity demand of France, a real-world multivariate problem with structure on several time scales. The main results are: (1) the gating network correctly discovers the different regimes of the process; (2) the widths associated with each expert are important for the segmentation task (and they can be used to characterize the sub-processes); and (3) there is less overfitting compared to single networks (homogeneous multilayer perceptrons), since the experts learn to match their variances to the (local) noise levels. This can be viewed as matching the local complexity of the model to the local complexity of the data.
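A minimal sketch of the gated-experts idea follows, with linear experts and a linear-softmax gate standing in for the article's nonlinear networks, and illustrative toy data (none of this is the article's code): each expert fits a conditional mean with its own noise width, responsibilities are computed EM-style, and the gate is nudged toward the responsibilities.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy regime-switching series: slope +2, noise 0.1 for x < 0;
# slope -1, noise 0.3 for x >= 0 (illustrative, not from the article)
n = 2000
x = rng.uniform(-1, 1, n)
y = np.where(x < 0, 2 * x + 0.1 * rng.normal(size=n),
             -x + 0.3 * rng.normal(size=n))

K = 2
X = np.stack([x, np.ones(n)], axis=1)        # inputs with a bias column
W = rng.normal(scale=1.0, size=(K, 2))       # linear experts (stand-ins for MLPs)
V = rng.normal(scale=0.1, size=(K, 2))       # linear-softmax gating network
sigma = np.ones(K)                           # per-expert noise widths

def softmax(z):
    z = np.exp(z - z.max(axis=1, keepdims=True))
    return z / z.sum(axis=1, keepdims=True)

def mean_log_lik():
    mu, g = X @ W.T, softmax(X @ V.T)
    dens = np.exp(-0.5 * ((y[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return np.log((g * dens).sum(axis=1)).mean()

ll_before = mean_log_lik()
for _ in range(200):
    mu, g = X @ W.T, softmax(X @ V.T)
    dens = np.exp(-0.5 * ((y[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    r = g * dens
    r /= r.sum(axis=1, keepdims=True)        # E-step: responsibilities
    for k in range(K):                       # M-step: weighted least squares per expert
        Xw = X * r[:, k:k + 1]
        W[k] = np.linalg.solve(Xw.T @ X + 1e-6 * np.eye(2), Xw.T @ y)
        resid = y - X @ W[k]
        sigma[k] = max(np.sqrt((r[:, k] * resid ** 2).sum() / r[:, k].sum()), 1e-3)
    for _ in range(5):                       # gate: gradient steps toward responsibilities
        V += 0.5 / n * (r - softmax(X @ V.T)).T @ X
ll_after = mean_log_lik()
```

The per-expert `sigma` update is the point of contact with the abstract's result (3): each width shrinks or grows to match the noise level of the region the gate assigns to that expert.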


2019 ◽ Vol 39 (6) ◽ pp. 3149-3177
Author(s): Jeong-Yup Lee ◽ Boris Solomyak

1991 ◽  
Vol 11 (3) ◽  
pp. 583-602 ◽  
Author(s):  
Y. Yomdin

Abstract We consider some ways in which the regularity of a mapping influences the dynamics of its iterations and the growth of various complexity-type invariants.

