Words have bounded width in $\operatorname{SL}_{n}(\mathbb{Z})$

2019 · Vol 155 (7) · pp. 1245-1258
Author(s): Nir Avni, Chen Meiri

We prove two results about the width of words in $\operatorname{SL}_{n}(\mathbb{Z})$. The first is that, for every $n\geqslant 3$, there is a constant $C(n)$ such that the width of any word in $\operatorname{SL}_{n}(\mathbb{Z})$ is less than $C(n)$. The second result is that, for any word $w$, if $n$ is big enough, the width of $w$ in $\operatorname{SL}_{n}(\mathbb{Z})$ is at most 87.
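For context, the notion of width used above is the standard one for word maps; the following restatement is background added here, not part of the abstract:

```latex
% Standard definition (background, not quoted from the abstract).
% For a word $w = w(x_1,\dots,x_k)$ in the free group $F_k$ and a group $G$,
% the set of $w$-values and their inverses is
\[
  w(G)^{\pm 1} \;=\; \bigl\{\, w(g_1,\dots,g_k)^{\pm 1} \;:\; g_1,\dots,g_k \in G \,\bigr\},
\]
% and the width of $w$ in $G$ is
\[
  \operatorname{width}_G(w) \;=\; \min\bigl\{\, N \;:\; \text{every } g \in \langle w(G)\rangle
  \text{ is a product of at most } N \text{ elements of } w(G)^{\pm 1} \,\bigr\},
\]
% with $\operatorname{width}_G(w) = \infty$ when no such $N$ exists. The first result above
% says this quantity is bounded by $C(n)$, uniformly over all words $w$, when
% $G = \operatorname{SL}_{n}(\mathbb{Z})$ and $n \geqslant 3$.
```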

2022 · Vol 69 (1) · pp. 1-46
Author(s): Édouard Bonnet, Eun Jung Kim, Stéphan Thomassé, Rémi Watrigant

Inspired by a width invariant defined on permutations by Guillemot and Marx [SODA '14], we introduce the notion of twin-width on graphs and on matrices. Proper minor-closed classes, bounded rank-width graphs, map graphs, K_t-free unit d-dimensional ball graphs, posets with antichains of bounded size, and proper subclasses of dimension-2 posets all have bounded twin-width. On all these classes (except map graphs without geometric embedding) we show how to compute in polynomial time a sequence of d-contractions, witnessing that the twin-width is at most d. We show that FO model checking, that is, deciding whether a given first-order formula ϕ evaluates to true for a given binary structure G on a domain D, is FPT in |ϕ| on classes of bounded twin-width, provided the witness is given. More precisely, given a d-contraction sequence for G, our algorithm runs in time f(d, |ϕ|) · |D| where f is a computable but non-elementary function. We also prove that bounded twin-width is preserved under FO interpretations and transductions (allowing operations such as squaring or complementing a graph). This unifies and significantly extends the knowledge on fixed-parameter tractability of FO model checking on non-monotone classes, such as the FPT algorithm on bounded-width posets by Gajarský et al. [FOCS '15].
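As a concrete illustration of the contraction mechanism (a minimal sketch of the standard twin-width bookkeeping, not the paper's algorithm; the helper name, example graph, and contraction sequence below are made up for illustration), the following checks whether a sequence of vertex identifications keeps the maximum red degree, and hence witnesses twin-width, at most d:

```python
# Sketch: track red ("error") edges while contracting vertices, and report the
# maximum red degree seen. Assumes the standard contraction rule: after identifying
# u and v, the pair (u, x) stays black only if it was black on both sides and never
# red, stays a non-edge only if it was absent on both sides, and turns red otherwise.

def max_red_degree_of_sequence(n, edges, merges):
    """n vertices 0..n-1; edges: iterable of pairs; merges: list of (u, v) with v merged into u."""
    alive = set(range(n))
    black = {frozenset(e) for e in edges}
    red = set()
    worst = 0
    for u, v in merges:
        alive.discard(v)
        for x in list(alive):
            if x == u:
                continue
            ux, vx = frozenset((u, x)), frozenset((v, x))
            disagree = (ux in black) != (vx in black)
            was_red = (ux in red) or (vx in red)
            both_black = (ux in black) and (vx in black)
            black.discard(ux)
            red.discard(ux)
            if was_red or disagree:
                red.add(ux)
            elif both_black:
                black.add(ux)
        # drop all edges incident to the removed vertex v
        black = {e for e in black if v not in e}
        red = {e for e in red if v not in e}
        worst = max([worst] + [sum(1 for e in red if y in e) for y in alive])
    return worst

# Tiny example: contract the path 0-1-2-3 down to one vertex; red degree never exceeds 1.
print(max_red_degree_of_sequence(4, [(0, 1), (1, 2), (2, 3)], [(1, 0), (2, 3), (1, 2)]))  # 1
```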


Mathematics · 2019 · Vol 7 (10) · pp. 992
Author(s): Boris Hanin

This article concerns the expressive power of depth in neural nets with ReLU activations and a bounded width. We are particularly interested in the following questions: What is the minimal width $w_{\min}(d)$ so that ReLU nets of width $w_{\min}(d)$ (and arbitrary depth) can approximate any continuous function on the unit cube $[0,1]^d$ arbitrarily well? For ReLU nets near this minimal width, what can one say about the depth necessary to approximate a given function? We obtain an essentially complete answer to these questions for convex functions. Our approach is based on the observation that, due to the convexity of the ReLU activation, ReLU nets are particularly well suited to represent convex functions. In particular, we prove that ReLU nets with width $d+1$ can approximate any continuous convex function of $d$ variables arbitrarily well. These results then give quantitative depth estimates for the rate of approximation of any continuous scalar function on the $d$-dimensional cube $[0,1]^d$ by ReLU nets with width $d+3$.
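To make the link between convexity and ReLU concrete (a toy sketch of the general idea that a convex function is the supremum of its affine minorants and that a pairwise maximum costs a single ReLU via max(a, b) = a + relu(b − a); the target function, anchor grid, and helper names below are illustrative, not taken from the article):

```python
# Toy illustration: approximate a convex function on [0, 1]^2 from below by the
# maximum of finitely many tangent planes, computing that maximum with ReLU only.
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def relu_max(values):
    """Elementwise maximum of a list of arrays, using only max(a, b) = a + relu(b - a)."""
    out = values[0]
    for v in values[1:]:
        out = out + relu(v - out)
    return out

# Convex target f(x) = ||x||^2 with gradient 2x (any smooth convex function would do).
f = lambda x: np.sum(x**2, axis=-1)
grad = lambda x: 2.0 * x

# Tangent planes at a 5 x 5 grid of anchor points in [0, 1]^2.
anchors = np.array([[a, b] for a in np.linspace(0, 1, 5) for b in np.linspace(0, 1, 5)])

x = np.random.default_rng(0).random((1000, 2))             # random test points in [0, 1]^2
planes = [f(a) + (x - a) @ grad(a) for a in anchors]       # tangent-plane values at the test points

approx = relu_max(planes)                                   # max-affine approximation via ReLU
print("max error:", float(np.max(np.abs(approx - f(x)))))   # small, and shrinks as the grid refines
```

Refining the anchor grid drives the error to zero for any continuous convex target; this max-of-affine-minorants picture is the intuition behind the width-$(d+1)$ statement for convex functions above, while the article's quantitative depth estimates go beyond this sketch.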

