bounded width
Recently Published Documents

Total documents: 54 (five years: 8)
H-index: 14 (five years: 1)

2022 · Vol 69 (1) · pp. 1-46
Author(s): Édouard Bonnet, Eun Jung Kim, Stéphan Thomassé, Rémi Watrigant

Inspired by a width invariant defined on permutations by Guillemot and Marx [SODA'14], we introduce the notion of twin-width on graphs and on matrices. Proper minor-closed classes, bounded rank-width graphs, map graphs, K_t-free unit d-dimensional ball graphs, posets with antichains of bounded size, and proper subclasses of dimension-2 posets all have bounded twin-width. On all these classes (except map graphs without geometric embedding) we show how to compute in polynomial time a sequence of d-contractions, witnessing that the twin-width is at most d. We show that FO model checking, that is, deciding whether a given first-order formula ϕ evaluates to true on a given binary structure G with domain D, is FPT in |ϕ| on classes of bounded twin-width, provided the witness is given. More precisely, given a d-contraction sequence for G, our algorithm runs in time f(d, |ϕ|) · |D|, where f is a computable but non-elementary function. We also prove that bounded twin-width is preserved under FO interpretations and transductions (allowing operations such as squaring or complementing a graph). This unifies and significantly extends the knowledge on fixed-parameter tractability of FO model checking on non-monotone classes, such as the FPT algorithm on bounded-width posets by Gajarský et al. [FOCS'15].
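The central witness here is the d-contraction sequence: vertices are merged two at a time, mismatched adjacencies become red ("error") edges, and the sequence certifies twin-width at most d if the red degree never exceeds d. Below is a minimal Python sketch of a verifier for such a sequence; the adjacency-set representation and function names are our own illustration, not code from the paper.

```python
def max_red_degree(black, contractions):
    """Apply a contraction sequence (pairs (u, v), with v merged into u) and
    return the maximum red degree observed; the sequence witnesses
    twin-width <= d iff the returned value is at most d."""
    black = {x: set(nbrs) for x, nbrs in black.items()}
    red = {x: set() for x in black}            # red (error) edges, none at the start
    worst = 0

    for u, v in contractions:
        others = set(black) - {u, v}
        new_black, new_red = set(), set()
        for w in others:
            in_black_u, in_black_v = w in black[u], w in black[v]
            in_red = w in red[u] or w in red[v]
            if not (in_black_u or in_black_v or in_red):
                continue                       # adjacent to neither: still a non-edge
            if in_black_u and in_black_v and not in_red:
                new_black.add(w)               # both agree: edge stays black
            else:
                new_red.add(w)                 # any disagreement: edge turns red
        for w in others:                       # remove v, rewrite u's neighbourhood
            black[w].discard(u); black[w].discard(v)
            red[w].discard(u);   red[w].discard(v)
            if w in new_black: black[w].add(u)
            if w in new_red:   red[w].add(u)
        del black[v], red[v]
        black[u], red[u] = new_black, new_red
        worst = max(worst, max(len(red[x]) for x in red))
    return worst

# The path on four vertices has twin-width at most 1:
path = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(max_red_degree(path, [(2, 1), (3, 4), (2, 3)]))   # -> 1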


2021 · Vol 11 (1) · pp. 427
Author(s): Sunghwan Moon

Deep neural networks have shown very successful performance across a wide range of tasks, but the theory of why they work so well is still in its early stages. Recently, the expressive power of neural networks, which is important for understanding deep learning, has received considerable attention. Classical results by Cybenko, Barron, and others state that a network with a single hidden layer and a suitable activation function is a universal approximator. More recently, attention has turned to how width affects the expressiveness of neural networks, i.e., to universal approximation theorems for deep neural networks with Rectified Linear Unit (ReLU) activations and bounded width. Here, we show how any continuous function on a compact subset of $\mathbb{R}^{n_{\mathrm{in}}}$, $n_{\mathrm{in}}\in\mathbb{N}$, can be approximated by a ReLU network whose hidden layers each have at most $n_{\mathrm{in}}+5$ nodes, by means of an approximate identity.
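To make the shape of such a network concrete, here is a small NumPy sketch of a deep fully connected ReLU network whose hidden layers all have width n_in + 5, matching the bound in the abstract. The random weights are placeholders for illustration only; the paper constructs specific weights via an approximate identity.

```python
import numpy as np

def narrow_relu_net(x, n_layers=8, seed=0):
    """Forward pass of a deep ReLU network with hidden width n_in + 5.
    Weights are random placeholders, not the paper's construction."""
    rng = np.random.default_rng(seed)
    n_in = x.shape[-1]
    width = n_in + 5
    h, dim = x, n_in
    for _ in range(n_layers):
        W = rng.standard_normal((dim, width)) / np.sqrt(dim)
        b = rng.standard_normal(width)
        h = np.maximum(h @ W + b, 0.0)     # ReLU activation
        dim = width
    w_out = rng.standard_normal((width, 1))
    return h @ w_out                        # scalar output per input point

x = np.random.rand(4, 3)                    # four points in [0, 1]^3
print(narrow_relu_net(x).shape)             # (4, 1)
```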


2020 · Vol 67 · pp. 409-436
Author(s): Romain Wallon, Stefan Mengel

We consider bounded-width CNF formulas, where the width is measured by popular graph width measures on graphs associated with CNF formulas. Such restricted graph classes, in particular those of bounded treewidth, have been extensively studied for their use in the design of algorithms for various computational problems on CNF formulas. Here we consider the expressivity of these formulas in the model of clausal encodings with auxiliary variables. We first show that bounding the width for many of the measures from the literature leads to a dramatic loss of expressivity, restricting the formulas to those of low communication complexity. We then show that the widths of optimal encodings with respect to different measures are strongly linked: there are two classes of width measures, one containing primal treewidth and the other incidence cliquewidth, such that within each class the widths of optimal encodings differ only by constant factors. Moreover, between the two classes the widths differ by at most a factor logarithmic in the number of variables. Both results stand in stark contrast to the setting without auxiliary variables, where all width measures considered here differ by more than constant factors, and in many cases even by linear factors.
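The width measures mentioned here are defined on graphs derived from the formula. As a reminder of the two standard constructions (primal graph and incidence graph), here is a short Python sketch of our own, not code from the paper, for clauses given in DIMACS-style integer notation.

```python
from itertools import combinations

# (x1 or not x2) and (x2 or x3), literals as non-zero integers:
cnf = [[1, -2], [2, 3]]

def primal_graph(clauses):
    """Vertices are variables; two variables are joined iff they share a clause.
    The treewidth of this graph is the formula's primal treewidth."""
    edges = set()
    for clause in clauses:
        variables = sorted({abs(lit) for lit in clause})
        edges.update(frozenset(pair) for pair in combinations(variables, 2))
    return edges

def incidence_graph(clauses):
    """Bipartite graph on variables and clause indices; a clause is joined to
    each variable occurring in it.  The cliquewidth of this graph is the
    formula's incidence cliquewidth."""
    return {(abs(lit), ('c', i)) for i, clause in enumerate(clauses) for lit in clause}

print(primal_graph(cnf))      # two edges: {1, 2} and {2, 3}
print(incidence_graph(cnf))   # variable-clause incidences of the two clauses
```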


Mathematics · 2019 · Vol 7 (10) · pp. 992
Author(s): Boris Hanin

This article concerns the expressive power of depth in neural nets with ReLU activations and bounded width. We are particularly interested in the following questions: What is the minimal width w_min(d) so that ReLU nets of width w_min(d) (and arbitrary depth) can approximate any continuous function on the unit cube [0,1]^d arbitrarily well? For ReLU nets near this minimal width, what can one say about the depth necessary to approximate a given function? We obtain an essentially complete answer to these questions for convex functions. Our approach is based on the observation that, due to the convexity of the ReLU activation, ReLU nets are particularly well suited to represent convex functions. In particular, we prove that ReLU nets with width d+1 can approximate any continuous convex function of d variables arbitrarily well. These results then give quantitative depth estimates for the rate of approximation of any continuous scalar function on the d-dimensional cube [0,1]^d by ReLU nets with width d+3.
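The observation the abstract builds on, that ReLU nets are well suited to convex functions, can be made concrete with the identity max(a, b) = a + ReLU(b − a): a maximum of affine functions (the shape of any piecewise-linear convex function) can be computed by chaining this one-ReLU gadget layer by layer. A minimal NumPy sketch of this idea, ours rather than the paper's construction:

```python
import numpy as np

relu = lambda t: np.maximum(t, 0.0)

def max_of_affines(x, A, b):
    """Evaluate the convex function f(x) = max_i (A[i] @ x + b[i]) by folding
    pairwise maxima through max(a, c) = a + relu(c - a); each fold corresponds
    to one extra layer of a narrow ReLU network."""
    running = A[0] @ x + b[0]
    for a_i, b_i in zip(A[1:], b[1:]):
        candidate = a_i @ x + b_i
        running = running + relu(candidate - running)   # = max(running, candidate)
    return running

# f(x, y) = max(x + y, 2x - 1, -y): piecewise-linear convex in d = 2 variables
A = np.array([[1.0, 1.0], [2.0, 0.0], [0.0, -1.0]])
b = np.array([0.0, -1.0, 0.0])
print(max_of_affines(np.array([0.3, 0.4]), A, b))       # max(0.7, -0.4, -0.4) = 0.7
```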


2019 · Vol 155 (7) · pp. 1245-1258
Author(s): Nir Avni, Chen Meiri

We prove two results about the width of words in $\operatorname{SL}_{n}(\mathbb{Z})$. The first is that, for every $n\geqslant 3$, there is a constant $C(n)$ such that the width of any word in $\operatorname{SL}_{n}(\mathbb{Z})$ is less than $C(n)$. The second result is that, for any word $w$, if $n$ is big enough, the width of $w$ in $\operatorname{SL}_{n}(\mathbb{Z})$ is at most 87.
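For readers unfamiliar with the terminology, the quantity being bounded is the word width. The following is a standard definition supplied for context, not part of the abstract:

```latex
% For a word $w = w(x_1,\dots,x_k)$ in the free group $F_k$ and a group $G$,
% the verbal subgroup is
%   $w(G) = \langle\, w(g_1,\dots,g_k) : g_1,\dots,g_k \in G \,\rangle$.
\[
  \operatorname{width}_G(w)
  = \min\bigl\{ N :\ \text{every } g \in w(G) \text{ is a product of at most } N
    \text{ values of } w \text{ or their inverses} \bigr\},
\]
% with $\operatorname{width}_G(w) = \infty$ if no such $N$ exists.
```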

