random hypergraph
Recently Published Documents

TOTAL DOCUMENTS: 33 (FIVE YEARS: 9)
H-INDEX: 6 (FIVE YEARS: 1)

10.37236/9014
2021, Vol 28 (4)
Author(s): Benjamin Gunby, Maxwell Fishelson

A classic result of Marcus and Tardos (formerly the Stanley-Wilf conjecture) bounds from above the number of $n$-permutations ($\sigma \in S_n$) that do not contain a specific sub-permutation: for any fixed permutation $\pi$, the number of $n$-permutations that avoid $\pi$ is at most exponential in $n$. In this paper, we generalize this result by bounding the number of avoiding $n$-permutations even when they only have to avoid $\pi$ at specific indices. We consider a $k$-uniform hypergraph $\Lambda$ on $n$ vertices and count the $n$-permutations that avoid $\pi$ at the index sets corresponding to the edges of $\Lambda$. We analyze both the random and deterministic hypergraph cases. This problem was originally proposed by Asaf Ferber. When $\Lambda$ is a random hypergraph with edge density $\alpha$, we show that the expected number of $\Lambda$-avoiding $n$-permutations is $\exp(O(n))\alpha^{-\frac{n}{k-1}}$, with matching upper and lower bounds up to the $\exp(O(n))$ factor, using a supersaturation version of the F\"{u}redi-Hajnal theorem. In the deterministic case we show that, for $\Lambda$ containing many cliques of size $L$, the number of $\Lambda$-avoiding $n$-permutations is $O\left(\frac{n\log^{2+\epsilon}n}{L}\right)^n$, which is nontrivial already for $L$ polynomial in $n$. Our main tool in the deterministic case is the hypergraph containers method, developed in papers of Balogh-Morris-Samotij and Saxton-Thomason.
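To make the counting problem concrete, here is a brute-force sketch (not the paper's method; the function names and the edge-density sampling are illustrative): a permutation $\sigma$ avoids $\pi$ at an edge of $\Lambda$ if its restriction to those $k$ indices is not order-isomorphic to $\pi$.

```python
import itertools
import random

def pattern(values):
    """Order pattern of a tuple of distinct values: the rank of each entry."""
    ranks = sorted(values)
    return tuple(ranks.index(v) for v in values)

def count_avoiding(n, pi, edges):
    """Count n-permutations whose restriction to every edge of the
    hypergraph is NOT order-isomorphic to pi (brute force; small n only)."""
    target = pattern(pi)
    return sum(
        1
        for sigma in itertools.permutations(range(n))
        if all(pattern(tuple(sigma[i] for i in e)) != target for e in edges)
    )

# Sample a random 3-uniform hypergraph on n = 5 vertices with edge
# density alpha = 0.5 and count permutations avoiding pi = (1, 2, 3)
# on its edges.
random.seed(0)
n, pi, alpha = 5, (1, 2, 3), 0.5
edges = [e for e in itertools.combinations(range(n), 3)
         if random.random() < alpha]
print(count_avoiding(n, pi, edges))
```

When `edges` contains all $\binom{n}{3}$ triples this reduces to classical avoidance of the increasing pattern, so the count is the Catalan number $C_n$, consistent with the Marcus-Tardos exponential bound.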


2021, Vol 11 (9), pp. 3867
Author(s): Zhewei Liu, Zijia Zhang, Yaoming Cai, Yilin Miao, Zhikun Chen

The Extreme Learning Machine (ELM) is characterized by simplicity, generalization ability, and computational efficiency. However, previous ELMs fail to consider the inherent high-order relationships among data points, leaving them ineffective on structured data and not robust to noisy data. This paper presents a novel semi-supervised ELM, termed the Hypergraph Convolutional ELM (HGCELM), which uses hypergraph convolution to extend ELM to the non-Euclidean domain. The method inherits the advantages of ELM and consists of a random hypergraph convolutional layer followed by a hypergraph convolutional regression layer, enabling it to model complex intraclass variations. We show that the traditional ELM is a special case of the HGCELM model in the regular Euclidean domain. Extensive experiments show that HGCELM remarkably outperforms eight competitive methods on 26 classification benchmarks.
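A minimal NumPy sketch of the idea, under two assumptions not spelled out in the abstract: the convolution operator is the standard HGNN-style normalized incidence propagation, and, as in ELM, the first layer's weights are random and fixed while only the readout is solved in closed form. All names (`hypergraph_laplacian_conv`, `hgcelm_fit`) are illustrative, not the authors' API.

```python
import numpy as np

def hypergraph_laplacian_conv(H, X, Theta, edge_weights=None):
    """HGNN-style hypergraph convolution:
    D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2} X Theta,
    where H is the n x m vertex-edge incidence matrix."""
    n, m = H.shape
    w = np.ones(m) if edge_weights is None else edge_weights
    De = H.sum(axis=0)                      # edge degrees
    Dv = (H * w).sum(axis=1)                # vertex degrees
    d = 1.0 / np.sqrt(np.maximum(Dv, 1e-12))
    G = (d[:, None] * H) * (w / np.maximum(De, 1e-12))
    S = G @ (H.T * d)                       # n x n propagation matrix
    return S @ X @ Theta

def hgcelm_fit(H, X, Y, hidden=64, reg=1e-3, seed=0):
    """ELM-style training: the first hypergraph conv layer uses random,
    fixed weights Theta; the readout Beta of the regression layer is
    solved in closed form by ridge regression."""
    rng = np.random.default_rng(seed)
    Theta = rng.standard_normal((X.shape[1], hidden)) / np.sqrt(X.shape[1])
    Z = np.tanh(hypergraph_laplacian_conv(H, X, Theta))    # random layer
    Zp = hypergraph_laplacian_conv(H, Z, np.eye(hidden))   # regression layer input
    Beta = np.linalg.solve(Zp.T @ Zp + reg * np.eye(hidden), Zp.T @ Y)
    return Theta, Beta

# Toy run: 4 vertices, 2 hyperedges, 3 input features, 2 target columns.
H_demo = np.array([[1, 0], [1, 1], [0, 1], [1, 0]], dtype=float)
X_demo = np.random.default_rng(1).standard_normal((4, 3))
Y_demo = np.eye(4)[:, :2]
Theta, Beta = hgcelm_fit(H_demo, X_demo, Y_demo, hidden=8)
```

With `H` the identity (each vertex its own singleton edge), the propagation matrix is the identity and the model collapses to a plain ELM, mirroring the paper's special-case claim.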


10.37236/8092
2019, Vol 26 (4)
Author(s): Colin Cooper, Alan Frieze, Wesley Pegden

We study the rank of a random $n \times m$ matrix $\mathbf{A}_{n,m;k}$ with entries from $GF(2)$, and exactly $k$ unit entries in each column, the other entries being zero. The columns are chosen independently and uniformly at random from the set of all ${n \choose k}$ such columns. We obtain an asymptotically correct estimate for the rank as a function of the number of columns $m$, in terms of $c,n,k$, where $m=cn/k$. The matrix $\mathbf{A}_{n,m;k}$ forms the vertex-edge incidence matrix of a $k$-uniform random hypergraph $H$. The rank of $\mathbf{A}_{n,m;k}$ can be expressed as follows. Let $|C_2|$ be the number of vertices of the 2-core of $H$, and $|E(C_2)|$ the number of its edges. Let $m^*$ be the value of $m$ for which $|C_2|= |E(C_2)|$. Then w.h.p. for $m<m^*$ the rank of $\mathbf{A}_{n,m;k}$ is asymptotic to $m$, and for $m \ge m^*$ the rank is asymptotic to $m-|E(C_2)|+|C_2|$. In addition, assign i.i.d. $U[0,1]$ weights $X_i$, $i \in \{1,2,\ldots,m\}$, to the columns, and define the weight of a set of columns $S$ as $X(S)=\sum_{j \in S} X_j$. Define a basis as a set of $n-\mathbb{1}(k\text{ even})$ linearly independent columns. We obtain an asymptotically correct estimate for the minimum weight basis. This generalises the well-known result of Frieze [On the value of a random minimum spanning tree problem, Discrete Applied Mathematics, (1985)] that, for $k=2$, the expected length of a minimum weight spanning tree tends to $\zeta(3)\approx 1.202$.
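The sampling model and the rank computation are easy to simulate (a sketch, not the paper's analysis; function names are illustrative). Each column is stored as an integer bitmask of its $k$ unit entries, and the rank over $GF(2)$ is computed by elimination with XOR.

```python
import random

def gf2_rank(cols):
    """Rank over GF(2) of a matrix given as a list of column bitmasks."""
    rank = 0
    rows = list(cols)
    while rows:
        pivot = rows.pop()
        if pivot == 0:
            continue
        rank += 1
        lsb = pivot & -pivot        # lowest set bit of the pivot column
        rows = [r ^ pivot if r & lsb else r for r in rows]
    return rank

def random_k_column_matrix(n, m, k, rng):
    """m columns, each with exactly k unit entries, the support chosen
    uniformly at random from the n-choose-k possibilities."""
    return [sum(1 << i for i in rng.sample(range(n), k))
            for _ in range(m)]

rng = random.Random(0)
n, k, c = 200, 3, 0.5               # m = cn/k columns, sparse regime
m = int(c * n / k)
cols = random_k_column_matrix(n, m, k, rng)
print(gf2_rank(cols), m)            # below m*, rank should be close to m w.h.p.
```

This is exactly the incidence matrix of a random $k$-uniform hypergraph with $m$ edges on $n$ vertices, so the simulated rank can be checked against the 2-core formula $m - |E(C_2)| + |C_2|$ above the threshold.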


2018, Vol 73 (4), pp. 731-733
Author(s): D. A. Kravtsov, N. E. Krokhmal, D. A. Shabanov
