Random Graph Theory
Recently Published Documents

TOTAL DOCUMENTS: 33 (five years: 2)

H-INDEX: 8 (five years: 0)

2020 · Vol. 34 (04) · pp. 5579–5585
Author(s): Dror Salti, Yakir Berchenko

Random graphs and statistical inference with missing data are two separate topics, each widely explored in its own field. In this paper we demonstrate the relationship between them by taking a novel view of the data matrix as a random intersection graph. We use graph properties and theoretical results from random graph theory, such as connectivity and the emergence of the giant component, to identify two threshold phenomena in statistical inference with missing data: loss of identifiability and slower convergence of algorithms pertinent to statistical inference, such as expectation-maximization (EM). We provide two examples corresponding to these threshold phenomena and illustrate the theoretical predictions with simulations that are consistent with our reduction.
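To make the reduction concrete, here is a minimal Python sketch (not the authors' code; the independent-entry missingness model, the parameter values, and the use of networkx are assumptions) that views the missingness mask of a data matrix as an intersection graph over rows and checks for the connectivity and giant-component behaviour the abstract ties to identifiability and EM convergence:

```python
# A minimal sketch, assuming entries are observed independently with
# probability p_obs. Two rows are linked if they share an observed column;
# the resulting intersection graph's connectivity is the property of interest.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_rows, n_cols, p_obs = 200, 30, 0.1

# observed[i, j] == True means entry (i, j) of the data matrix is not missing
observed = rng.random((n_rows, n_cols)) < p_obs

# Intersection graph: connect rows that share at least one observed column.
# Linking consecutive rows in a column gives the same connected components
# as the full clique, at lower cost.
G = nx.Graph()
G.add_nodes_from(range(n_rows))
for j in range(n_cols):
    rows_j = np.flatnonzero(observed[:, j])
    for a, b in zip(rows_j[:-1], rows_j[1:]):
        G.add_edge(a, b)

components = sorted((len(c) for c in nx.connected_components(G)), reverse=True)
print("largest component fraction:", components[0] / n_rows)
print("fully connected:", components[0] == n_rows)
```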


2018 · Vol. 55 (3) · pp. 900–919
Author(s): A. Garavaglia, R. van der Hofstad

Continuous-time branching processes (CTBPs) are powerful tools in random graph theory, but are not appropriate for describing real-world networks since they produce trees rather than (multi)graphs. In this paper we analyze collapsed branching processes (CBPs), obtained by a collapsing procedure on CTBPs, in order to define multigraphs where vertices have fixed out-degree m ≥ 2. A key example consists of preferential attachment models (PAMs), as well as generalized PAMs where vertices are chosen according to their degree and age. We identify the degree distribution of CBPs, showing that it is closely related to the limiting distribution of the CTBP before collapsing. In particular, this is the first time that CTBPs are used to investigate the degree distribution of PAMs beyond the tree setting.
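The collapsing idea can be illustrated with a small simulation, a hedged sketch rather than the paper's exact CBP construction: grow a preferential attachment tree, then merge blocks of m consecutive tree vertices into a single multigraph vertex, giving every collapsed vertex out-degree m:

```python
# A minimal sketch of the collapsing procedure (illustrative assumptions
# throughout, not the paper's exact model).
import random
from collections import Counter

random.seed(1)
m, n_blocks = 3, 2000
n_tree = m * n_blocks

# Preferential-attachment tree: vertex t attaches to an earlier vertex chosen
# proportionally to degree. `ends` holds one entry per edge endpoint, so a
# uniform draw from it is a degree-biased draw.
parent = [None, 0]
ends = [0, 1]  # endpoints of the initial edge (0, 1)
for t in range(2, n_tree):
    target = random.choice(ends)
    parent.append(target)
    ends.extend([t, target])

# Collapse: tree vertex v belongs to multigraph vertex v // m, so each
# collapsed vertex inherits the m out-edges of its block.
degree = Counter()
for v in range(1, n_tree):
    a, b = v // m, parent[v] // m
    degree[a] += 1
    degree[b] += 1  # a within-block edge becomes a self-loop, counted twice

tail = sum(1 for d in degree.values() if d >= 20) / n_blocks
print("fraction of collapsed vertices with degree >= 20:", tail)
```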


2017 · Vol. 09 (02) · pp. 1750020
Author(s): Haizhong Shi, Yue Shi

Little research has related graph theory to languages since the concept of the graph-semigroup was first proposed in 1991. In 2011, having found the inner correlations among digraphs, undirected graphs, and languages, we proposed concepts including the undirected graph language and the digraph language; in 2014, we proposed a broader concept, the (V,R)-language, and proved that (1) both undirected graph languages and digraph languages are (V,R)-languages; (2) both undirected graph languages and digraph languages are regular languages; (3) natural languages are regular languages. In this paper, we propose a new concept, the random graph language, and build the relationship between random graphs and languages, which makes it possible to study languages using random graph theory.
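The paper's constructions are not reproduced here, but the underlying graph-language connection can be illustrated with a standard fact: a finite edge-labeled digraph is a finite automaton, so the set of words spelled by its walks is a regular language. The sketch below (the Erdős–Rényi model, the alphabet, and all names are illustrative assumptions) samples a random labeled digraph and enumerates the short words of its walk language:

```python
# A hedged illustration, not the paper's construction: read a random digraph
# as a finite automaton whose walk labels form a regular language.
import itertools
import random

random.seed(7)
n, p, alphabet = 5, 0.4, "ab"

# Random digraph with a random label on each present edge
edges = {}
for u, v in itertools.product(range(n), repeat=2):
    if u != v and random.random() < p:
        edges.setdefault(u, []).append((random.choice(alphabet), v))

def walk_words(start, max_len):
    """Enumerate words spelled by walks of length <= max_len from `start`."""
    frontier = {("", start)}
    words = set()
    for _ in range(max_len):
        frontier = {(w + c, v2)
                    for w, v in frontier
                    for c, v2 in edges.get(v, [])}
        words |= {w for w, _ in frontier}
    return words

print(sorted(walk_words(start=0, max_len=3)))
```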


Author(s): Zhiyuan Guo, Wenling Wu, Renzhang Liu, Liting Zhang

The tweakable Even-Mansour construction generalizes the conventional Even-Mansour scheme by replacing round keys with strings derived from a master key and a tweak. Besides providing plenty of inherent variability, such a design builds a tweakable block cipher from a lower-level primitive. In the present paper, we evaluate the multi-key security of TEM-1, one of the most commonly used one-round tweakable Even-Mansour schemes (formally introduced at CRYPTO 2015), which is constructed from a single n-bit permutation P and a function f(k, t), linear in k, from some tweak space to {0,1}^n. Based on the giant-component theorem from random graph theory, we propose a collision-based multi-key attack on TEM-1 in the known-plaintext setting. Furthermore, inspired by the methodology of Fouque et al. presented at ASIACRYPT 2014, we devise a novel way of detecting collisions and obtain a memory-efficient multi-key attack in the adaptive chosen-plaintext setting. As important applications, we utilize our techniques to analyze the authenticated encryption algorithms Minalpher (a second-round candidate of CAESAR) and OPP (proposed at EUROCRYPT 2016) in the multi-key setting. We describe known-plaintext attacks on Minalpher and OPP without nonce misuse, which enable us to recover almost all O(2^{n/3}) independent masks by making O(2^{n/3}) queries per key and using O(2^{2n/3}) memory overall. After defining appropriate iterated functions and accordingly changing the mode of creating chains, we improve the basic blockwise-adaptive chosen-plaintext attack to make it applicable in the nonce-respecting setting as well. While our attacks neither contradict the security proofs of Minalpher and OPP in the classical setting nor pose an immediate threat to their use, our results demonstrate that their security margins in the multi-user setting should be carefully considered. We emphasize that this is the first third-party analysis of Minalpher and OPP.
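A toy simulation conveys the flavour of the collision-based multi-key attack; this is an assumption-level sketch with a tiny n and a simple linear mask f(k, t) = k ⊕ t, not the paper's construction or parameters. Querying the same plaintext/tweak pair under many independent keys, a repeated ciphertext signals a mask collision, which a second shared plaintext confirms:

```python
# Toy sketch: one-round tweakable Even-Mansour
#   E(k, t, x) = P(x ^ f(k, t)) ^ f(k, t)
# with n = 16. A birthday-style ciphertext collision across users suggests
# matching masks f(k, t); a second shared plaintext filters false positives.
import random

random.seed(3)
n = 16
N = 1 << n

# Public random permutation P and toy linear mask derivation (an assumption)
P = list(range(N))
random.shuffle(P)
def f(k, t):
    return k ^ t

def E(k, t, x):
    m = f(k, t)
    return P[x ^ m] ^ m

x0, x1, t0 = 0x1234, 0xBEEF, 0x0F0F
keys = [random.randrange(N) for _ in range(2000)]  # ~2^11 simulated users

seen = {}
for i, k in enumerate(keys):
    c = E(k, t0, x0)
    if c in seen:
        j = seen[c]
        # Confirm the mask collision with a second known plaintext
        if E(keys[j], t0, x1) == E(k, t0, x1):
            print(f"users {j} and {i} share mask f(k, t0) = "
                  f"{f(keys[j], t0):#06x} == {f(k, t0):#06x}")
            break
    seen[c] = i
```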


Author(s): Mohammad Javad Shafiee, Paul Fieguth, Alexander Wong

Deep neural networks have been shown to outperform conventional state-of-the-art approaches in several structured prediction applications. While high-performance computing devices such as GPUs have made developing very powerful deep neural networks possible, it is not feasible to run these networks on low-cost, low-power computing devices such as embedded CPUs or even embedded GPUs. As such, there has been a lot of recent interest in producing efficient deep neural network architectures that can be run on small computing devices. Motivated by this, the idea of StochasticNets was introduced, where deep neural networks are formed by leveraging random graph theory. It has been shown that StochasticNets can form new networks with 2X or 3X architectural efficiency while maintaining modeling accuracy. Motivated by these promising results, here we investigate the idea of StochasticNet in StochasticNet (SiS), where highly efficient deep neural networks with Network in Network (NiN) architectures are formed in a stochastic manner. Such networks have an intertwining structure composed of convolutional layers and micro neural networks to boost the modeling accuracy. The experimental results show that SiS can form deep neural networks with NiN architectures that have 4X greater architectural efficiency with only a 2% drop in accuracy on the CIFAR10 dataset. The results are even more promising on the SVHN dataset, where SiS formed deep neural networks with NiN architectures that have 11.5X greater architectural efficiency with only a 1% decrease in modeling accuracy.
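The core StochasticNet idea, realizing a layer's connectivity as a realization of a random graph, can be sketched in a few lines; the layer sizes, the Bernoulli edge model, and the sparsity level below are illustrative assumptions, not the authors' architecture:

```python
# A minimal conceptual sketch (not the authors' StochasticNet code): realize
# a fully connected layer's connectivity as an Erdos-Renyi random bipartite
# graph, so only a random subset of weights exists.
import numpy as np

rng = np.random.default_rng(42)
n_in, n_out, p_edge = 256, 128, 0.5  # keep ~50% of connections (~2X efficiency)

# Random-graph realization: mask[i, j] == 1 iff edge (input i -> unit j) exists
mask = (rng.random((n_in, n_out)) < p_edge).astype(np.float32)
weights = rng.normal(0.0, np.sqrt(2.0 / n_in), (n_in, n_out)).astype(np.float32)

def stochastic_layer(x):
    """Forward pass through the sparsely connected layer (ReLU activation)."""
    return np.maximum(x @ (weights * mask), 0.0)

x = rng.normal(size=(1, n_in)).astype(np.float32)
print("active connections:", int(mask.sum()), "of", n_in * n_out)
print("output shape:", stochastic_layer(x).shape)
```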


2015 · Vol. 18, no. 1 (Analysis of Algorithms)
Author(s): Rudolf Grübel

The successive discrete structures generated by a sequential algorithm from random input constitute a Markov chain that may exhibit long-term dependence on its first few input values. Using examples from random graph theory and search algorithms, we show how such persistence of randomness can be detected and quantified with techniques from discrete potential theory. We also show that this approach can be used to obtain strong limit theorems in cases where previously only distributional convergence was known.
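A standard example of such persistence of randomness (not necessarily one of the paper's) is the Pólya urn: the long-run colour fraction converges to a random limit that is largely determined by the first few draws, as the following sketch shows:

```python
# A hedged illustration: in a Polya urn, the limiting colour fraction is
# itself random and settles early, so different random inputs (seeds) give
# different limits that longer runs do not wash out.
import random

def polya_fraction(steps, seed):
    rng = random.Random(seed)
    red, total = 1, 2  # start with one red and one black ball
    for _ in range(steps):
        if rng.random() < red / total:
            red += 1
        total += 1
    return red / total

for seed in range(5):
    print(f"seed {seed}: fraction after 10^3 steps = "
          f"{polya_fraction(1_000, seed):.3f}, "
          f"after 10^5 steps = {polya_fraction(100_000, seed):.3f}")
```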

