On Learning Perceptrons with Binary Weights

1993 ◽  
Vol 5 (5) ◽  
pp. 767-782 ◽  
Author(s):  
Mostefa Golea ◽  
Mario Marchand

We present an algorithm that PAC learns any perceptron with binary weights and arbitrary threshold under the family of product distributions. The sample complexity of this algorithm is O[(n/ε)^4 ln(n/δ)], and its running time increases only linearly with the number of training examples. The algorithm does not try to find a hypothesis that agrees with all of the training examples; rather, it constructs a binary perceptron from various probabilistic estimates obtained from the training examples. We show that, in the restricted case of the uniform distribution and zero threshold, the algorithm reduces to the well-known clipped Hebb rule. We calculate exactly the average generalization rate (i.e., the learning curve) of the algorithm under the uniform distribution, in the limit of an infinite number of dimensions, and find that the error rate decreases exponentially with the number of training examples. Hence, the average-case analysis gives a sample complexity of O[n ln(1/ε)], a large improvement over the PAC-learning analysis. The analytical expression of the learning curve is in excellent agreement with extensive numerical simulations. In addition, the algorithm is very robust with respect to classification noise.
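
In the uniform-distribution, zero-threshold case singled out in the abstract, the learner reduces to the clipped Hebb rule: each binary weight is the sign of the empirical correlation between its input and the label. A minimal NumPy sketch of that rule (the variable names and toy problem sizes are ours, not the paper's):

```python
import numpy as np

def clipped_hebb(X, y):
    """Clipped Hebb rule: estimate a binary weight vector in {-1, +1}^n
    from +/-1 examples X (m x n) and labels y (m,).

    Each weight is the sign of the empirical correlation between its
    input and the label: w_i = sign(sum_mu y^mu x_i^mu).
    """
    c = X.T @ y                      # Hebbian correlations, one per input
    return np.where(c >= 0, 1, -1)   # clip to +/-1 (ties broken toward +1)

def predict(w, X, theta=0.0):
    """Perceptron output sign(w . x - theta)."""
    return np.where(X @ w - theta >= 0, 1, -1)

# Toy run: recover a random binary teacher from uniform +/-1 examples.
rng = np.random.default_rng(0)
n, m = 51, 2000                              # odd n avoids zero activations
w_star = rng.choice([-1, 1], size=n)         # hidden binary teacher
X = rng.choice([-1, 1], size=(m, n))         # uniform product distribution
y = predict(w_star, X)                       # zero-threshold labels
w_hat = clipped_hebb(X, y)
print("fraction of weights recovered:", np.mean(w_hat == w_star))
```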

1994 ◽  
Vol 05 (02) ◽  
pp. 115-122 ◽
Author(s):  
MOSTEFA GOLEA

We describe a Hebb-type algorithm for learning unions of nonoverlapping perceptrons with binary weights. Two perceptrons are said to be nonoverlapping if they do not share any input variables. The learning algorithm finds both the network architecture and the weight values needed to represent the target function. Moreover, the algorithm is local, homogeneous, and simple enough to be biologically plausible. We investigate the average behavior of this algorithm as a function of the size of the training set. We find that, as the size of the training set increases, the hypothesis network built by the algorithm “converges” to the target network, in terms of both the number of perceptrons and their connectivity. Moreover, the generalization rate converges exponentially to perfect generalization as a function of the number of training examples. The analytic expressions are in excellent agreement with the numerical simulations. To our knowledge, this is the first average-case analysis of an algorithm that finds both the weight values and the network connectivity.
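
The abstract does not spell out which statistics the algorithm estimates, so the following is only a hedged sketch in its spirit, not Golea's actual procedure: per-input Hebbian correlations pick the relevant inputs and their weight signs, and a conditional-correlation test groups inputs into hypothesized nonoverlapping perceptrons. The cutoff and the grouping test are our assumptions for illustration:

```python
import numpy as np
from itertools import combinations

def learn_union(X, y, cutoff=0.1):
    """Hedged sketch (not the paper's exact algorithm): recover which
    inputs are connected, their weight signs, and a grouping of the
    inputs into perceptrons, all from simple Hebbian statistics on
    +/-1 examples X (m x n) with +/-1 labels y (m,)."""
    m, n = X.shape
    c = X.T @ y / m                        # per-input Hebbian correlation
    relevant = [i for i in range(n) if abs(c[i]) > cutoff]
    groups = {i: {i} for i in relevant}    # start from singleton perceptrons
    for i, j in combinations(relevant, 2):
        # Place i and j in the same perceptron if conditioning on x_j
        # changes the correlation between x_i and the output.
        on = X[:, j] == 1
        ci_on = X[on, i] @ y[on] / max(on.sum(), 1)
        ci_off = X[~on, i] @ y[~on] / max((~on).sum(), 1)
        if abs(ci_on - ci_off) > cutoff:
            merged = groups[i] | groups[j]
            for k in merged:
                groups[k] = merged
    architecture = {frozenset(g) for g in groups.values()}
    weights = np.where(c >= 0, 1, -1)      # clipped-Hebb weight signs
    return architecture, weights
```

All statistics are single passes over the training set, so the estimates are local and homogeneous in the sense the abstract describes.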


Author(s):  
Yansong Gao ◽  
Jie Zhang

The fundamental assignment problem seeks welfare-maximizing mechanisms for allocating indivisible items to self-interested agents who report private preferences over the items. The mainstream mechanism, \textit{Random Priority}, is asymptotically the best mechanism for this purpose when its welfare is compared to the optimal social welfare via the canonical \textit{worst-case approximation ratio}. Surprisingly, the efficiency loss indicated by the worst-case ratio has no constant bound \cite{FFZ:14}. Recently, \cite{DBLP:conf/mfcs/DengG017} showed that when the agents' preferences are drawn from a uniform distribution, its \textit{average-case approximation ratio} is upper bounded by 3.718, and left open the question of whether a constant ratio holds in general. In this paper, we answer this question affirmatively by showing that the ratio is bounded by $1/\mu$ when the preference values are independent and identically distributed random variables, where $\mu$ is the expectation of the value distribution. This upper bound also improves the result of \cite{DBLP:conf/mfcs/DengG017} for the uniform distribution. Moreover, under mild conditions, the ratio has a \textit{constant} bound for any independent random values. En route to these results, we develop tools showing that, for most valuation inputs, the efficiency loss is small.
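
Random Priority itself (also known as random serial dictatorship) is simple to state: draw a uniformly random order over the agents, and let each agent in turn take their favorite remaining item. A minimal sketch, assuming one item per agent (the encoding of the valuations is ours):

```python
import random

def random_priority(values):
    """Random Priority: agents pick their favorite remaining item in a
    uniformly random order.

    `values[a][i]` is agent a's value for item i (n agents, n items).
    Returns the allocation (agent -> item) and its social welfare.
    """
    n = len(values)
    order = random.sample(range(n), n)   # uniform random priority order
    remaining = set(range(n))
    allocation = {}
    for a in order:
        best = max(remaining, key=lambda i: values[a][i])
        allocation[a] = best
        remaining.remove(best)
    welfare = sum(values[a][allocation[a]] for a in allocation)
    return allocation, welfare

# Example: 3 agents with i.i.d. uniform values, as in the average-case model.
vals = [[random.random() for _ in range(3)] for _ in range(3)]
print(random_priority(vals))
```

The average-case approximation ratio compares the expected optimal welfare to the expected welfare of this mechanism when the values are drawn from the distribution in question.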


2016 ◽  
Vol 27 (02) ◽  
pp. 109-126 ◽
Author(s):  
Sven De Felice ◽  
Cyril Nicaud

We analyze the average complexity of Brzozowski's minimization algorithm for distributions of deterministic automata with a small number of final states. We show that, as in the case of the uniform distribution, the average complexity is super-polynomial even for random deterministic automata with only one final state. Previously, such results were known only for distributions in which the expected number of final states is linear in the number of states.
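
Brzozowski's algorithm minimizes by reversing and determinizing twice: minimal(A) = det(rev(det(rev(A)))). A compact sketch under our own automaton encoding (transition dicts keyed by (state, letter)); the accessible subset construction in the first determinization is what can blow up and drives the super-polynomial average complexity analyzed above:

```python
from itertools import chain

def reverse(states, alpha, delta, init, finals):
    """Flip every transition and swap initial/final states.
    The result is in general nondeterministic."""
    rdelta = {}
    for (p, a), q in delta.items():
        rdelta.setdefault((q, a), set()).add(p)
    return states, alpha, rdelta, set(finals), {init}

def determinize(states, alpha, ndelta, inits, finals):
    """Accessible subset construction: DFA states are frozensets of
    NFA states, built only as they are reached."""
    start = frozenset(inits)
    dstates, ddelta, stack = {start}, {}, [start]
    while stack:
        S = stack.pop()
        for a in alpha:
            T = frozenset(chain.from_iterable(
                ndelta.get((q, a), ()) for q in S))
            ddelta[(S, a)] = T
            if T not in dstates:
                dstates.add(T)
                stack.append(T)
    dfinals = {S for S in dstates if S & finals}
    return dstates, alpha, ddelta, start, dfinals

def brzozowski(dfa):
    """Minimal DFA = determinize(reverse(determinize(reverse(A))))."""
    return determinize(*reverse(*determinize(*reverse(*dfa))))

# Toy DFA over {a, b} accepting words that end in 'a'.
dfa = ({0, 1}, {'a', 'b'},
       {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 1, (1, 'b'): 0},
       0, {1})
states, *_ = brzozowski(dfa)
print(len(states), "states in the minimal DFA")  # prints 2
```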


Algorithmica ◽  
2006 ◽  
Vol 46 (3-4) ◽  
pp. 469-491 ◽  
Author(s):  
Moritz G. Maass
