On using the Poincaré polynomial for calculating the VC dimension of neural networks

2001 ◽ Vol 14 (10) ◽ pp. 1465 ◽ Author(s): Michael Schmitt

2002 ◽ Vol 14 (12) ◽ pp. 2997-3011 ◽ Author(s): Michael Schmitt

We establish versions of Descartes' rule of signs for radial basis function (RBF) neural networks. The RBF rules of signs provide tight bounds on the number of zeros of univariate networks with certain parameter restrictions. Moreover, they can be used to infer that the Vapnik-Chervonenkis (VC) dimension and pseudodimension of these networks are at most linear in the number of parameters. This contrasts with previous work showing that RBF neural networks with two or more input nodes have superlinear VC dimension. The rules also give rise to lower bounds on network sizes, thus demonstrating the relevance of network parameters for the complexity of computing with RBF neural networks.
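One restricted case that is easy to probe numerically is a univariate Gaussian RBF network with a common width, f(x) = Σᵢ wᵢ exp(−(x − cᵢ)²/σ²): there, a Descartes-style rule says the number of zeros of f is at most the number of sign changes in the weight sequence ordered by the centers. The Python sketch below checks this empirically on random networks; the helper names and the equal-width restriction are assumptions chosen for illustration, not the paper's construction.

```python
import numpy as np

def sign_changes(seq):
    """Count sign changes in a sequence, ignoring exact zeros."""
    signs = [np.sign(v) for v in seq if v != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

def grid_zero_crossings(f, lo=-8.0, hi=8.0, n=20001):
    """Lower-bound the number of zeros of f by counting sign
    flips of its values on a fine grid."""
    return sign_changes(f(np.linspace(lo, hi, n)))

rng = np.random.default_rng(0)
for _ in range(200):
    k = int(rng.integers(2, 6))           # number of RBF units
    c = np.sort(rng.uniform(-5, 5, k))    # centers, sorted
    w = rng.normal(size=k)                # output weights
    sigma = 1.0                           # common width (illustrative restriction)
    f = lambda x, c=c, w=w: (w * np.exp(-(x[:, None] - c) ** 2 / sigma**2)).sum(axis=1)
    # Descartes-style bound: zero crossings never exceed weight sign changes.
    assert grid_zero_crossings(f) <= sign_changes(w)
```

The grid count only lower-bounds the true number of zeros, so the assertion is a necessary consequence of the rule rather than a demonstration of its tightness.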


1996 ◽  
Vol 8 (3) ◽  
pp. 625-628 ◽  
Author(s):  
Peter L. Bartlett ◽  
Robert C. Williamson

We give upper bounds on the Vapnik-Chervonenkis dimension and pseudodimension of two-layer neural networks that use the standard sigmoid function or radial basis function and have inputs from {−D, …, D}^n. In Valiant's probably approximately correct (PAC) learning framework for pattern classification, and in Haussler's generalization of this framework to nonlinear regression, the results imply that the number of training examples necessary for satisfactory learning performance grows no more rapidly than W log(WD), where W is the number of weights. The previous best bound for these networks was O(W^4).
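To see the scale of the improvement, the following sketch evaluates both growth rates for a few network sizes; constants are omitted and the input-range value D is a hypothetical choice, so this is only an order-of-magnitude illustration.

```python
import math

def wlogwd(W, D):
    return W * math.log(W * D)  # growth rate W log(WD) from the bound above

def w4(W):
    return W ** 4               # previously best known O(W^4) growth

D = 8  # hypothetical input range {-8, ..., 8}^n
for W in (10, 100, 1000):
    print(f"W={W:4d}  W log(WD) = {wlogwd(W, D):10.1f}  W^4 = {w4(W):.1e}")
```

Already at W = 1000 the logarithmic bound is smaller by roughly eight orders of magnitude, which is the point of the result.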


1997 ◽ Vol 54 (1) ◽ pp. 190-198 ◽ Author(s): Pascal Koiran, Eduardo D. Sontag

1997 ◽ Vol 9 (4) ◽ pp. 765-769 ◽ Author(s): Wee Sun Lee, Peter L. Bartlett, Robert C. Williamson

The earlier article gives lower bounds on the VC-dimension of various smoothly parameterized function classes. The results were proved by establishing a relationship between the uniqueness of decision boundaries and the VC-dimension of smoothly parameterized function classes. That proof is incorrect; no such relationship holds under the conditions stated in the article. For the case of neural networks with tanh activation functions, we give an alternative proof of a lower bound on the VC-dimension that is proportional to the number of parameters and holds even when the magnitudes of the parameters are restricted to be arbitrarily small.
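The striking part of the corrected statement is that shrinking the parameter magnitudes does not destroy shattering. A toy brute-force check makes this concrete for a single tanh unit: since sign(tanh(wx + b)) = sign(wx + b), even parameters confined to [−ε, ε] realize every labeling of two distinct points. This two-parameter sketch illustrates the phenomenon only; it is not the paper's construction, and the helper names are hypothetical.

```python
import numpy as np

def shatters(points, param_grid, predict):
    """Return True if {x -> sign(predict(theta, x))} realizes every
    one of the 2^n labelings of `points` for some theta in param_grid."""
    realized = set()
    for theta in param_grid:
        realized.add(tuple(int(predict(theta, x) > 0) for x in points))
    return len(realized) == 2 ** len(points)

eps = 1e-3  # arbitrarily small bound on parameter magnitudes
grid = [(w, b) for w in np.linspace(-eps, eps, 21)
               for b in np.linspace(-eps, eps, 21)]
# A single tanh unit shatters two distinct points even with tiny weights,
# so the VC-dimension of this 2-parameter class is at least 2.
print(shatters([-1.0, 1.0], grid, lambda th, x: np.tanh(th[0] * x + th[1])))  # True
```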


1996 ◽  
Vol 8 (6) ◽  
pp. 1277-1299 ◽  
Author(s):  
Arne Hole

We show how lower bounds on the generalization ability of feedforward neural nets with real outputs can be derived within a formalism based directly on the concept of VC dimension and Vapnik's theorem on uniform convergence of estimated probabilities.
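For concreteness, one common form of the uniform-convergence bound behind such arguments states that, with probability at least 1 − δ, the gap between true and empirical error of every function in a class of VC dimension d on m samples is O(√((d log(m/d) + log(1/δ))/m)). The snippet below evaluates one standard textbook variant of this expression; exact constants differ between sources, so treat the numbers as indicative only.

```python
import math

def vc_gap(d, m, delta=0.05):
    """One textbook form of the Vapnik uniform-convergence bound on the
    true-vs-empirical error gap for a class of VC dimension d; the
    constants vary across presentations."""
    return math.sqrt((d * (math.log(2 * m / d) + 1) + math.log(4 / delta)) / m)

for m in (10_000, 100_000, 1_000_000):
    print(f"m={m:9d}  gap <= {vc_gap(d=100, m=m):.4f}")
```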

