Stationary points and performance surfaces of a perceptron learning algorithm for a nonseparable data model

Author(s): J.J. Shynk, N.J. Bershad

1993, Vol 2 (4), pp. 385-387
Author(s): Martin Anthony, John Shawe-Taylor

Abstract: The perceptron learning algorithm quite naturally yields an algorithm for finding a linearly separable Boolean function consistent with a sample of such a function. Using the idea of a specifying sample, we give a simple proof that, in general, this algorithm is not efficient.
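For context, here is a minimal Python sketch of the classical perceptron update rule the abstract refers to. It assumes labels in {-1, +1} and absorbs the threshold as a bias input; the function name and the max_epochs cap are illustrative, not from the paper:

```python
import numpy as np

def perceptron_consistent(X, y, max_epochs=10_000):
    """Cycle through the sample, correcting w on every misclassified
    example, until the hypothesis sign(w . x) is consistent with y."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)               # labels in {-1, +1}
    Xb = np.hstack([X, np.ones((len(X), 1))])    # absorb the threshold as a bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(max_epochs):
        mistakes = 0
        for xi, yi in zip(Xb, y):
            if yi * (w @ xi) <= 0:               # misclassified or on the boundary
                w += yi * xi                     # classical perceptron update
                mistakes += 1
        if mistakes == 0:
            return w                             # consistent with the whole sample
    raise RuntimeError("no consistent hypothesis found within max_epochs")
```

On a separable sample the update loop terminates by the perceptron convergence theorem, but the number of updates depends on the sample's margin, which for some Boolean samples is exponentially small in the input dimension; this is the kind of inefficiency the abstract's proof addresses.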


1995, Vol 43 (7), pp. 1696-1702
Author(s): S.N. Diggavi, J.J. Shynk, N.J. Bershad

2018
Author(s): Toviah Moldwin, Idan Segev

Abstract: The perceptron learning algorithm and its multiple-layer extension, the backpropagation algorithm, are the foundations of the present-day machine learning revolution. However, these algorithms use a highly simplified mathematical abstraction of a neuron; it is not clear to what extent real biophysical neurons, with morphologically extended nonlinear dendritic trees and conductance-based synapses, could realize perceptron-like learning. Here we implemented the perceptron learning algorithm in a realistic biophysical model of a layer 5 cortical pyramidal cell. We tested this biophysical perceptron (BP) on a memorization task, in which it must correctly assign one of two labels to each of 100, 1000, or 2000 patterns, and on a generalization task, in which it must discriminate between two “noisy” patterns. We show that the BP performs these tasks with an accuracy comparable to that of the original perceptron, though the memorization capacity of the apical tuft is somewhat limited. We conclude that cortical pyramidal neurons can act as powerful classification devices.
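The biophysical model itself cannot be reproduced in a few lines, but the memorization task described above is straightforward to sketch with the standard abstract perceptron that the paper uses as its point of comparison. All parameters below (input dimension, epoch count, random seed) are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility (illustrative)

def memorization_accuracy(n_patterns, n_inputs=1000, epochs=200):
    """Train a standard perceptron to memorize random binary labels on
    random binary patterns, then report the training accuracy.
    NOTE: all parameters are illustrative, not the paper's settings."""
    X = rng.integers(0, 2, size=(n_patterns, n_inputs)).astype(float)
    y = rng.choice([-1.0, 1.0], size=n_patterns)  # random target labels
    w = np.zeros(n_inputs)
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:  # misclassified: apply the perceptron update
                w += yi * xi
                b += yi
    return np.mean(np.sign(X @ w + b) == y)

for n in (100, 1000, 2000):
    print(f"{n} patterns: accuracy {memorization_accuracy(n):.3f}")
```

A classical result (Cover's theorem) is that a perceptron with n inputs can memorize roughly 2n random patterns in general position, so the 2000-pattern case sits near capacity for the 1000-input sketch above; accuracy below 100% there mirrors the capacity limits the abstract mentions.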

