Hebb Learning
Recently Published Documents

TOTAL DOCUMENTS: 23 (FIVE YEARS: 1)
H-INDEX: 6 (FIVE YEARS: 0)

2021 · Vol 03 (05) · pp. 259-265
Author(s): Ali Salim Rashid ALGHAFRI

Donald Hebb founded a theory of learning behavior from the perspective of cognitive neuroscience, based on the cell assembly. He explained the occurrence of learning through the Hebb synapse, describing the mechanism of interaction and connection between synaptically coupled cells and the activity that passes between them to produce learning and the associated thinking and memory. He also described how the induction that occurs between cell assemblies at the Hebb synapse plays a significant role in understanding learning and the skills, abilities, and behavior associated with it. In the fields of education and psychology, this theory has therefore been employed, through the mechanism of long-term potentiation (LTP), to interpret the occurrence of learning in terms of cognitive neuroscience. Keywords: Cognitive Neuroscience, Theory, Hebb, Learning, Teaching, Brain, Neuron
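For reference, the canonical textbook form of the Hebb rule (not stated in the abstract itself) expresses the change in synaptic strength as the product of presynaptic and postsynaptic activity:

    Δw_ij = η · x_i · y_j

where x_i is the presynaptic activity, y_j the postsynaptic activity, η a learning rate, and w_ij the strength of the connection between the two cells. Co-active cells thus become more strongly connected, which is the mechanism that LTP is taken to instantiate at the physiological level.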



2020 · Vol 10 (7) · pp. 2516
Author(s): Fanwei Meng, Yongbiao Hu, Pengyu Ma, Xuping Zhang, Zhixiong Li

This paper presents a supervised Hebb learning single-neuron adaptive proportional-integral-derivative (PID) controller for the power control of a cold milling machine. The proposed controller aims to overcome the deficiencies of the current power control algorithm and to achieve as high an output power as possible for the cold milling machine. The control process and system model are established and presented to provide insight and guidance for the controller design and analysis. The adaptive PID controller is developed using a supervised Hebb learning single-neuron method, with a detailed algorithm and structure analysis. A field test is performed to validate the proposed single-neuron adaptive PID control for the power control. In the test, 8 cm-deep milling is conducted on a cement concrete pavement in which the cement is not well distributed. The test results show that when the machine speed is adjusted by the machine itself or manually, without the adaptive power control system, the machine is often overloaded or underloaded, and the average work speed is 2.4 m/min. However, when the adaptive control system is implemented on the machine, it works very close to its rated condition throughout the work process. With the developed controller, the machine work speed is adjusted in time with the load variation and uncertain dynamics. The average machine work speed can reach up to 2.766 m/min, which is 15.25% higher than the work speed of the machine without an adaptive power control system.
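The abstract does not include the controller equations, so the following is a minimal sketch of a single-neuron adaptive PID controller with a supervised Hebb learning rule as it is commonly formulated in the control literature; the gain K, the learning rates, and the initial weights are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (not the authors' code) of a single-neuron adaptive PID
# controller whose weights adapt via a supervised Hebb rule.
class SingleNeuronAdaptivePID:
    def __init__(self, K=0.2, eta_p=0.4, eta_i=0.3, eta_d=0.3):
        self.K = K                        # overall neuron gain (illustrative)
        self.eta = (eta_p, eta_i, eta_d)  # learning rates for the three weights
        self.w = [0.3, 0.3, 0.3]          # initial synaptic weights (illustrative)
        self.e1 = 0.0                     # e(k-1)
        self.e2 = 0.0                     # e(k-2)
        self.u = 0.0                      # previous control output u(k-1)

    def step(self, e):
        # Neuron inputs: incremental PID terms built from the tracking error.
        x = (e - self.e1,                   # proportional increment
             e,                             # integral term
             e - 2.0 * self.e1 + self.e2)   # derivative increment

        # Supervised Hebb rule: weight change is proportional to the teacher
        # signal z(k) = e(k), the previous control output, and the neuron input.
        for i in range(3):
            self.w[i] += self.eta[i] * e * self.u * x[i]

        # Normalize the weights so the control increment stays bounded.
        s = sum(abs(wi) for wi in self.w) or 1.0
        du = self.K * sum((wi / s) * xi for wi, xi in zip(self.w, x))

        self.u += du
        self.e2, self.e1 = self.e1, e
        return self.u
```

In the milling-machine application, the tracking error e would presumably be the difference between the rated power and the measured output power, with the controller output setting the machine work speed; that mapping is an assumption for illustration only.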



Micromachines · 2020 · Vol 11 (1) · pp. 84
Author(s): Yanding Qin, Heng Duan

This paper presents an adaptive hysteresis compensation approach for a piezoelectric actuator (PEA) using single-neuron adaptive control. For a given desired trajectory, the control input to the PEA is dynamically adjusted by the error between the actual and desired trajectories using Hebb learning rules. A single neuron with self-learning and self-adaptive capabilities is a non-linear processing unit, which makes it well suited to time-variant systems. Based on the single-neuron control, the compensation of the PEA’s hysteresis can be regarded as a process of transmitting biological neuron information. Using the error information between the actual and desired trajectories, the control input is adjusted via the weight-adjustment method of neuron learning. In addition, the paper combines Hebb learning rules with supervised learning through teacher signals, which allows a fast response to control signals. The weights of the single-neuron controller are constantly adjusted online to improve the control performance of the system. Experimental results show that the proposed single-neuron adaptive hysteresis compensation method can track both continuous and discontinuous trajectories well. The single-neuron adaptive controller also shows better adaptive and self-learning performance against the rate dependence of the PEA’s hysteresis.
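A minimal, self-contained sketch of how such a single-neuron Hebb-learning compensator might be wired into a trajectory-tracking loop is given below; the sinusoidal reference, the first-order toy actuator, and all gains are illustrative assumptions and do not model the PEA or controller tuning from the paper.

```python
import math

# Illustrative tracking loop: the control input to a toy actuator is updated
# from the trajectory error via a supervised Hebb rule, with the weights
# adapted online at every step.
w = [0.3, 0.3, 0.3]            # synaptic weights for the three neuron inputs
eta, K = 0.2, 0.4              # learning rate and neuron gain (illustrative)
e1 = e2 = u = y = 0.0          # error history, control input, actuator output

for k in range(2000):
    t = k * 1e-3
    y_des = 5.0 * math.sin(2 * math.pi * 5 * t)   # desired trajectory
    e = y_des - y                                 # tracking error
    x = (e - e1, e, e - 2 * e1 + e2)              # neuron inputs
    for i in range(3):
        # Supervised Hebb update: the teacher signal is the error itself.
        w[i] += eta * e * u * x[i]
    s = sum(abs(wi) for wi in w) or 1.0           # weight normalization
    u += K * sum(wi / s * xi for wi, xi in zip(w, x))
    y += 0.08 * (u - y)          # toy first-order actuator, stand-in for the PEA
    e2, e1 = e1, e
```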



2019
Author(s): Louisa Bogaerts, Noam Siegelman, Tali Ben-Porat, Ram Frost

The Hebb repetition task, an operationalization of long-term sequence learning through repetition, is the focus of renewed interest, as it is taken to provide a laboratory analogue for naturalistic vocabulary acquisition. Indeed, recent studies have consistently related performance in the Hebb repetition task with a range of linguistic (dis)abilities. However, despite the growing interest in the Hebb repetition effect as a theoretical construct, no previous research has ever tested whether the task used to assess Hebb learning offers a stable and reliable measure of individual performance in sequence learning. Since reliability is a necessary condition to predictive validity, in the present work, we tested whether individual ability in visual verbal Hebb repetition learning displays basic test–retest reliability. In a first experiment, Hebrew–English bilinguals performed two verbal Hebb tasks, one with English and one with Hebrew consonant letters. They were retested on the same Hebb tasks after a period of about 6 months. Overall, serial recall performance proved to be a stable and reliable capacity of an individual. By contrast, the test–retest reliability of individual learning performance in our Hebb task was close to zero. A second experiment with French speakers replicated these results and demonstrated that the concurrent learning of two repeated Hebb sequences within the same task minimally improves the reliability scores. Taken together, our results raise concerns regarding the usefulness of at least some current Hebb learning tasks in predicting linguistic (dis)abilities. The theoretical implications are discussed.



2018 · Vol 71 (4) · pp. 892-905
Author(s): Louisa Bogaerts, Noam Siegelman, Tali Ben-Porat, Ram Frost

The Hebb repetition task, an operationalization of long-term sequence learning through repetition, is the focus of renewed interest, as it is taken to provide a laboratory analogue for naturalistic vocabulary acquisition. Indeed, recent studies have consistently related performance in the Hebb repetition task with a range of linguistic (dis)abilities. However, despite the growing interest in the Hebb repetition effect as a theoretical construct, no previous research has ever tested whether the task used to assess Hebb learning offers a stable and reliable measure of individual performance in sequence learning. Since reliability is a necessary condition to predictive validity, in the present work, we tested whether individual ability in visual verbal Hebb repetition learning displays basic test–retest reliability. In a first experiment, Hebrew–English bilinguals performed two verbal Hebb tasks, one with English and one with Hebrew consonant letters. They were retested on the same Hebb tasks after a period of about 6 months. Overall, serial recall performance proved to be a stable and reliable capacity of an individual. By contrast, the test–retest reliability of individual learning performance in our Hebb task was close to zero. A second experiment with French speakers replicated these results and demonstrated that the concurrent learning of two repeated Hebb sequences within the same task minimally improves the reliability scores. Taken together, our results raise concerns regarding the usefulness of at least some current Hebb learning tasks in predicting linguistic (dis)abilities. The theoretical implications are discussed.
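As background on what such a reliability estimate involves, the sketch below computes a per-participant Hebb learning score and correlates the scores across two sessions; the score definition (slope difference between repeated and filler sequences) and the use of a simple Pearson correlation are illustrative assumptions, not the authors' analysis pipeline.

```python
import numpy as np

def hebb_learning_score(recall_accuracy, is_hebb_trial):
    """Illustrative learning score: slope of recall accuracy over trials for the
    repeated (Hebb) sequence minus the slope for non-repeated filler sequences."""
    acc = np.asarray(recall_accuracy, dtype=float)
    hebb = np.asarray(is_hebb_trial, dtype=bool)
    idx = np.arange(len(acc))

    def _slope(mask):
        return np.polyfit(idx[mask], acc[mask], 1)[0] if mask.sum() > 1 else 0.0

    return _slope(hebb) - _slope(~hebb)

def test_retest_reliability(scores_t1, scores_t2):
    # Test-retest reliability as the correlation between each participant's
    # score at time 1 and at time 2 (e.g., six months later).
    return np.corrcoef(scores_t1, scores_t2)[0, 1]
```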





Author(s): Jia Liu, Maoguo Gong, Qiguang Miao

This paper models the Hebb learning rule and proposes a neuron learning machine (NLM). The Hebb learning rule describes the plasticity of the connection between presynaptic and postsynaptic neurons and is itself unsupervised; it formulates the update gradient of the connection weights in artificial neural networks. In this paper, we construct an objective function by modeling the Hebb rule. We make a hypothesis to simplify the model and, based on this hypothesis and the stability of solutions, introduce a correlation-based constraint. Analyzing the model from the perspectives of retaining abstract information and increasing the energy-based probability of the observed data, we find that this biologically inspired model is capable of learning useful features. The NLM can also be stacked to learn hierarchical features and reformulated into a convolutional version to extract features from two-dimensional data. Experiments on single-layer and deep networks demonstrate the effectiveness of the NLM in unsupervised feature learning.
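The abstract does not state the NLM objective function, so the sketch below shows only the generic ingredient it builds on: an unsupervised Hebbian weight update kept stable by a constraint (here a per-neuron Oja-style normalization, used as a stand-in for the paper's correlation-based constraint; it is not the authors' formulation).

```python
import numpy as np

# Generic single-layer Hebbian feature learning: weights grow with the
# input-output correlation (Hebb rule) and are kept bounded by an Oja-style
# decay term.  This is a standard construction, not the NLM model itself.
def hebbian_layer(X, n_features=16, lr=0.01, epochs=10, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.01, size=(X.shape[1], n_features))
    for _ in range(epochs):
        for x in X:                      # online updates, one sample at a time
            y = x @ W                    # postsynaptic activations
            # Hebb term outer(x, y) plus per-neuron Oja decay W * y^2.
            W += lr * (np.outer(x, y) - W * (y * y))
    return W

# Feature extraction would then project the data onto the learned weights,
# e.g. features = X @ W; stacking such layers yields hierarchical features.
```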



2016 · Vol 28 (S1) · pp. 245-257
Author(s): Yunfei Yin, Hailong Yuan, Beilei Zhang


2015 · Vol 152 · pp. 27-35
Author(s): Eduard Kuriscak, Petr Marsalek, Julius Stroffek, Peter G. Toth

