Hebbian theory is a neuropsychological theory claiming that an increase in synaptic efficacy arises from a presynaptic cell's repeated and persistent stimulation of a postsynaptic cell. It is an attempt to explain synaptic plasticity, the adaptation of neurons in the brain during the learning process. Hebbian theory was introduced by Donald Hebb in his 1949 book The Organization of Behavior. The theory is also called Hebb's rule, Hebb's postulate, and cell assembly theory. Hebb states it as follows:
Let us assume that the persistence or repetition of a reverberatory activity (or "trace") tends to induce lasting cellular changes that add to its stability. ... When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.
The theory is often summarized as "Neurons that fire together, wire together." This summary is attributed to Siegrid Löwel of Göttingen University; the exact sentence is "neurons wire together if they fire together" (Löwel, S. and Singer, W. (1992), Science, 255, published January 10, 1992). However, Hebb emphasized that cell A needs to "take part in firing" cell B, and such causality can occur only if cell A fires just before, not at the same time as, cell B. This aspect of causation in Hebb's work foreshadowed what is now known about spike-timing-dependent plasticity, which requires temporal precedence.
Hebbian theory attempts to explain associative or Hebbian learning, in which simultaneous activation of cells leads to pronounced increases in synaptic strength between those cells. It also provides a biological basis for errorless learning methods for education and memory rehabilitation. In the study of neural networks in cognitive function, it is often regarded as the neuronal basis of unsupervised learning.
The general idea is an old one, that any two cells or systems of cells that are repeatedly active at the same time will tend to become 'associated' so that activity in one facilitates activity in the other.
When one cell repeatedly assists in firing another, the axon of the first cell develops synaptic knobs (or enlarges them if they already exist) in contact with the soma of the second cell.
D. Alan Allport posits additional ideas regarding cell assembly theory and its role in forming engrams, using the concept of auto-association (the brain's ability to retrieve a full pattern of information from a partial cue), described as follows:
If the inputs to a system cause the same pattern of activity to occur repeatedly, the set of active elements constituting that pattern will become increasingly strongly inter-associated. That is, each element will tend to turn on every other element and (with negative weights) to turn off the elements that do not form part of the pattern. To put it another way, the pattern as a whole will become 'auto-associated'. We may call a learned (auto-associated) pattern an engram.
Research conducted in the laboratory of Nobel laureate Eric Kandel has provided evidence supporting the role of Hebbian learning mechanisms at synapses in the marine gastropod Aplysia californica. Because synapses in the peripheral nervous system of marine invertebrates are much easier to control in experiments, Kandel's research found that Hebbian long-term potentiation along with activity-dependent presynaptic facilitation are both necessary for synaptic plasticity and classical conditioning in Aplysia californica.
While research on invertebrates has established fundamental mechanisms of learning and memory, much of the work on long-lasting synaptic changes between vertebrate neurons involves the use of non-physiological experimental stimulation of brain cells. However, some of the physiologically relevant synapse modification mechanisms that have been studied in vertebrate brains do seem to be examples of Hebbian processes. One such review indicates that long-lasting changes in synaptic strengths can be induced by physiologically relevant synaptic activity using both Hebbian and non-Hebbian mechanisms.
The following is a formulaic description of Hebbian learning (many other descriptions are possible):

$$w_{ij} = x_i x_j$$

where $w_{ij}$ is the weight of the connection from neuron $j$ to neuron $i$, and $x_i$ is the input for neuron $i$. This is an example of pattern learning, where weights are updated after every training example. In a Hopfield network, connections $w_{ij}$ are set to zero if $i = j$ (no reflexive connections allowed). With binary neurons (activations either 0 or 1), connections would be set to 1 if the connected neurons have the same activation for a pattern.
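As a concrete illustration, the sketch below implements this single-pattern update in Python; the learning-rate parameter `eta` and the array shapes are illustrative choices, not part of the original formulation:

```python
import numpy as np

def hebbian_update(w, x, eta=1.0):
    """One Hebbian pattern update: Delta w_ij = eta * x_i * x_j."""
    w = w + eta * np.outer(x, x)   # strengthen connections between co-active neurons
    np.fill_diagonal(w, 0.0)       # Hopfield convention: no reflexive connections
    return w

# One binary pattern over four neurons
w = np.zeros((4, 4))
x = np.array([1.0, 0.0, 1.0, 1.0])
w = hebbian_update(w, x)
print(w)
```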
When several training patterns are used, the expression becomes an average of the individual updates:

$$w_{ij} = \frac{1}{p} \sum_{k=1}^{p} x_i^k x_j^k$$

where $w_{ij}$ is the weight of the connection from neuron $j$ to neuron $i$, $p$ is the number of training patterns, and $x_i^k$ is the $k$-th input for neuron $i$. This is learning by epoch, with weights updated after all the training examples are presented; this form applies to both discrete and continuous training sets. Again, in a Hopfield network, connections $w_{ij}$ are set to zero if $i = j$ (no reflexive connections).
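A minimal sketch of the epoch-based version, under the same illustrative conventions (binary patterns, Hopfield-style zero diagonal):

```python
import numpy as np

def hebbian_epoch(X):
    """Epoch-based Hebbian weights: w_ij = (1/p) * sum_k x_i^k * x_j^k.

    X has shape (p, n): one row per training pattern.
    """
    p = X.shape[0]
    w = (X.T @ X) / p              # average of outer products over all patterns
    np.fill_diagonal(w, 0.0)       # no reflexive connections in a Hopfield network
    return w

# Three binary patterns over four neurons
X = np.array([[1.0, 0.0, 1.0, 1.0],
              [0.0, 1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0, 0.0]])
print(hebbian_epoch(X))
```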
A variation of Hebbian learning that takes into account phenomena such as blocking and other neural learning effects is the mathematical model of Harry Klopf. Klopf's model assumes that parts of a system with simple adaptive mechanisms can underlie more complex systems with more advanced adaptive behavior, such as neural networks (Klopf, A. H. (1972). Brain function and adaptive systems—A heterostatic theory. Technical Report AFCRL-72-0164, Air Force Cambridge Research Laboratories, Bedford, MA).
This can be shown mathematically in a simplified example. Let us work under the simplifying assumption of a single rate-based neuron of rate $y(t)$, whose inputs have rates $x_1(t) \dots x_N(t)$. The response of the neuron is usually described as a linear combination of its input, $\sum_i w_i x_i$, followed by a response function $f$:

$$y = f\!\left(\sum_{i=1}^{N} w_i x_i\right)$$

In the linear case ($f$ the identity), plain Hebbian learning, averaged over the ensemble of inputs, drives the weight vector according to $\dot{\mathbf{w}} = \eta C \mathbf{w}$, where $C = \langle \mathbf{x}\mathbf{x}^T \rangle$ is the input correlation matrix and $\eta$ the learning rate. Expanding $\mathbf{w}$ in the eigenbasis of $C$ yields a sum of exponentially growing modes, one per eigenvector, so the weights grow without bound: plain Hebbian learning is unstable.
Regardless, even for the unstable solution above, one can see that, when sufficient time has passed, one of the terms dominates over the others, and

$$\mathbf{w}(t) \approx e^{\eta \alpha^* t}\, \mathbf{c}^*$$

where $\alpha^*$ is the largest eigenvalue of $C$ and $\mathbf{c}^*$ the corresponding eigenvector; that is, the weight vector aligns with the first principal component of the input.
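A short numerical sketch can make this dominance concrete. The simulation below, with an arbitrary made-up correlation matrix, integrates the averaged dynamics $\dot{\mathbf{w}} = \eta C \mathbf{w}$ and checks that the weight direction ends up aligned with the principal eigenvector of $C$ even as its norm diverges:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative input correlation matrix <x x^T>
C = np.array([[2.0, 0.8],
              [0.8, 1.0]])

w = rng.normal(size=2)            # random initial weights
eta, dt = 0.1, 0.01
for _ in range(5000):
    w = w + eta * dt * (C @ w)    # averaged Hebbian dynamics: dw/dt = eta * C w

# Compare the direction of w with the principal eigenvector of C
eigvals, eigvecs = np.linalg.eigh(C)
c_star = eigvecs[:, np.argmax(eigvals)]
print(np.abs(w / np.linalg.norm(w) @ c_star))   # ~1.0: aligned, while |w| diverges
```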
This mechanism can be extended to performing a full PCA (principal component analysis) of the input by adding further postsynaptic neurons, provided the postsynaptic neurons are prevented from all picking up the same principal component, for example by adding lateral inhibition in the postsynaptic layer. We have thus connected Hebbian learning to PCA, which is an elementary form of unsupervised learning, in the sense that the network can pick up useful statistical aspects of the input, and "describe" them in a distilled way in its output.
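One classic construction along these lines is Oja's rule, which stabilizes the plain Hebbian update with a normalizing decay term and converges to the first principal component of the input. A minimal sketch, with synthetic data of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic zero-mean data with one dominant direction
X = rng.normal(size=(10000, 2)) @ np.array([[3.0, 0.0],
                                            [1.0, 0.5]])
X -= X.mean(axis=0)

w = rng.normal(size=2)
eta = 0.001
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)    # Oja's rule: Hebbian term minus normalizing decay

# Compare with the principal eigenvector of the sample covariance
eigvals, eigvecs = np.linalg.eigh(np.cov(X.T))
print(np.abs(w @ eigvecs[:, np.argmax(eigvals)]))  # ~1.0 (w converges to unit norm)
```

Adding further output neurons with lateral decorrelation between them (as in Sanger's generalized Hebbian algorithm) extracts subsequent components, matching the full-PCA extension described above.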
Neuroscientist Christian Keysers and psychologist David Perrett suggested that observing or hearing an individual perform an action activates brain regions as if performing the action oneself. These re-afferent sensory signals trigger activity in neurons responding to the sight, sound, and feel of the action. Because the activity of these sensory neurons will consistently overlap in time with those of the motor neurons that caused the action, Hebbian learning predicts that the synapses connecting neurons responding to the sight, sound, and feel of an action and those of the neurons triggering the action should be potentiated. The same is true while people look at themselves in the mirror, hear themselves babble, or are imitated by others. After repeated occurrences of this re-afference, the synapses connecting the sensory and motor representations of an action are so strong that the motor neurons start firing to the sound or the vision of the action, and a mirror neuron is created.
Numerous experiments provide evidence for the idea that Hebbian learning is crucial to the formation of mirror neurons. Evidence reveals that motor programs can be triggered by novel auditory or visual stimuli after repeated pairing of the stimulus with the execution of the motor program. For instance, people who have never played the piano do not activate brain regions involved in playing the piano when listening to piano music. Five hours of piano lessons, in which the participant is exposed to the sound of the piano each time they press a key, have been shown to be sufficient to trigger activity in motor regions of the brain when piano music is heard later. Consistent with the fact that spike-timing-dependent plasticity occurs only if the presynaptic neuron's firing predicts the postsynaptic neuron's firing, the link between sensory stimuli and motor programs also seems to be potentiated only if the stimulus is contingent on the motor program.
In AI, Hebbian learning has seen applications beyond traditional neural networks. One significant advancement is in reinforcement learning algorithms, where Hebbian-like learning is used to update the weights based on the timing and strength of stimuli during training phases. Some researchers have adapted Hebbian principles to develop more biologically plausible models for learning in artificial systems, which may improve model efficiency and convergence in AI applications. (Oja, E. (1982). A simplified neuron model as a principal component analyzer. *Journal of Mathematical Biology*, 15(3), 267–273; Rumelhart, D. E., & McClelland, J. L. (1986). *Parallel Distributed Processing: Explorations in the Microstructure of Cognition*. MIT Press.)
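As one illustration of how such Hebbian-like updates appear in reinforcement-learning settings, the sketch below shows a three-factor, reward-modulated Hebbian rule in which a scalar reward signal gates the correlational term; this is an illustrative composite, not a specific published algorithm:

```python
import numpy as np

def reward_modulated_hebbian(w, pre, post, reward, eta=0.05):
    """Three-factor update: Hebbian co-activity gated by a scalar reward signal.

    pre, post: activity vectors of pre- and postsynaptic populations.
    reward:    scalar feedback; positive reinforces co-active connections,
               negative weakens them.
    """
    return w + eta * reward * np.outer(post, pre)

# Toy usage: update a 3x2 weight matrix after a rewarded trial
w = np.zeros((3, 2))
pre = np.array([1.0, 0.0])
post = np.array([0.5, 1.0, 0.0])
w = reward_modulated_hebbian(w, pre, post, reward=+1.0)
print(w)
```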
A growing area of interest is the application of Hebbian learning in quantum computing. While classical neural networks are the primary area of application for Hebbian theory, recent studies have begun exploring the potential for quantum-inspired algorithms. These algorithms leverage the principles of quantum superposition and entanglement to enhance learning processes in quantum systems (Huang, H., & Li, Y. (2019). A Quantum-Inspired Hebbian Learning Algorithm for Neural Networks. *Journal of Quantum Information Science*, 9(2), 111–124). Current research is exploring how Hebbian principles could inform the development of more efficient quantum machine learning models.
New computational models have emerged that refine or extend Hebbian learning. For example, some models now account for the precise timing of neural spikes (as in spike-timing-dependent plasticity), while others have integrated aspects of neuromodulation to account for how neurotransmitters like dopamine affect the strength of synaptic connections. These advanced models provide a more nuanced understanding of how Hebbian learning operates in the brain and are contributing to the development of more realistic computational models. (Miller, P., & Conver, A. (2012). Computational models of synaptic plasticity and learning. *Current Opinion in Neurobiology*, 22(5), 648–655; Béïque, J. C., & Andrade, R. (2012). Neuromodulation of synaptic plasticity in the hippocampus: Implications for learning and memory. *Frontiers in Synaptic Neuroscience*, 4, 15.)
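For the spike-timing-dependent models mentioned above, the canonical pair-based rule weights each pre/post spike pairing by an exponential function of the timing difference. A minimal sketch, with illustrative parameter values rather than values from any specific study:

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for dt = t_post - t_pre (in ms).

    Pre-before-post (dt > 0) potentiates; post-before-pre (dt < 0) depresses.
    """
    if dt > 0:
        return a_plus * np.exp(-dt / tau_plus)
    else:
        return -a_minus * np.exp(dt / tau_minus)

print(stdp_dw(5.0))    # pre fires 5 ms before post -> positive (potentiation)
print(stdp_dw(-5.0))   # post fires first -> negative (depression)
```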
Recent research on Hebbian learning has focused on the role of inhibitory neurons, which are often overlooked in traditional Hebbian models. While classic Hebbian theory primarily focuses on excitatory neurons, more comprehensive models of neural learning now consider the balanced interaction between excitatory and inhibitory synapses. Studies suggest that inhibitory neurons can provide critical regulation for maintaining stability in neural circuits and might prevent runaway positive feedback in Hebbian learning. (Markram, H., Lübke, J., Frotscher, M., & Sakmann, B. (1997). Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. *Science*, 275(5297), 213–215; Cohen, M. R., & Kohn, A. (2011). Measuring and interpreting neuronal correlations. *Nature Neuroscience*, 14(7), 811–819.)
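As a toy illustration of this stabilizing role, the sketch below pairs a runaway Hebbian excitatory update with a simple inhibition-like normalization that keeps total synaptic drive bounded; the specific normalization is an illustrative choice, not a biological claim:

```python
import numpy as np

rng = np.random.default_rng(2)
w = np.abs(rng.normal(size=5))     # excitatory weights onto one neuron
eta, w_total = 0.1, 1.0

for _ in range(1000):
    x = np.abs(rng.normal(size=5)) # presynaptic rates
    y = w @ x                      # postsynaptic rate
    w += eta * y * x               # plain Hebbian term: grows without bound...
    w *= w_total / w.sum()         # ...but inhibition-like rescaling bounds it

print(w.sum())                     # stays at w_total instead of diverging
```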