Learning Mechanisms in Networks of Spiking Neurons
Abstract: In spiking neural networks, signals are transferred by action potentials. The information is encoded in the patterns of neuron activities, i.e. spikes. These features create significant differences between spiking neural networks and classical neural networks. Since spiking neural networks are based on spiking neuron models that are very close to biological neurons, many of the principles found in biological neuroscience can be applied to these networks. In this chapter, a number of learning mechanisms for spiking neural networks are introduced. These learning mechanisms can be applied to explain the behaviours of networks in the brain, and can also be applied in artificial intelligence systems to process complex information represented by biological stimuli.
Key words: spiking neural networks; learning; spiking neuron models; spike timing-dependent plasticity; neuron encoding; co-ordinate transformation.
1. INTRODUCTION
Hodgkin and Huxley [3] performed experiments on the giant axon of the
squid and found three different types of ion current. The equations of
Hodgkin and Huxley describe the electro-physiological properties of the
giant axon of the squid. The basic mechanism of generating action potentials
or spikes is a short influx of sodium ions that is followed by an efflux of potassium ions. The membrane potential v(t) is governed by

C_m \frac{dv(t)}{dt} = I_{syn} - \sum_j I_j \qquad (7.1)

where Cm is the membrane capacitance, Isyn the synaptic input current, and Ij is
the current through ion channel j. The Hodgkin-Huxley model describes three types of channels, which can be represented by the equivalent circuit in Fig. 7-1. All channels may be characterized by their resistance or,
equivalently, by their conductance. The leakage channel is described by a
voltage-independent conductance gL; the conductance of the other ion
channels is voltage and time dependent. If all channels are open, they
transmit currents with a maximum conductance gNa or gK, respectively.
Normally, some of the channels are blocked. The probability that a channel
is open is described by additional variables m, n, and h. The combined action
of m and h controls the Na+ channels. The K+ gates are controlled by n.
Specifically, Hodgkin and Huxley formulated the three current components
as
\sum_j I_j = g_{Na} m^3 h\,(v(t) - E_{Na}) + g_K n^4 (v(t) - E_K) + g_L (v(t) - E_L) \qquad (7.2)
The parameters ENa, EK, and EL are the reversal potentials. Reversal potentials and conductances are empirical parameters obtained from biological neurons. For example, a typical set of parameters is: ENa = 50 mV; EK = -77 mV; EL = -54.4 mV; gNa = 120 mS/cm2; gK = 36 mS/cm2; gL = 0.3 mS/cm2. The three gating variables are governed by the following differential equations.
Figure 7-1. Equivalent circuit for the Hodgkin-Huxley model: the membrane capacitance Cm lies in parallel with the leakage, K+ and Na+ channels (conductances gL, gK, gNa; reversal potentials EL, EK, ENa), driven by the synaptic current Isyn.
\dot{m} = \alpha_m(v)(1 - m) - \beta_m(v)\,m
\dot{n} = \alpha_n(v)(1 - n) - \beta_n(v)\,n \qquad (7.3)
\dot{h} = \alpha_h(v)(1 - h) - \beta_h(v)\,h
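As a concrete illustration, the following Python sketch integrates (7.1)-(7.3) with the forward-Euler method. The chapter does not list the rate functions α(v) and β(v), so the standard Hodgkin-Huxley formulations (for a resting potential near -65 mV) are assumed here, together with Cm = 1 µF/cm2 and a constant synaptic current; this is a minimal sketch, not the chapter's simulation code.

import numpy as np

# Standard HH rate functions (1/ms), v in mV; assumed, not listed in the text.
def alpha_m(v): return 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
def beta_m(v):  return 4.0 * np.exp(-(v + 65.0) / 18.0)
def alpha_h(v): return 0.07 * np.exp(-(v + 65.0) / 20.0)
def beta_h(v):  return 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
def alpha_n(v): return 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
def beta_n(v):  return 0.125 * np.exp(-(v + 65.0) / 80.0)

# Parameters from the text: reversal potentials (mV), conductances (mS/cm2).
E_Na, E_K, E_L = 50.0, -77.0, -54.4
g_Na, g_K, g_L = 120.0, 36.0, 0.3
C_m = 1.0  # uF/cm2, assumed

def simulate_hh(I_syn=10.0, T=50.0, dt=0.01):
    """Forward-Euler integration of equations (7.1)-(7.3)."""
    v, m, h, n = -65.0, 0.05, 0.6, 0.32  # typical resting-state values
    trace = []
    for _ in range(int(T / dt)):
        I_ion = (g_Na * m**3 * h * (v - E_Na)      # equation (7.2)
                 + g_K * n**4 * (v - E_K)
                 + g_L * (v - E_L))
        v += dt * (I_syn - I_ion) / C_m            # equation (7.1)
        m += dt * (alpha_m(v) * (1 - m) - beta_m(v) * m)   # equation (7.3)
        h += dt * (alpha_h(v) * (1 - h) - beta_h(v) * h)
        n += dt * (alpha_n(v) * (1 - n) - beta_n(v) * n)
        trace.append(v)
    return np.array(trace)

With a sufficiently strong constant I_syn, the trace produced by simulate_hh shows the repetitive spiking characteristic of the model.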
When the membrane potential reaches the firing threshold, an action potential is generated. Fig. 7-2 shows a neuron that receives spike trains from three afferent neurons in a receptive field.
Figure 7-2. Neuron i receives spike trains from afferent Neurons 1, ..., j, ..., n; each synapse j contributes a weight wj and a conductance gjs(t) that drive the membrane potential v(t).
The membrane potential of a conductance-based integrate-and-fire (I&F) neuron can be written as

C_m \frac{dv(t)}{dt} = -g_L\,(v(t) - E_L) - \sum_{j=1}^{n} w_j g_j s(t)\,(v(t) - E_j) \qquad (7.6)

where wj and gjs(t) are the weight and conductance of afferent synapse j, and Ej is its reversal potential; when v(t) reaches the threshold, a spike is emitted and v(t) is reset. This conductance-based I&F neuron model is very close to the
Hodgkin-Huxley model in the NEURON software. The simulation results for both models are illustrated in Fig. 7-4; this comparison was performed in the SenseMaker project [22].
Figure 7-3. I&F neuron response to spike trains with different frequencies
Figure 7-4. Firing properties of a single neuron bombarded by random synaptic inputs. Both neurons were bombarded by Poisson-distributed random synaptic (AMPA) inputs at different firing rates (10 Hz-100 Hz), with a maximal conductance of 100 nS.
(Figure: a circle chain of 40 neurons, with Neuron 0 (or 40), Neuron 10, Neuron 20 and Neuron 30 marked around the ring.)
Figure 7-6. The firing-pattern changes of the neuron chain represent Φ(t) = t/200 ms.
Figure 7-7. The firing-pattern record for x(t) = 20 - 10 cos(2πt/3600).
4. STDP IMPLEMENTATION
(Figure: spike trains from first-layer neurons and a spike train from a control neuron converge on a neuron through STDP synapses.)
Let the input layer represent variable x and the output layer represent variable y. By using the STDP learning mechanism, the two-layer network shown in Fig. 7-11 can be trained to perform any non-linear function y = f(x). At the training stage, a training stimulus is fed into the output layer. As shown in Fig. 7-11, the training layer generates the target stimulus according to f(x) and feeds it into the output layer. A series of stimuli is randomly generated and presented to the input layer. At the same time, the training layer applies the same series of stimuli to generate target stimuli for the output layer. After STDP learning, the two-layer network can perform the function y = f(x) without any training stimuli from the training layer, i.e. after removal of the training stimuli.
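The weight update itself can be sketched in a few lines. The fragment below implements the additive pair-based STDP rule in the style of Song et al. [23], with weights clipped to [0, w_max] in the spirit of the weight-limitation constraints of [8]; the parameter values and the toy spike lists are illustrative assumptions, not the values used in the chapter.

import numpy as np

def stdp_dw(t_pre, t_post, A_plus=0.005, A_minus=0.00525,
            tau_plus=20.0, tau_minus=20.0):
    """Weight change for one pre/post spike pair (times in ms).

    Pre-before-post potentiates, post-before-pre depresses, with
    exponentially decaying windows (additive STDP, after [23])."""
    dt = t_post - t_pre
    if dt > 0:
        return A_plus * np.exp(-dt / tau_plus)
    return -A_minus * np.exp(dt / tau_minus)

# Toy spike data (hypothetical): lists of (time_ms, neuron_index) pairs.
input_spikes = [(5.0, 50), (15.0, 50), (25.0, 51)]
output_spikes = [(8.0, 94), (18.0, 94)]

# Input -> output weights; the training layer drives the output neurons,
# so correlated pre/post spike timing shapes w toward y = f(x).
w_max = 0.01
w = np.random.uniform(0.0, w_max, size=(100, 100))
for t_pre, i in input_spikes:
    for t_post, j in output_spikes:
        w[i, j] = np.clip(w[i, j] + stdp_dw(t_pre, t_post), 0.0, w_max)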
For example, an SNN with three 100-neuron layers was trained to perform y = sin(x). The input layer is set up as a circle chain with 100 neurons, with zero degrees corresponding to Neuron 50. The output layer and the training layer each contain 100 neurons. If y is regarded as a one-dimensional co-ordinate, the origin of the y co-ordinate is set to Neuron 50. Let y = 1 correspond to Neuron 94. Because the stimulus is a bell-shaped firing-rate distribution, 6 neurons at each end of the layer are used to accommodate the stimulus; similarly, let y = -1 correspond to Neuron 6 instead of Neuron 1. If a stimulus is presented at x, the firing-rate distribution of the bell-shaped stimulus is represented by the following expression.
f_x(x') = R_{max} \exp\!\left( \frac{\cos\!\left(\frac{2\pi}{N}(x - x')\right) - 1}{\delta^2} \right) \qquad (7.9)
where Rmax is the maximal firing rate, N is the number of neurons in the layer, x' indexes the neurons adjacent to the neuron at position x, and δ is a constant. If x = 0, the centre of the stimulus is at Neuron 50. Note that not only Neuron 50 responds to the stimulus, but also the neurons adjacent to Neuron 50. This is very different from the single values in classical neural networks or the digital numbers in Turing computers. In order to generate the stimulus easily, the firing rate can be transformed to an Inter-Spike Interval (ISI). The ISI for each neuron in the x layer can be represented as follows.
Figure 7-11. Two-layer SNN for learning y = f(x). The input layer encodes x from -180° to +180° and is connected through STDP synapses to the output layer, which encodes y from -1 to +1. A training layer, connected to the output layer through fixed weights, supplies the target stimulus y = f(x) during training.
Figure 7-12. Weight distribution for connections between input and output neurons
T_{isi}(x') = \mathrm{round}\!\left( -\frac{1}{f_x(x')}\,\ln(\mathrm{rand}) \right) + 6 \ \text{(ms)} \qquad (7.10)
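A sketch of this encoding, combining (7.9) and (7.10), might look as follows in Python. The conversion of fx(x') from spikes/s to spikes/ms is an assumption about the units in (7.10), and the 6 ms offset is treated as a refractory period.

import numpy as np

R_max, N, delta = 80.0, 100, 0.04  # values used in the chapter's experiments

def firing_rate(x, x_prime):
    """Bell-shaped population firing rate of equation (7.9), in spikes/s."""
    return R_max * np.exp((np.cos(2 * np.pi * (x - x_prime) / N) - 1) / delta**2)

def spike_train(rate_hz, duration_ms=200, refractory_ms=6):
    """Poisson spike times generated from the ISI formula (7.10)."""
    rate_per_ms = rate_hz / 1000.0
    times, t = [], 0.0
    while rate_per_ms > 1e-9:          # skip neurons with negligible rate
        t += round(-np.log(np.random.rand()) / rate_per_ms) + refractory_ms
        if t >= duration_ms:
            break
        times.append(t)
    return times

# A stimulus centred at x = 0 (Neuron 50): neurons up to 6 positions away
# also fire, so the population, not a single neuron, carries the value.
trains = {50 + d: spike_train(firing_rate(0, d)) for d in range(-6, 7)}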
Figure 7-15. Weight strength distribution from the intermediate layer to the z neuron layer: (a) weights to output neuron 1; (b) weights to output neuron 13.
Figure 7-16. Test results for z = x + y: (a) the two input stimuli, upper row for x, lower row for y; (b) output of the z neuron layer.
When two stimuli are presented at the input neuron layers x and y, the target stimulus for z = x + y is injected into the z layer. The STDP synapses adapt to the stimuli, and after training the weights between the intermediate layer and the z layer perform z = x + y. In the experiment, the neuron layers x, y and z contain 20 neurons each, and the intermediate layer has 20×20 = 400
neurons. The weight distributions for Neuron 1 and Neuron 13 in the z layer
are shown in Fig. 7-15. The test results are shown in Fig. 7-16.
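The basis-function structure of this network can be sketched in rate terms. The alignment below, in which intermediate neuron (i, j) projects most strongly to z-neuron k ≈ i + j - n/2, is a hypothetical idealization of the trained weights in Fig. 7-15, not a distribution reported in the chapter.

import numpy as np

n = 20          # neurons per 1D layer (x, y and z), as in the experiment
sigma = 1.5     # bell width in neuron units (illustrative)

def bell(centre, size=n):
    """Bell-shaped population activity over a 1D layer (rate-based sketch)."""
    idx = np.arange(size)
    return np.exp(-(idx - centre) ** 2 / (2 * sigma ** 2))

# Intermediate neuron (i, j) combines x-neuron i and y-neuron j; its assumed
# trained weight onto z-neuron k peaks near k = i + j - n/2, so that the
# population peak in the z layer encodes x + y.
W = np.zeros((n, n, n))
for i in range(n):
    for j in range(n):
        W[i, j] = bell(i + j - n // 2)

x0, y0 = 6, 11
inter = np.outer(bell(x0), bell(y0))              # intermediate-layer activity
z_act = np.tensordot(inter, W, axes=([0, 1], [0, 1]))
print(int(np.argmax(z_act)))                      # ~7, i.e. x0 + y0 - n/2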
The traditional XOR problem and a phase-encoding scheme are applied in this section to illustrate the STDP learning paradigm. In the phase-encoding scheme, all spike trains are assumed to have the same firing frequency; different spike trains fire at different phases. For example, suppose that the period is 10 ms and each phase corresponds to a time interval of 1 ms. Each period thus contains 10 phases. In order to indicate the periods, sine curves are plotted in Fig. 7-17. Phases can also be represented in radians or degrees. A firing time at phase 7 stands for logical '0', and a firing time at phase 2 stands for logical '1'. The logical '0' and '1' are represented by the spike trains (a) and (b) in Fig. 7-17. The XOR problem can be represented as a set of training patterns shown in Table 7-2. As it takes time for the action potential to travel from the delay neurons to neurons N1, N2, N3 and N4, an output spike at phase 3 represents logical '0', and an output spike at phase 8 represents logical '1'. These patterns, sketched in code below, are applied to train the spiking neural network shown in Fig. 7-18.
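For reference, the four phase-encoded training patterns can be written out directly. The sketch below builds the spike trains of Fig. 7-17 and pairs them with the output phases listed in Fig. 7-19; Table 7-2 itself is not reproduced in this chapter's text, so the pattern ordering here is illustrative.

def phase_train(phase, period=10, duration=40):
    """Spike times (ms) for a phase-encoded bit: one spike per period.

    Phase 7 (ph7) encodes logical '0' and phase 2 (ph2) encodes
    logical '1', as in Fig. 7-17."""
    return list(range(phase, duration, period))

PH0, PH1 = 7, 2           # input phases for logical '0' and '1'
OUT0, OUT1 = 3, 8         # output phases for logical '0' and '1'

# (input-1 spikes, input-2 spikes, target output phase), matching Fig. 7-19.
patterns = [
    (phase_train(PH0), phase_train(PH0), OUT0),  # 0 XOR 0 = 0
    (phase_train(PH1), phase_train(PH1), OUT0),  # 1 XOR 1 = 0
    (phase_train(PH1), phase_train(PH0), OUT1),  # 1 XOR 0 = 1
    (phase_train(PH0), phase_train(PH1), OUT1),  # 0 XOR 1 = 1
]

Here phase_train(7) yields spikes at 7, 17, 27 and 37 ms, and phase_train(2) yields spikes at 2, 12, 22 and 32 ms, exactly the trains of Fig. 7-17.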
Fig. 7-18 shows the spiking neural network for the XOR problem. There are two inputs and one output in the network. Each input is connected to a set of delay neurons, each through a synapse with a specific delay.
Figure 7-17. Phase-encoding spike trains for logical '0' and '1': (a) phase 7 (ph7) stands for logical '0', with spikes at 7, 17, 27 and 37 ms; (b) phase 2 (ph2) stands for logical '1', with spikes at 2, 12, 22 and 32 ms.
Figure 7-18. The spiking neural network for the XOR problem. Input-1 and Input-2 each feed a set of delay neurons (phase 0 to phase 9), which are connected through STDP synapses to the pattern-recognition neurons N1-N4 that drive the output.
Figure 7-19. Test results of the spiking neural network for the XOR problem: inputs (ph7, ph7) produce output ph3; (ph2, ph2) produce ph3; (ph2, ph7) produce ph8; and (ph7, ph2) produce ph8.
N1, N2, N3, and N4 are four pattern-recognition neurons that are fully connected to all delay neurons with STDP synapses. These connections ensure that the network can adapt to the training patterns by the STDP rule. The four pattern-recognition neurons are connected to each other with inhibitory synapses. These inhibitory synapses create a competition mechanism among the four pattern-recognition neurons: once a neuron fires, it inhibits the other neurons from firing. This makes it possible for each neuron to respond to one stable input pattern. Since there are four patterns in the XOR problem, four neurons are employed in this layer.
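The effect of the inhibitory synapses can be sketched with four leaky integrators: the first neuron to reach threshold fires and, through the inhibition, silences the rest. All constants here are illustrative assumptions; the chapter does not give the recognition neurons' parameters.

import numpy as np

def winner_take_all(drive, theta=1.0, tau=10.0, dt=1.0, T=200.0):
    """Four leaky integrators with hard competition (illustrative).

    drive: array of four constant input currents; the neuron that crosses
    the threshold first wins and inhibits the others from firing."""
    v = np.zeros(4)
    t = 0.0
    while t < T:
        v += dt * (-v / tau + drive)      # leaky integration of the drive
        if (v >= theta).any():
            return int(np.argmax(v)), t   # winner suppresses the rest
        t += dt
    return None, T

# Neuron 2 receives the strongest (best pattern-matched) drive, so it wins.
print(winner_take_all(np.array([0.02, 0.03, 0.15, 0.01])))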
If one wants to train the network to recognize XOR pattern 1 in Table 7-2, the phase-encoding spike train (b) is fed into input-1 and input-2. At the same time, the target output spike train (ph8) is injected into neuron N1. After about 150 ms of STDP adaptation, the connection weights between N1 and the delay neurons converge to a stable distribution, and neuron N1 can respond to the input pattern. Similarly, neurons N2, N3, and N4 can be trained to recognize patterns 2, 3, and 4. After this, the network can perform the XOR function. The test results are shown in Fig. 7-19.
Figure 7-20. An SNN model for 2D co-ordinate transformation. (x, y) is the co-ordinate of the touch area. (a) Visual pathway: the retinal neuron layer is a 2D layer with 40×40 neurons connected to the X and Y neuron layers with fixed weights. (b) Haptic pathway: L1 and L2 are the arm segments; θ and Φ are the arm angles, each represented by a 1D neuron layer. Each θ neuron is connected to the neurons within a corresponding vertical rectangle in the 2D intermediate layer, and each Φ neuron to the neurons within a corresponding horizontal rectangle. The neurons in the intermediate layer are fully connected to the X and Y neuron layers with STDP synapses. These connections are adapted, under the STDP rules, in response to the attentional visual stimulus and the haptic stimulus.
If the eyes focus on a point (x, y) at the touch area, a visual stimulus can be
generated and transferred to the X and Y bimodal neuron layers through the
visual pathway. Therefore, the visual signal can be applied to train the SNN
for the haptic pathway. If a finger touches the point (x, y), a haptic stimulus
will trigger (θ, Φ) stimuli corresponding to the arm position. The (θ, Φ) stimuli are transferred to the (X, Y) bimodal neuron layers through the haptic pathway. In this model, the synaptic strengths for the visual pathway are assumed to be fixed. Each neuron in the X layer is connected to retinal neurons with a vertical-line receptive field, as shown in Fig. 7-20, and each neuron in the Y layer is connected to retinal neurons with a horizontal-line receptive field. In these experiments, Rmax for the bell-shaped stimuli is set to 80 spikes/s, δ is set to 0.04, and 40 neurons are employed to encode each of the θ and Φ layers. The 2D intermediate layer contains 1600 neurons and the training layer 80 neurons; the X and Y layers contain 80 neurons each.
After training, the SNN can transform the (θ, Φ) stimuli into (X, Y) output neuron spike activities. In order to test the SNN, suppose that the forearm turns around at a speed of 40° per second, as shown in Fig. 7-21; the circle is the track of the finger. The values of (θ, Φ) are applied to generate Poisson-process spike trains for the θ and Φ layers according to (7.9) and (7.10). When the finger follows the track of the circle, two stimuli are generated corresponding to the (θ, Φ) of the arm. These stimuli are shown in the left panel of Fig. 7-22. When the two stimuli are input into the network, the resulting outputs of the (X, Y) neuron layers are displayed in the right panel of Fig. 7-22. The neuron firing rate at the output layer is a bell-shaped distribution. Converting these firing rates to single values of X and Y, we can demonstrate that the SNN is capable of transforming the body-centred co-ordinates (θ, Φ) to the Cartesian representation (X, Y) given by the equations
X = L[cos(θ) + cos(θ + Φ)] (7.11)
Y = L[sin(θ) + sin(θ + Φ)] (7.12)
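Equations (7.11) and (7.12) are ordinary two-link forward kinematics and can be checked directly. In the snippet below the segment length L = 18 is a guess read off the 2L = 36 annotation in Fig. 7-21, not a value stated in the text.

import numpy as np

def arm_to_cartesian(theta_deg, phi_deg, L=18.0):
    """Finger position from arm angles, per equations (7.11)-(7.12)."""
    th, ph = np.radians(theta_deg), np.radians(phi_deg)
    X = L * (np.cos(th) + np.cos(th + ph))   # (7.11)
    Y = L * (np.sin(th) + np.sin(th + ph))   # (7.12)
    return X, Y

# With theta fixed at 180 degrees, sweeping phi reproduces the special case
# X = L(-1 - cos(phi)), Y = -L sin(phi) quoted below.
for phi in (0, 90, 180, 270):
    print(phi, arm_to_cartesian(180, phi))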
The spike-train raster in the upper-left panel of Fig. 7-22 represents the stimulus corresponding to θ = 180°, which persists for 8000 ms. The stimulus for the Φ neuron layer is shown in the lower-left panel: a bell-shaped firing-rate distribution that stays for 200 ms at each successive position Φ = 0°, 9°, 18°, ..., 360°. These changes of (θ, Φ) correspond to the finger moving along a circle with radius L. According to (7.11) and (7.12), the output is X = L(-1 - cos(Φ)) and Y = -L sin(Φ). These mathematical results are consistent with the SNN outputs shown in the right panel.
The results of learning are stored in the weight distribution of the
connections between the 2D intermediate layer and (X, Y) layers. After
learning, the haptic pathway in the SNN can transform the arm position (θ,
Φ) to (X, Y) bimodal neuron layer activities. Actually, θ and Φ are based on body-centred co-ordinates.
Figure 7-21. Test track of the finger: a circle of radius L (with 2L = 36 and 4L = 72 in neuron-layer units) traced over 8000 ms, with the arm position marked at t = 0, 2000, 4000, 6000 and 8000 ms against the X and Y neuron layers (Neurons 1-80).
Figure 7-22. Co-ordinate transformation from body-centred co-ordinate (θ, Φ) to (X, Y).
7. CONCLUSION
ACKNOWLEDGEMENT
REFERENCES
[1] Maass, W., Schnitger, G., and Sontag, E.: On the computational power of sigmoid versus
Boolean threshold circuits. Proc. of the 32nd Annual IEEE Symposium on Foundations of
Computer Science. (1991)767-776
[2] Maass, W.: Networks of spiking neurons: The third generation of neural network models.
Neural Networks. 10(9) (1997) 1659-1671
[3] Hodgkin, A. and Huxley, A.: A quantitative description of membrane current and its
application to conduction and excitation in nerve. Journal of Physiology. (London) Vol.
117, (1952)500-544
[4] Gerstner, W., and Kistler, W.: Spiking Neuron Models. Single Neurons, Populations,
Plasticity. Cambridge University Press, (2002)
[5] Melamed, O., Gerstner, W., Maass, W., Tsodyks, M. and Markram, H.: Coding and
Learning of behavioral sequences, Trends in Neurosciences, Vol.27 (2004)11-14
[6] Theunissen, F.E. and Miller, J.P.: Temporal Encoding in Nervous Systems: A Rigorous
Definition. Journal of Computational Neuroscience. (1995)2:149-162
[7] Bohte, S.M., Kok, J.N. and Poutré, H.L.: SpikeProp: Error-Backpropagation for Networks
of Spiking Neurons. Neurocomputing. 48(1-4) (2002)17-37
[8] Wu, Q.X., McGinnity, T.M., Maguire L.P., Glackin, B. and Belatreche, A.: Supervised
Training of Spiking Neural Networks With Weight Limitation Constraints. Proceedings of
International conference on Brain Inspired Cognitive Systems. University of Stirling,
Scotland, UK, (2004)
[9] Sohn, J.W., Zhang, B.T., and Kaang, B.K.: Temporal Pattern Recognition Using a Spiking
Neural Network with Delays. Proceedings of the International Joint Conference on Neural
Networks (IJCNN'99). vol. 4 (1999)2590-2593
[10] Lysetskiy, M., Lozowski, A., and Zurada, J.M.: Invariant Recognition of Spatio-Temporal Patterns in the Olfactory System Model. Neural Processing Letters. 15 (2002) 225-234
[11] Choe, Y. and Miikkulainen, R.: Self-organization and segmentation in a laterally
connected orientation map of spiking neurons. Neurocomputing. 21(1998)139-157
[12] Sirosh, J., and Miikkulainen, R.: Topographic receptive fields and patterned lateral interaction in a self-organizing model of the primary visual cortex. Neural Computation. 9
(1997) 577-594
[13] Bi, G.Q., and Poo, M.M.: Distributed synaptic modification in neural networks induced
by patterned stimulation. Nature, 401 (1999)792 - 796
[14] Bi, G.Q., Poo, M.M.: Synaptic modifications in cultured hippocampal neurons:
dependence on spike timing, synaptic strength, and postsynaptic cell type. Journal of
Neuroscience. 18 (1998)10464–10472
[15] Bell, C.C., Han, V.Z., Sugawara, Y., and Grant, K.: Synaptic plasticity in the mormyrid
electrosensory lobe. Journal of Experimental Biology. 202(1999)1339–1347
[16] van Rossum, M.C.W., Bi, G.Q., and Turrigiano, G.G.: Stable Hebbian Learning from Spike
Timing-Dependent Plasticity. The Journal of Neuroscience. 20(23)(2000)8812–8821
[17] NEURON software download website: http://neuron.duke.edu/
[18] Wu, Q.X., McGinnity, T.M., Maguire, L.P., Glackin, B. and Belatreche, A.: Learning
under weight constraints in networks of temporal encoding spiking neurons. International
Journal of Neurocomputing. Special issue on Brain Inspired Cognitive Systems. (2006) in
press.
[19] Müller, E.: Simulation of High-Conductance States in Cortical Neural Networks. Masters
thesis, University of Heidelberg, HD-KIP-03-22, (2003)
[20] Koch, C.: Biophysics of Computation: Information Processing in Single Neurons. Oxford
University Press, (1999)
[21] Dayan, P., and Abbott, L.F.: Theoretical Neuroscience: Computational and Mathematical
Modeling of Neural Systems. The MIT Press, Cambridge, Massachusetts, (2001).
[22] SenseMaker Project (IST–2001-34712) funded by the European Union under
the “Information Society Technologies” Programme (2002-2006)
[23] Song, S., Miller, K.D., and Abbott, L.F.: Competitive Hebbian learning through spike-timing dependent synaptic plasticity. Nature Neuroscience. 3 (2000) 919-926
[24] Song, S. and Abbott, L.F.: Column and Map Development and Cortical Re-Mapping
Through Spike-Timing Dependent Plasticity. Neuron, 32 (2001) 339-350
[25] Deneve S., Latham P. E. and Pouget A.: Efficient computation and cue integration with
noisy population codes, Nature Neuroscience, 4 (2001) 826-831
[26] Taylor-Clarke, M., Kennett, S., and Haggard, P.: Persistence of visual-tactile enhancement in humans. Neuroscience Letters. 354(1) (2004) 22-25
[27] Atkins, J. E., Jacobs, R.A., and Knill, D.C.: Experience-dependent visual cue
recalibration based on discrepancies between visual and haptic percepts. Vision Research.
43(25) (2003) 2603-2613
[28] Spence, C., Pavani, F., and Driver, J.: Crossmodal links between vision and touch in
covert endogenous spatial attention. Journal of Experimental Psychology: Human
Perception and Performance. 26 (2000) 1298–1319
[29] Eimer M., Driver, J.: An event-related brain potential study of crossmodal links in spatial
attention between vision and touch. Psychophysiology. 37 (2000) 697–705
[30] Graziano, M.S.A., and Gross, C.G.: The representation of extrapersonal space: A
possible role for bimodal, visual–tactile neurons, in: M.S. Gazzaniga (Ed.), The Cognitive
Neurosciences, MIT Press, Cambridge, MA, (1994) 1021–1034
[31] Zhou, Y.D., and Fuster, J.M.: Visuo-tactile cross-modal associations in cortical
somatosensory cells. Proc. National Academy of Sciences, USA. 97 (2000) 9777–9782
[32] Meftah, E.M., Shenasa, J., and Chapman, C.E.: Effects of a cross-modal manipulation of attention on somatosensory cortical neuronal responses to tactile stimuli in the monkey. Journal of Neurophysiology. 88 (2002) 3133-3149
[33] Kennett, S., Taylor-Clarke, M., and Haggard, P.: Noninformative vision improves the
spatial resolution of touch in humans. Current Biology. 11 (2001) 1188–1191
[34] Johansson, R.S., and Westling, G.: Signals in tactile afferents from the fingers eliciting
adaptive motor-responses during precision grip. Experimental Brain Research. 66 (1987)
141–154
[35] Galati, G., Committeri, G., Sanes J.N., and Pizzamiglio L.: Spatial coding of visual and
somatic sensory information in body-centred coordinates. European Journal of
Neuroscience. Blackwell Publishing. 14(4) (2001) 737-748
[36] Colby, C.L. and Goldberg, M.E.: Space and attention in parietal cortex. Annual Review
of Neuroscience. 22 (1999) 319-349
[37] Gross, C.G., and Graziano, M.S.A.: Multiple representations of space in the brain.
Neuroscientist, 1 (1995) 43-50
[38] Rizzolatti, G., Fogassi, L. and Gallese, V.: Parietal cortex: from sight to action. Current
Opinion in Neurobiology. 7 (1997) 562-567
[39] Taylor-Clarke M., Kennett S., and Haggard P.: Vision modulates somatosensory cortical
processing. Current Biology. 12 (2002) 233–236
[40] Iriki, A., Tanaka, M., and Iwamura, Y.: Attention-induced neuronal activity in the
monkey somatosensory cortex revealed by pupillometrics. Neuroscience Research. 25
(1996) 173–181
[41] Thorpe, S., Delorme A. and Rullen, R.V.: Spike-based strategies for rapid processing.
Neural Networks.14(6-7) (2001)715-725
[42] Wu, Q.X., McGinnity, T.M., Maguire, L.P., Belatreche, A. and Glackin, B.: Adaptive
Co-Ordinate Transformation Based on Spike Timing-Dependent Plasticity Learning
Paradigm. Proceedings of The First International Conference on Natural Computation,
LNCS, 3610 (2005)420-429