


Chapter 7

LEARNING MECHANISMS IN NETWORKS OF SPIKING NEURONS

QingXiang Wu¹,², Martin McGinnity¹, Liam Maguire¹, Brendan Glackin¹, Ammar Belatreche¹

¹School of Computing and Intelligent Systems, University of Ulster, Magee Campus, Derry, BT48 7JL, N. Ireland, UK; ²School of Physics and Opto-Electronics Technology, Fujian Normal University, Fuzhou, 350007, China

Abstract: In spiking neural networks, signals are transferred by action potentials. Information is encoded in the patterns of neuron activities, or spikes. These features create significant differences between spiking neural networks and classical neural networks. Since spiking neural networks are based on spiking neuron models that are very close to biological neurons, many of the principles found in biological neuroscience can be used in these networks. In this chapter, a number of learning mechanisms for spiking neural networks are introduced. The learning mechanisms can be applied to explain the behaviours of networks in the brain, and can also be applied in artificial intelligence systems to process complex information represented by biological stimuli.

Key words: spiking neural networks; learning; spiking neuron models; spike timing-dependent plasticity; neuron encoding; co-ordinate transformation

1. INTRODUCTION

The first generation of neural networks is based on McCulloch-Pitts neurons as computational units, in which perceptrons are regarded as threshold gates. A characteristic feature is that such systems have a digital output for every unit. For example, multilayer perceptrons, Hopfield nets, and Boltzmann machines are based on this model. The second generation is based on computational units in which an "activation function" with a continuous set of possible output values is applied to a weighted sum of the inputs. Common activation functions are sigmoid functions and linear saturated functions; piecewise polynomial and piecewise exponential functions are also used as activation functions, for example in feed-forward and recurrent neural networks and radial basis networks. These networks can compute certain Boolean functions with fewer gates than first-generation networks [1], and are able to compute functions with analog input and output. These two generations of neural networks capture only a small number of aspects of biological neurons. The third generation [2] of neural networks is based on the Hodgkin-Huxley spiking neuron model [3], [4]. The functionalities of spiking neurons can be applied to process biological stimuli and to explain complicated intelligent behaviours of the brain. A distinct feature of spiking neural networks is that significant information is encoded in the neural activity patterns, and the neurons communicate using spike trains [5], [6] instead of the single values used in the first two generations of neural networks. Spiking neural networks always work with a large population of neurons. As a large-scale network of spiking neurons requires high computational resources to simulate, the integrate-and-fire neuron model and the spike response model [4], which can be regarded as simplified Hodgkin-Huxley models, are usually used instead. Since spiking neuron models are employed and information is encoded in the patterns of neural activities, learning mechanisms for spiking neural networks are very different from those in the first two generations of classical neural networks.
Initially, researchers tried to apply traditional learning mechanisms to spiking neural networks. SpikeProp [7], which is similar to the classical BP algorithm, has been proposed to train spiking neural networks. The neuron model employed in the SpikeProp network is based on a spike response model and assumes that each neuron fires only once during a period. This work proves that networks of spiking neurons can be trained to perform classification and function approximation. Using parallel calculations, the network can be trained in fewer epochs than a classical neural network for the same classification problem [7], [8]. Based on a spike response neuron model with delay encoding, a spiking neural network [9] has been applied to a time-series prediction problem, laser amplitude fluctuation. In that network, a delay is defined as the time difference between the presynaptic firing time and the time when the postsynaptic potential starts rising. Learning is the process of modifying the delay according to the time difference between the presynaptic neuron firing time and the postsynaptic neuron firing time, so that the temporal structure of the input is memorized in the delay. In [10], a model of a network of integrate-and-fire neurons with time-delay weights is presented. The model consists of one layer of multiple leaky integrate-and-fire neurons fully connected to a set of temporal inputs. These inputs simulate spatiotemporal patterns formed in the olfactory bulb, and the neural layer corresponds to the olfactory cortex that receives and recognizes those patterns. The periodic inputs are expressed by a Dirac delta function. The phase shifts of the input spikes encode concentrations of the corresponding constituent molecules. The total time delay of an input signal that arrives at an output neuron is equal to the sum of the phase shift and the additional time delays stored in the synaptic connections. Hopfield's phase shift encoding principle at the output level is applied for spatiotemporal pattern recognition. Firing of an output neuron indicates that the corresponding odour is recognized, and the phase shift of its firing encodes the concentration of the recognized odour. The learning mechanism is to update the delays and weights [10]. The results show that the approach is capable of invariant spatiotemporal pattern recognition. The temporal structure of the model provides the basis for the modelling of higher-level tasks where temporal correlation is involved, such as feature binding and segmentation, object recognition, etc.
Networks of spiking neurons are capable of self-organization in different ways. A model of this type was applied to pattern interaction and orientation maps in the primary visual cortex [11], [12]. Spiking neurons with leaky integrator synapses were used to model image segmentation and binding by synchronization and desynchronization of neuronal group activity. The advantage is that the network can model self-organization and functional dynamics of the visual cortex at a more accurate level than earlier models.
Since spiking neuron models are very close to biological neurons, many findings in neuroscience can be simulated using spiking neural networks. Based on the spike timing-dependent plasticity (STDP) found in biological neurons [13], [14], [15], [16], a set of learning mechanisms is demonstrated in this chapter.

2. SPIKING NEURON MODELS

2.1 Hodgkin-Huxley Spiking Neuron Model

Hodgkin and Huxley [3] performed experiments on the giant axon of the squid and found three different types of ion current. Their equations describe the electro-physiological properties of the giant axon of the squid. The basic mechanism for generating action potentials, or spikes, is a short influx of sodium ions followed by an efflux of potassium ions. Let v represent the membrane potential of a neuron. The basic equation of spiking neuron models is given by

$$c_m \frac{dv(t)}{dt} = I_C = I_{syn}(t) - \sum_j I_j(t) \qquad (7.1)$$

where cm is the membrane capacity, Isyn is the synaptic input current, and Ij is the current through ion channel j. The three types of channels can be regarded as the equivalent circuit in Fig. 7-1. All channels may be characterized by their resistance or, equivalently, by their conductance. The leakage channel is described by a voltage-independent conductance gL; the conductance of the other ion channels is voltage and time dependent. If all channels are open, they transmit currents with a maximum conductance gNa or gK, respectively. Normally, some of the channels are blocked. The probability that a channel is open is described by the additional variables m, n, and h. The combined action of m and h controls the Na+ channels; the K+ gates are controlled by n. Specifically, Hodgkin and Huxley formulated the three current components as

$$\sum_j I_j = g_{Na}\, m^3 h\, (v(t) - E_{Na}) + g_K\, n^4 (v(t) - E_K) + g_L (v(t) - E_L) \qquad (7.2)$$

The parameters ENa, EK, and EL are the reversal potentials. Reversal potentials and conductances are empirical parameters obtained from biological neurons. A set of typical values is: ENa = 50 mV, EK = -77 mV, EL = -54.4 mV, gNa = 120 mS/cm², gK = 36 mS/cm², gL = 0.3 mS/cm². The three gating variables are governed by the following differential equations.

Figure 7-1. Equivalent circuit for the Hodgkin-Huxley neuron model: the membrane capacitance cm in parallel with the leakage, potassium, and sodium conductance branches (gL, gK, gNa with batteries EL, EK, ENa), driven by the synaptic current Isyn.



$$\begin{aligned}
\dot{m} &= \alpha_m(v)(1-m) - \beta_m(v)\,m \\
\dot{n} &= \alpha_n(v)(1-n) - \beta_n(v)\,n \\
\dot{h} &= \alpha_h(v)(1-h) - \beta_h(v)\,h
\end{aligned} \qquad (7.3)$$

where αx(v) and βx(v) for x ∈ {m, n, h} depend on the membrane potential v. The relationships are shown in Table 7-1.

Table 7-1. Parameters for the channel control equations

x | αx(v)                          | βx(v)
m | (0.1v+8.5)/[exp(0.1v+8.5)-1]   | 4 exp[(65-v)/18]
n | (0.75-0.01v)/[exp(7.5-0.1v)-1] | 0.125 exp[(65-v)/80]
h | 0.07 exp[(65-v)/20]            | 1/[exp(9.5-0.1v)+1]

The single-neuron model has been implemented in the NEURON spiking neural network simulation package [17]. The synaptic current is not always constant; different synapse models, such as a square pulse, an exponential pulse, or an alpha function, have been used to model the synaptic current.
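As a concrete illustration, the following minimal sketch integrates equations (7.1)-(7.3) for a single neuron using the forward Euler method. It uses the standard Hodgkin-Huxley rate functions written for the modern voltage convention (resting potential near -65 mV), which is the convention of the reversal potentials quoted above; Table 7-1 is printed in a shifted convention. The membrane capacitance of 1 µF/cm² and the constant synaptic current are illustrative assumptions.

```python
# A minimal sketch: forward-Euler integration of equations (7.1)-(7.3) for a
# single Hodgkin-Huxley neuron. Rate functions are the standard ones for the
# modern voltage convention; c_m = 1 uF/cm^2 and I_syn are assumptions.
import numpy as np

E_Na, E_K, E_L = 50.0, -77.0, -54.4      # reversal potentials (mV)
g_Na, g_K, g_L = 120.0, 36.0, 0.3        # maximal conductances (mS/cm^2)
c_m = 1.0                                # membrane capacitance (uF/cm^2), assumed

def alpha_m(v): return 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
def beta_m(v):  return 4.0 * np.exp(-(v + 65.0) / 18.0)
def alpha_n(v): return 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
def beta_n(v):  return 0.125 * np.exp(-(v + 65.0) / 80.0)
def alpha_h(v): return 0.07 * np.exp(-(v + 65.0) / 20.0)
def beta_h(v):  return 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))

dt, T = 0.01, 50.0                       # time step and duration (ms)
I_syn = 10.0                             # constant input current (uA/cm^2), assumed
v, m, n, h = -65.0, 0.05, 0.32, 0.6      # approximate resting state

v_trace = []
for _ in range(int(T / dt)):
    # Ionic currents, equation (7.2)
    I_ion = (g_Na * m**3 * h * (v - E_Na)
             + g_K * n**4 * (v - E_K)
             + g_L * (v - E_L))
    # Membrane equation (7.1) and gating kinetics (7.3)
    v += dt * (I_syn - I_ion) / c_m
    m += dt * (alpha_m(v) * (1.0 - m) - beta_m(v) * m)
    n += dt * (alpha_n(v) * (1.0 - n) - beta_n(v) * n)
    h += dt * (alpha_h(v) * (1.0 - h) - beta_h(v) * h)
    v_trace.append(v)

print(f"peak membrane potential: {max(v_trace):.1f} mV")  # spikes overshoot toward +40 mV
```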

2.2 Integrate-and-Fire Neuron Model

As mentioned in Section 2.1, the Hodgkin-Huxley spiking neuron is governed by differential equations (7.1), (7.2), and (7.3). If this model is applied to a large-scale network, the implementation incurs a very high computational complexity. Therefore, a set of simplified models has been proposed. For example, the NEURON software provides three types of integrate-and-fire neuron models, i.e. IntFire1, IntFire2 and IntFire4 [17], [41]. A spike response model with temporal encoding was used in [7], [18]. In this chapter, the conductance-based integrate-and-fire model is used for each neuron in the SNNs, because the behaviour of this neuron model is very close to that of the Hodgkin-Huxley model [19]. In the model, the membrane potential v(t) is governed by the following equation [4], [19], [20], [21].

$$c_m \frac{dv(t)}{dt} = g_l (E_l - v(t)) + \sum_j \frac{w_j\, g_j^s(t)}{A_s}\, (E_s - v(t)) \qquad (7.4)$$

where cm is the specific membrane capacitance, El is the membrane reversal potential, Es is the synaptic reversal potential (s ∈ {i, e}, where i and e indicate inhibitory and excitatory synapses respectively), wj is the weight of synapse j, and As is the membrane surface area connected to a synapse. If the membrane potential v exceeds the threshold voltage vth, v is reset to vreset for a refractory time τref and an action potential is generated. Fig. 7-2 shows a neuron that receives spike trains from afferent neurons in a receptive field.

Figure 7-2. Conductance-based synapses in an SNN: neuron i receives spike trains from afferent neurons 1, ..., j, ..., n through synapses with weights wj and conductances gjs(t).

The variable gjs(t) is the conductance of synapse j. When an action potential reaches the synapse at time tap, the conductance is increased according to

$$g_j^s(t_{ap}) \leftarrow g_j^s(t_{ap}) + q_s \qquad (7.5)$$

Otherwise, the conductance decays exponentially, as described by the following equation.

$$\tau_s \frac{dg_j^s(t)}{dt} = -g_j^s(t) \qquad (7.6)$$
where qs is the peak conductance. Neuron i integrates the currents from its afferent synapses and increases its membrane potential according to Equation (7.4). In this simulation, the parameters are set as follows: tjdelay = 0, vth = -54 mV, vreset = -70 mV, Ee = 0 mV, Ei = -75 mV, qe_max = 0.01 µS, qi_max = 0.01 µS, qe = 0.002 µS, qi = 0.002 µS, El = -70 mV, gl = 1.0 µS/mm², cm = 10 nF/mm², τe = 3 ms, τi = 10 ms, Ae = 0.028953 mm², Ai = 0.014103 mm².
In order to show the action potentials, or spikes, generated by a single integrate-and-fire (I&F) neuron, 50 excitatory synapses are connected to the neuron. The mean frequency of the 50 random spike trains increases slowly from 0 to 100 Hz. The output of the spiking neuron changes from non-firing to firing at a fixed frequency, passing through three stages, as shown in Fig. 7-3. When the input spike trains are at a low firing frequency, the neuron does not fire (see Fig. 7-3(a)); its membrane potential varies below the threshold. When the input spike trains are strong enough, the neuron enters an irregular firing state (Fig. 7-3(b)). When the input spike trains are very strong, the neuron fires at a fixed frequency (Fig. 7-3(c)); this frequency depends on the refractory time τref of the neuron. This is the simplest example of spike generation in an integrate-and-fire neuron. This conductance-based I&F neuron model is very close to the Hodgkin-Huxley model in the NEURON software. The simulation results for both models are illustrated in Fig. 7-4; this comparison was performed in the SenseMaker project [22].

Figure 7-3. I&F neuron response to spike trains of different frequencies: (a) non-firing; (b) irregular firing; (c) firing at a fixed frequency.

Figure 7-4. Firing properties of a single neuron bombarded by random synaptic inputs. Both neurons were bombarded by Poisson-distributed random synaptic (AMPA) inputs at different firing rates (10 Hz to 100 Hz), with a maximal conductance of 100 nS.
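The following minimal sketch reproduces this setup: a single conductance-based I&F neuron, equations (7.4)-(7.6), driven by 50 excitatory Poisson spike trains. Parameter names and values follow the text; the input rate, the unit weights, and the omission of the refractory period are illustrative simplifications.

```python
# A minimal sketch of equations (7.4)-(7.6): one conductance-based
# integrate-and-fire neuron with 50 excitatory Poisson inputs. The input
# rate and unit weights are assumptions; tau_ref is omitted for brevity.
import numpy as np

rng = np.random.default_rng(0)

v_th, v_reset = -54.0, -70.0             # mV
E_l, E_e = -70.0, 0.0                    # mV
g_l, c_m = 1.0, 10.0                     # uS/mm^2, nF/mm^2
q_e, tau_e, A_e = 0.002, 3.0, 0.028953   # uS, ms, mm^2
n_syn, rate_hz = 50, 80.0                # 50 synapses at 80 Hz each, assumed
w = np.ones(n_syn)                       # unit weights, assumed

dt, T = 0.1, 500.0                       # time step and duration (ms)
g = np.zeros(n_syn)                      # synaptic conductances g_j^s(t)
v, out_spikes = E_l, []

for step in range(int(T / dt)):
    # (7.5): each presynaptic Poisson spike increments its conductance by q_e
    g += q_e * (rng.random(n_syn) < rate_hz * dt * 1e-3)
    # (7.6): exponential decay with time constant tau_e between spikes
    g -= dt * g / tau_e
    # (7.4): integrate the leak and synaptic currents
    dv = (g_l * (E_l - v) + np.sum(w * g) / A_e * (E_e - v)) / c_m
    v += dt * dv
    if v >= v_th:                        # threshold crossing: spike and reset
        out_spikes.append(step * dt)
        v = v_reset

print(f"{len(out_spikes)} output spikes in {T:.0f} ms")
```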

3. INFORMATION ENCODING IN SNN

Although a neuron transfers information to another neuron by means of a complicated biological process, experiments show that action potentials, or spikes [3], are the key signals. Spiking neural networks in the brain are very complicated: thousands of spike trains are emitted constantly by different neurons. How to understand such a spatiotemporal pattern of spikes is an extremely important topic in spiking neural networks. Therefore, a wide range of different encoding schemes has been discussed in the domain of neural coding [4], [6]. For example, count codes, binary codes, timing codes and rank order codes are described in [6], and firing frequency and firing rate in [4]. The differences between rate encoding schemes and temporal encoding schemes are discussed in [6]. Here, a specific spatiotemporal encoding scheme is used. Let the circular chain of neurons shown in Fig. 7-5 represent an angular variable. If Neuron No. 0 (identical to No. 40) fires at the highest rate and the firing rates of Neurons No. 38 down to No. 2 follow a bell-shaped distribution, this pattern of neuron activity indicates 0°. Suppose that after 200 ms the centre of the pattern moves to Neuron 1. The corresponding angle is 360°/40 = 9°. By analogy, the centre of the pattern moves from Neuron 2 to Neuron 39 step by step with a step duration of 200 ms. The corresponding angle can be represented by the equation Φd(t) = 9t/200 degrees, where t is in ms. If the angle is represented by the number of the centre neuron in the bell-shaped firing-rate distribution, the equation is written as Φ(t) = t/200, where the unit of Φ(t) is the neuron number. Recording all the activities of the neuron chain for 8000 ms, a firing-rate raster is plotted in Fig. 7-6. Similarly, a variable x can be represented by a neuron chain; the firing pattern for x(t) = 20 - 10cos(2πt/3600) is shown in Fig. 7-7. A phase encoding scheme is also used in this chapter; details are given in Section 5.
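A small sketch of this ring code, under the conventions just described (40 neurons, the pattern centre advancing one neuron every 200 ms):

```python
# A minimal sketch of the circular chain code: the represented angle is read
# off from which of the 40 neurons sits at the centre of the bell-shaped
# activity pattern, and the centre advances one neuron every 200 ms.
N_RING, STEP_MS = 40, 200

def centre_neuron(t_ms: float) -> int:
    """Neuron at the centre of the activity pattern at time t: Phi(t) = t/200."""
    return int(t_ms // STEP_MS) % N_RING

def angle_deg(t_ms: float) -> float:
    """Angle represented by the ring at time t: Phi_d(t) = 9t/200 degrees."""
    return centre_neuron(t_ms) * 360.0 / N_RING

print(centre_neuron(1000.0), angle_deg(1000.0))  # neuron 5 -> 45 degrees
```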

Figure 7-5. An angular variable represented by a circular chain of neurons: Neuron 0 (or 40) at 0°, Neuron 10 at 90°, Neuron 20 at 180°, and Neuron 30 at 270°.



Figure 7-6. The firing pattern of the neuron chain representing Φ(t) = t/200 (horizontal axis: firing time in ms).

Figure 7-7. The firing pattern recorded for x(t) = 20 - 10cos(2πt/3600) (horizontal axis: firing time in ms).

4. STDP IMPLEMENTATION

Changes in the synaptic connections between neurons are widely believed to contribute to memory storage. These changes are thought to occur through correlation-based, Hebbian plasticity [16]. Spike timing-dependent plasticity (STDP) has been found in biological neurons, and a synaptic plasticity model has been explored based on the fact that synaptic potentiation and depression can be induced by precisely timed pairs of synaptic events and postsynaptic spikes [13], [14], [15].
In order to perform STDP learning in SNNs, the implementation approach of [23], [24] is applied. Each synapse in an SNN is characterized by a peak conductance qs (the peak value of the synaptic conductance following a single presynaptic action potential) that is constrained to lie between 0 and a maximum value qs_max. Every pair of pre- and postsynaptic spikes can potentially modify the value of qs, and the changes due to each spike pair are continually summed to determine how qs changes over time. The simplifying assumption is that the modifications produced by individual spike pairs combine linearly.
A presynaptic spike occurring at time tpre and a postsynaptic spike at time tpost modify the corresponding synaptic conductance by

$$q_s \leftarrow q_s + q_{s\_max}\, F(\Delta t) \qquad (7.7)$$

where Δt = tpost - tpre and

$$F(\Delta t) = \begin{cases} A_+ \exp(-\Delta t/\tau_+), & \text{if } \Delta t > 0 \\ -A_- \exp(\Delta t/\tau_-), & \text{if } \Delta t \le 0 \end{cases} \qquad (7.8)$$

The time constants τ+ and τ- determine the ranges of pre- to postsynaptic spike intervals over which synaptic strengthening and weakening are significant, and A+ and A- determine the maximum amount of synaptic modification in each case. The function F(Δt) for synaptic modification is shown in Fig. 7-8.

Figure 7-8. The synaptic modification function F(Δt)



Experimental results indicate a value of τ+ in the range of tens of milliseconds (about 20 ms). The parameters for STDP are set as follows: qs_max = 0.01, A+ = 0.01, A- = 0.005, τ+ = 20 ms, τ- = 100 ms.
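In code, the update of equations (7.7) and (7.8) with these parameters can be sketched as follows. The pairing of every pre- with every postsynaptic spike and the clipping of qs to [0, qs_max] follow the description above; the example spike times are illustrative.

```python
# A minimal sketch of the STDP update of equations (7.7)-(7.8): every
# pre/post spike pair modifies the peak conductance q_s, which is clipped
# to [0, q_s_max]. Parameter values are those quoted in the text.
import numpy as np

q_s_max, A_plus, A_minus = 0.01, 0.01, 0.005
tau_plus, tau_minus = 20.0, 100.0   # ms

def F(dt):
    """Synaptic modification for dt = t_post - t_pre, equation (7.8)."""
    if dt > 0:
        return A_plus * np.exp(-dt / tau_plus)      # pre before post: potentiate
    return -A_minus * np.exp(dt / tau_minus)        # post before pre: depress

def stdp_update(q_s, pre_times, post_times):
    """Sum the contributions of all spike pairs, equation (7.7)."""
    for t_pre in pre_times:
        for t_post in post_times:
            q_s += q_s_max * F(t_post - t_pre)
    return float(np.clip(q_s, 0.0, q_s_max))

# Example: a presynaptic spike repeatedly leading the postsynaptic spike
# by 5 ms strengthens the synapse toward q_s_max.
q_s = 0.005
pre = [10.0, 40.0, 70.0]
post = [15.0, 45.0, 75.0]
print(stdp_update(q_s, pre, post))   # > 0.005
```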

4.1 Connection Selectivity of Two-layer Network Simulations

Based on the implementation approach of [23], [24], a two-layer spiking neural network with STDP connections is designed. The architecture is shown in Fig. 7-9.

Figure 7-9. The architecture of the two-layer network: a layer of sensory neurons is connected to a layer of spiking neurons through connections determined by STDP, and a high-level control neuron projects to the second layer.

Figure 7-10. Synchronized signals selected by STDP learning (top: spike train from the control neuron; bottom: spike trains from the first-layer neurons).



The first layer consists of sensory neurons that transform stimulus strength into phase encoding and output fixed-frequency spike trains. The second layer contains spiking neurons that are connected to the first layer in a one-to-one configuration; the efficacy of these connections is determined by STDP learning. A high-level control neuron is fully connected to the second layer. Suppose that three different stimuli are presented to the neurons in the first layer, and that one of the stimuli is also presented to the high-level control neuron. After STDP learning, the only neurons that fire are those that receive the same stimulus as the control neuron. STDP increases the efficacy of the connections between neurons with synchronous signals, and decreases the weights of the connections between neurons with asynchronous signals. The simulation results are shown in Fig. 7-10. This two-layer network can be used as a spike train filter: it is capable of selecting the signal that matches the one from the control neuron.

4.2 Non-linear Function Approximation

Let the input layer represent a variable x and the output layer a variable y. By using the STDP learning mechanism, the two-layer network shown in Fig. 7-11 can be trained to perform any non-linear function y = f(x). At the training stage, a training stimulus is fed into the output layer; as shown in Fig. 7-11, the training layer generates the target stimulus according to f(x) and feeds it into the output layer. A series of stimuli is randomly generated and presented to the input layer, and at the same time the training layer applies the series of stimuli to generate target stimuli for the output layer. After STDP learning, the two-layer network can perform the function y = f(x) without any training stimuli from the training layer, i.e. after removal of the training stimuli.
For example, an SNN with three 100-neuron layers was trained to perform y = sin(x). The input layer is set to a circular chain of 100 neurons, with zero degrees corresponding to Neuron 50. The output layer and the training layer are each set to 100 neurons. If y is regarded as a one-dimensional co-ordinate, the origin of the y co-ordinate is set to Neuron 50. Let y = 1 correspond to Neuron 94. Because a stimulus is a bell-shaped firing-rate distribution, 6 neurons at each end of the neuron layer are reserved to accommodate the stimulus; similarly, let y = -1 correspond to Neuron 6 instead of Neuron 1. If a stimulus is presented at x, the firing-rate distribution of the bell-shaped stimulus is represented by the following expression.

$$f_x(x') = R_{max}\, e^{\cos\left(\frac{2\pi}{N}(x - x')\right)/\delta^2} \qquad (7.9)$$

where Rmax is the maximal firing rate, N is the number of neurons in the layer, x' ranges over the neuron numbers adjacent to the neuron at position x, and δ is a constant. If x = 0, the centre of the stimulus is at Neuron 50. Note that not only Neuron 50 responds to the stimulus, but also the neurons adjacent to Neuron 50. This is very different from the single values in classical neural networks or the digital numbers in Turing computers. In order to generate the stimulus easily, the firing rate can be transformed into an inter-spike interval (ISI). The ISI of each neuron in the x layer can be represented as follows.
Figure 7-11. SNN trained with STDP for a non-linear transformation: an input layer encoding x from -180° to +180° is connected through STDP synapses to an output layer encoding y from -1 to +1; a training layer, connected to the output layer with fixed weights, supplies the target y = f(x).

Figure 7-12. Weight distribution for connections between input and output neurons

$$T_{isi}(x') = \mathrm{round}\left(-\frac{1}{f_x(x')}\,\log(\mathrm{rand})\right) + 6 \ \text{(ms)} \qquad (7.10)$$

where x' is a neuron number adjacent to position x, and fx(x') is the firing rate of neuron x'. Note that a 6 ms refractory period is included.
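A minimal sketch of this encoding is given below. The normalisation of (7.9) is written here so that the peak rate equals Rmax (the cos(·) - 1 term in the exponent is an assumption), and δ is an illustrative width rather than the value used later in the chapter.

```python
# A minimal sketch of equations (7.9)-(7.10): a bell-shaped firing-rate
# profile over a circular chain of N neurons, and ISIs sampled per neuron
# with a 6 ms refractory period. The "-1" normalisation (peak rate = R_max)
# and the width delta are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
N, R_max, delta = 100, 80.0, 0.3

def rate(x, x_prime):
    """Equation (7.9): firing rate (Hz) of neuron x' for a stimulus at x."""
    return R_max * np.exp((np.cos(2.0 * np.pi * (x - x_prime) / N) - 1.0)
                          / delta**2)

def isi_ms(x, x_prime):
    """Equation (7.10): next inter-spike interval (ms), 6 ms refractory."""
    f = rate(x, x_prime)                 # Hz, so the mean ISI is 1000/f ms
    return round(-1000.0 / f * np.log(rng.random())) + 6

# Spike train of the centre neuron for a stimulus at x = 50, over 200 ms:
t, spikes = 0.0, []
while t < 200.0:
    t += isi_ms(50, 50)
    spikes.append(t)
print(spikes)
```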
Stimuli for x and y are firing-rate distributions described by (7.9) and (7.10). The value of x is randomly chosen, and the value of y is calculated using the formula y = sin(x). This pair of x and y stimuli is presented to the input layer and the training layer simultaneously for 20 ms, and the weight distribution is updated by the STDP rule. After 20 ms, a pair of x and y stimuli corresponding to another random x value is presented to the network for 20 ms. Repeating this procedure for 3000 ms, the weight distribution converges to a stable distribution, as shown in Fig. 7-12, in which the red point indicates the connection with the highest weight. With this weight distribution the two-layer network can perform the function y = sin(x). Example test results are shown in Fig. 7-13.

Figure 7-13. Stimulus input and output neuron firing rates: (a) an input stimulus corresponding to 90° produces (b) an output corresponding to y = 1; (c) an input stimulus corresponding to -60° produces (d) an output corresponding to y = -0.866.



4.3 Stimuli Integration

A cue integration model was proposed in [25]; however, that model did not consider the STDP learning mechanism. A similar SNN model with the STDP learning mechanism is proposed in Fig. 7-14. Three neuron layers x, y, z are connected to a 2D intermediate neuron layer. Neurons in the x and y layers are connected to neurons inside the x-RF and y-RF fields respectively with excitatory synapses, shown by solid lines in Fig. 7-14, and to neurons outside the x-RF and y-RF fields respectively with inhibitory synapses, shown by short-dashed lines. Neurons in the intermediate layer are fully connected to each neuron in the z layer via STDP synapses, shown by long-dashed lines in Fig. 7-14.

Figure 7-14. Sketch of the architecture of the multiple-stimuli-integrating SNN: fixed excitatory synapses within the x-RF and y-RF fields, fixed inhibitory synapses outside them, and synapses determined by STDP from the intermediate layer to the z layer.

Figure 7-15. Weight strength distributions from the intermediate layer to the z neuron layer: (a) weights from the intermediate neuron array to output neuron 1; (b) weights to output neuron 13.

Figure 7-16. Stimulus test for z = x + y: (a) the two input stimuli (upper row for x, lower row for y); (b) the output of the z neuron layer.

When two stimuli are presented at the input neuron layers x and y, the target stimulus for z = x + y is injected into the z layer, and the STDP synapses adapt to the stimuli. After training, the weights between the intermediate layer and the z layer have adapted to perform z = x + y. In the experiment, the neuron layers x, y and z each have 20 neurons, and the intermediate layer has 20×20 = 400 neurons. The weight distributions for Neuron 1 and Neuron 13 in the z layer are shown in Fig. 7-15, and the test results in Fig. 7-16.
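The wiring just described can be sketched as follows; the row/column indexing of the intermediate layer and the ideal post-training weight pattern are assumptions that match the receptive-field description and Fig. 7-15.

```python
# A minimal sketch of the stimulus-integration wiring: the 20x20 intermediate
# layer indexes x along columns and y along rows, so neuron (i, j) is
# co-activated by the stimulus pair (x = i, y = j). An output neuron k of the
# z layer then learns, via STDP, strong weights from the diagonal band
# i + j = k (cf. Fig. 7-15). Indexing conventions are assumptions.
import numpy as np

NX = NY = NZ = 20

def intermediate_index(i: int, j: int) -> int:
    """Flat index of the intermediate neuron driven by x-neuron i and y-neuron j."""
    return j * NX + i

# Idealized post-training weight mask for z-neuron k: connections from the
# anti-diagonal i + j = k dominate.
k = 13
mask = np.fromfunction(lambda j, i: (i + j) == k, (NY, NX))
print(mask.astype(int))
```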

5. SNN LEARNING FOR XOR PROBLEM

The traditional XOR problem and a phase encoding scheme are used to illustrate the STDP learning paradigm in this section. In the phase encoding scheme, all spike trains are assumed to have the same firing frequency; different spike trains fire at different phases. For example, suppose that the period is 10 ms and each phase corresponds to a time interval of 1 ms, so that each period contains 10 phases. In order to indicate the periods, sine curves are plotted in Fig. 7-17. Phases can also be expressed in radians or degrees. A firing time at phase 7 stands for logical '0', and a firing time at phase 2 stands for logical '1'; the logical '0' and '1' are represented by the spike trains (a) and (b) in Fig. 7-17. The XOR problem can then be represented as the set of training patterns shown in Table 7-2. As it takes time for an action potential to travel from the delay neurons to neurons N1, N2, N3 and N4, an output spike at phase 3 represents logical '0', and an output spike at phase 8 represents logical '1'. These patterns are applied to train the spiking neural network shown in Fig. 7-18.
Fig. 7-18 shows the spiking neural network for the XOR problem. There are two inputs and one output. Each input is connected to a set of neurons through synapses with specific delays. For example, input-1 is connected to a Phase 0 neuron without any delay, to a Phase 1 neuron with a delay of 1 ms, to a Phase 2 neuron with a delay of 2 ms, ..., and to a Phase 9 neuron with a delay of 9 ms. Similarly, input-2 is connected to its own 10 delay neurons. In this way, the two temporally phase-encoded spike trains are transformed into activity patterns over the delay neurons, i.e. spatially encoded patterns.
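The following sketch illustrates this conversion: a spike at phase p on an input line reaches the Phase-k delay neuron at phase (p + k) mod 10, so each input phase produces a distinct spatial arrival pattern across the ten delay neurons (function names are illustrative).

```python
# A minimal sketch of the delay-line conversion from a temporal phase code
# to a spatial pattern: the Phase-k neuron sees the input spike k ms later,
# i.e. at phase (p + k) mod 10 of the 10 ms period.
PERIOD = 10  # ms: 10 phases of 1 ms each

def arrival_phases(input_phase: int) -> list[int]:
    """Phase at which an input spike reaches each of the 10 delay neurons."""
    return [(input_phase + k) % PERIOD for k in range(PERIOD)]

# Logical '0' (spike at phase 7) and logical '1' (spike at phase 2) yield
# distinct spatial patterns over the delay neurons:
print(arrival_phases(7))  # [7, 8, 9, 0, 1, 2, 3, 4, 5, 6]
print(arrival_phases(2))  # [2, 3, 4, 5, 6, 7, 8, 9, 0, 1]
```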

Table 7-2. Training pattern associations for the XOR problem

Pattern No. | Input-1 | Input-2 | Output
1           | 1 (ph2) | 1 (ph2) | 0 (ph3)
2           | 1 (ph2) | 0 (ph7) | 1 (ph8)
3           | 0 (ph7) | 1 (ph2) | 1 (ph8)
4           | 0 (ph7) | 0 (ph7) | 0 (ph3)

Figure 7-17. Phase encoding spike trains for logical '0' and '1': (a) spikes at times 7, 17, 27, 37 ms, i.e. phase 7 (ph7), stand for logical '0'; (b) spikes at times 2, 12, 22, 32 ms, i.e. phase 2 (ph2), stand for logical '1'.

Figure 7-18. The spiking neural network for the XOR problem: input-1 and input-2 each feed a bank of delay neurons (Phase 0 to Phase 9), which are fully connected through STDP synapses to the pattern recognition neurons N1-N4; during training a target output spike train (e.g. ph3) is injected, and the network output is taken from the recognition neurons.



Figure 7-19. Test results of the spiking neural network for the XOR problem: inputs (ph2, ph2) give output ph3 (pattern 1); inputs (ph2, ph7) give output ph8 (pattern 2); inputs (ph7, ph2) give output ph8 (pattern 3); and inputs (ph7, ph7) give output ph3 (pattern 4).

N1, N2, N3, and N4 are four pattern recognition neurons that are fully connected to all delay neurons through STDP synapses. These connections allow the network to adapt to the training patterns under the STDP rule. The four pattern recognition neurons are connected to each other with inhibitory synapses, which create a competition mechanism among them: once a neuron fires, it inhibits the others from firing. This makes it possible for each neuron to respond to one stable input pattern; since there are four patterns in the XOR problem, four neurons are employed in this layer.
To train the network to recognize XOR pattern 1 in Table 7-2, the phase encoding spike train (b) is fed into input-1 and input-2, and at the same time the target output spike train (ph3) is injected into neuron N1. After about 150 ms of STDP adaptation, the connection weights from the delay neurons to N1 converge to a stable distribution, and neuron N1 responds to the input pattern. Similarly, neurons N2, N3, and N4 can be trained to recognize patterns 2, 3, and 4. After this, the network can perform the XOR function. The test results are shown in Fig. 7-19.

6. SNN LEARNING FOR COORDINATE TRANSFORMATION

The brain receives multiple streams of sensory data from the surrounding environment, and the different senses do not operate independently: there are strong links between modalities [26], [27]. Electrophysiological studies have shown that somatosensory cortex (SI) neurons in monkeys respond not only to touch stimuli but also to other modalities. Strong links between vision and touch have been found in behavioural [28] and electrophysiological [29] studies, and at the level of single neurons [30]. For example, neurons in the somatosensory cortex may respond to visual stimuli [31] and other modalities [32], and neurons in a monkey's primary SI may fire both in response to a tactile stimulus and in response to a visual stimulus [31].
A new interaction between vision and touch in human perception is proposed in [33]. These perceptions may particularly interact during fine manipulation tasks using the fingers under visual and sensory control [34]. Different sensors convey spatial information to the brain in different spatial co-ordinate frames. In order to plan accurate motor actions, the brain needs to build an integrated spatial representation; therefore, cross-modal sensory integration and sensory-motor co-ordinate transformations must occur [35]. Multimodal neurons using non-retinal, body-centred reference frames are found in the posterior parietal and frontal cortices of monkeys [36], [37], [38]. Basis function networks with multidimensional attractors [25] have been proposed to simulate the cue integration and co-ordinate transformation properties that are observed in several multimodal cortical areas. Adaptive regulation of synaptic strengths within SI could explain the modulation of touch by both vision [39] and attention [40]. Learned associations between visual and tactile stimuli may influence bimodal neurons.
Based on these concepts, a spiking neural network (SNN) model [42] is proposed to perform the co-ordinate transformation required to convert a time-coded haptic input to a space-coded visual image. The SNN model contains STDP synapses from the haptic intermediate neurons to the bimodal neurons.
In order to simulate location-related neurons in the somatosensory cortex, suppose that X and Y are single layers of bimodal neurons that represent the Cartesian co-ordinates of the output. Fig. 7-20 shows a simplified SNN model for building associations between visual and haptic stimuli.

Figure 7-20. An SNN model for 2D co-ordinate transformation, where (x, y) is the co-ordinate of the touch point. (a) Visual pathway: the retinal neuron layer is a 2D layer of 40×40 neurons connected to the X and Y neuron layers with fixed weights; attention is directed at the touch point. (b) Haptic pathway: L1 and L2 are the arm segments, and θ and Φ are the arm angles, each represented by a 1D neuron layer. Each θ neuron is connected to the neurons within a corresponding vertical rectangle in the 2D intermediate layer, and each Φ neuron to the neurons within a corresponding horizontal rectangle. The neurons in the intermediate layer are fully connected to the X and Y neuron layers with STDP synapses, and these connections adapt, under the STDP rules, in response to the attentional visual stimulus and the haptic stimulus.
7. Learning Mechanisms in Networks of Spiking Neurons 21

If the eyes focus on a point (x, y) in the touch area, a visual stimulus is generated and transferred to the X and Y bimodal neuron layers through the visual pathway; this visual signal can therefore be applied to train the SNN for the haptic pathway. If a finger touches the point (x, y), a haptic stimulus triggers (θ, Φ) stimuli corresponding to the arm position, which are transferred to the (X, Y) bimodal neuron layers through the haptic pathway. In this model, the synapse strengths for the visual pathway are assumed to be fixed: each neuron in the X layer is connected to retinal neurons within a vertical-line receptive field, as shown in Fig. 7-20, and each neuron in the Y layer to retinal neurons within a horizontal-line receptive field. In these experiments, Rmax for the bell-shaped stimuli is set to 80 spikes/s, δ is set to 0.04, and 40 neurons are employed to encode each of the θ and Φ layers. 1600 neurons are employed in the 2D intermediate layer, 80 neurons in the training layer, and 80 neurons in each of the X and Y layers.
After training, the SNN can transform the (θ, Φ) stimuli into output (X, Y) neuron spike activities. In order to test the SNN, suppose that the forearm turns around with a speed of 45° per second, as shown in Fig. 7-21; the circle is the track of the finger. The values of (θ, Φ) are used to generate Poisson-process spike trains for the θ and Φ layers according to (7.9) and (7.10). When the finger traces the circumference, following the track of the circle, two stimuli are generated corresponding to the (θ, Φ) of the arm. These stimuli are shown in the left panels of Fig. 7-22. When the two stimuli are input into the network, the outputs of the (X, Y) neuron layers are as displayed in the right panels of Fig. 7-22. The neuron firing rate at the output layer is a bell-shaped distribution. Converting these firing rates to single values of X and Y, we can demonstrate that the SNN is capable of transforming the polar co-ordinates (θ, Φ) into the Cartesian representation (X, Y) given by the equations

$$X = L[\cos(\theta) + \cos(\theta + \Phi)] \qquad (7.11)$$
$$Y = L[\sin(\theta) + \sin(\theta + \Phi)] \qquad (7.12)$$
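A quick numerical check of (7.11) and (7.12) for this test, with θ held at 180° while Φ sweeps a full turn (L = 1 is an illustrative arm length):

```python
# A quick numerical check of equations (7.11)-(7.12) for the test of
# Fig. 7-21: theta fixed at 180 degrees while phi sweeps a full turn,
# with L = 1 as an illustrative arm length.
import numpy as np

L, theta = 1.0, np.pi
for phi in np.deg2rad([0, 90, 180, 270]):
    X = L * (np.cos(theta) + np.cos(theta + phi))   # (7.11)
    Y = L * (np.sin(theta) + np.sin(theta + phi))   # (7.12)
    # matches X = L(-1 - cos(phi)), Y = -L sin(phi) derived in the text
    print(f"phi={np.rad2deg(phi):5.1f}  X={X:+.2f}  Y={Y:+.2f}")
```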
The spike-train raster in the upper-left panel of Fig. 7-22 represents the stimulus corresponding to θ = 180°, which persists for 8000 ms. The stimuli for the Φ neuron layer are shown in the lower-left panel: a bell-shaped firing-rate distribution staying for 200 ms in each successive position Φ = 0°, 9°, 18°, ..., 360°. These changes of (θ, Φ) correspond to the finger moving along a circle with radius L. According to (7.11) and (7.12), the output is X = L(-1 - cos(Φ)) and Y = -L sin(Φ). These mathematical results are consistent with the SNN outputs shown in the right panels.
The results of learning are stored in the weight distribution of the connections between the 2D intermediate layer and the (X, Y) layers. After learning, the haptic pathway in the SNN can transform the arm position (θ, Φ) to the (X, Y) bimodal neuron layers.
Figure 7-21. The track of the finger movement: with θ fixed at 180°, the fingertip traces a circle of radius L (diameter 2L = 36 neuron units, within a workspace spanning 4L = 72 neuron units) in the (X, Y) plane, with t = 0, 2000, 4000, 6000 and 8000 ms marked along the track; the X and Y layers each span neurons 1 to 80.

Figure 7-22. Co-ordinate transformation from body-centred co-ordinates (θ, Φ) to (X, Y). Left panels: the input stimuli over 8000 ms, with θ constant and Φ sweeping from 0° to 360°. Right panels: the output X and Y neuron activities, ranging between -40 and +40.

Actually, θ and Φ are body-centred co-ordinates, which are polar co-ordinates. The neurons in the θ and Φ layers transfer the haptic location signals to the intermediate layer, and this intermediate layer then transfers the body-centred co-ordinates to the integrated co-ordinate X and Y neuron layers. The STDP synapses make it possible to learn the transformation from the body-centred co-ordinates (θ, Φ) to the co-ordinates (X, Y). The co-ordinates (X, Y) can be regarded as integrated co-ordinates in the brain; in this situation, (X, Y) is actually the retina-centred co-ordinate frame, so the transformation is equivalent to a transformation from haptic body-centred co-ordinates to retina-centred co-ordinates.

7. CONCLUSION

In this chapter, a number of spiking neuron models were mentioned, and the conductance-based integrate-and-fire neuron model was introduced in detail; all the demonstrations are based on this model. As spiking neurons transfer information via spike trains, the neuron encoding scheme plays a very important role in learning mechanisms. In this chapter, a circular chain of neurons was applied to represent an angular variable, and a linear neuron chain to represent a single variable. Based on these representations, SNNs were trained to perform non-linear function approximation and the cue integration z = x + y.
By using a phase encoding scheme, a solution of the XOR problem was demonstrated. All the learning mechanisms demonstrated here are based on STDP. These demonstrations give simple examples to assist in understanding STDP; based on these principles, more complicated SNNs can be simulated in further studies.
In a biological system, there are strong links between modalities. A cross-modality learning model for co-ordinate transformation was proposed; the SNN was trained to transform the arm angles of the haptic stimulus position into a position represented in retina-centred co-ordinates.
An advantage of spiking neural networks is that they are more robust and provide better noise immunity than classical neural networks, even if some of the neurons do not work. The learning mechanisms can provide an approach for designing artificial intelligence systems that process biological stimuli.

ACKNOWLEDGEMENT

The authors acknowledge the financial and technical contribution of the SenseMaker project (IST-2001-34712), which is funded by the EC under the FET Life-Like Perception Initiative.

REFERENCES

[1] Maass, W., Schnitger, G., and Sontag, E.: On the computational power of sigmoid versus Boolean threshold circuits. Proc. of the 32nd Annual IEEE Symposium on Foundations of Computer Science. (1991) 767-776
[2] Maass, W.: Networks of spiking neurons: The third generation of neural network models. Neural Networks. 10(9) (1997) 1659-1671
[3] Hodgkin, A. and Huxley, A.: A quantitative description of membrane current and its
application to conduction and excitation in nerve. Journal of Physiology. (London) Vol.
117, (1952)500-544
[4] Gerstner, W., and Kistler, W.: Spiking Neuron Models. Single Neurons, Populations,
Plasticity. Cambridge University Press, (2002)
[5] Melamed, O., Gerstner, W., Maass, W., Tsodyks, M. and Markram, H.: Coding and
Learning of behavioral sequences, Trends in Neurosciences, Vol.27 (2004)11-14
[6] Theunissen, F.E. and Miller, J.P.: Temporal Encoding in Nervous Systems: A Rigorous
Definition. Journal of Computational Neuroscience. (1995)2:149-162
[7] Bohte, S.M., Kok, J.N. and Poutré, H.L.: SpikeProp: Error-Backpropagation for Networks
of Spiking Neurons. Neurocomputing. 48(1-4) (2002)17-37
[8] Wu, Q.X., McGinnity, T.M., Maguire L.P., Glackin, B. and Belatreche, A.: Supervised
Training of Spiking Neural Networks With Weight Limitation Constraints. Proceedings of
International conference on Brain Inspired Cognitive Systems. University of Stirling,
Scotland, UK, (2004)
[9] Sohn, J.W., Zhang, B.T., and Kaang, B.K.: Temporal Pattern Recognition Using a Spiking
Neural Network with Delays. Proceedings of the International Joint Conference on Neural
Networks (IJCNN'99). vol. 4 (1999)2590-2593
[10] Lysetskiy, M., Łozowski, A., and Zurada, J.M.: Invariant Recognition of Spatio-Temporal Patterns in the Olfactory System Model. Neural Processing Letters. 15 (2002) 225-234
[11] Choe, Y. and Miikkulainen, R.: Self-organization and segmentation in a laterally connected orientation map of spiking neurons. Neurocomputing. 21 (1998) 139-157
[12] Sirosh, J. and Miikkulainen, R.: Topographic receptive fields and patterned lateral interaction in a self-organizing model of the primary visual cortex. Neural Computation. 9 (1997) 577-594
[13] Bi, G.Q., and Poo, M.M.: Distributed synaptic modification in neural networks induced
by patterned stimulation. Nature, 401 (1999)792 - 796
[14] Bi, G.Q., Poo, M.M.: Synaptic modifications in cultured hippocampal neurons:
dependence on spike timing, synaptic strength, and postsynaptic cell type. Journal of
Neuroscience. 18 (1998)10464–10472

[15] Bell, C.C., Han, V.Z., Sugavara, Y., and Grant, K.: Synaptic plasticity in the mormyrid
electrosensory lobe. Journal of Experimental Biology. 202(1999)1339–1347
[16] Rossum, M.C.W., Bi, G.Q., and Turrigiano, G.G.: Stable Hebbian Learning from Spike
Timing-Dependent Plasticity. The Journal of Neuroscience. 20(23)(2000)8812–8821
[17] Neuron Software download website: http://neuron.duke.edu/
[18] Wu, Q.X., McGinnity, T.M., Maguire, L.P., Glackin, B. and Belatreche, A.: Learning
under weight constraints in networks of temporal encoding spiking neurons. International
Journal of Neurocomputing. Special issue on Brain Inspired Cognitive Systems. (2006) in
press.
[19] Müller, E.: Simulation of High-Conductance States in Cortical Neural Networks. Masters
thesis, University of Heidelberg, HD-KIP-03-22, (2003)
[20] Koch, C.: Biophysics of Computation: Information Processing in Single Neurons. Oxford
University Press, (1999)
[21] Dayan, P., and Abbott, L.F.: Theoretical Neuroscience: Computational and Mathematical
Modeling of Neural Systems. The MIT Press, Cambridge, Massachusetts, (2001).
[22] SenseMaker Project (IST–2001-34712) funded by the European Union under
the “Information Society Technologies” Programme (2002-2006)
[23] Song, S., Miller, K.D., and Abbott, L.F.: Competitive Hebbian learning through spike-timing dependent synaptic plasticity. Nature Neuroscience. 3 (2000) 919-926
[24] Song, S. and Abbott, L.F.: Column and Map Development and Cortical Re-Mapping
Through Spike-Timing Dependent Plasticity. Neuron, 32 (2001) 339-350
[25] Deneve S., Latham P. E. and Pouget A.: Efficient computation and cue integration with
noisy population codes, Nature Neuroscience, 4 (2001) 826-831
[26] Taylor-Clarke, M., Kennett, S., and Haggard, P.: Persistence of visual-tactile enhancement in humans. Neuroscience Letters. 354(1) (2004) 22-25
[27] Atkins, J. E., Jacobs, R.A., and Knill, D.C.: Experience-dependent visual cue
recalibration based on discrepancies between visual and haptic percepts. Vision Research.
43(25) (2003) 2603-2613
[28] Spence, C., Pavani, F., and Driver, J.: Crossmodal links between vision and touch in
covert endogenous spatial attention. Journal of Experimental Psychology: Human
Perception and Performance. 26 (2000) 1298–1319
[29] Eimer M., Driver, J.: An event-related brain potential study of crossmodal links in spatial
attention between vision and touch. Psychophysiology. 37 (2000) 697–705
[30] Graziano, M.S.A., and Gross, C.G.: The representation of extrapersonal space: A
possible role for bimodal, visual–tactile neurons, in: M.S. Gazzaniga (Ed.), The Cognitive
Neurosciences, MIT Press, Cambridge, MA, (1994) 1021–1034
[31] Zhou, Y.D., and Fuster, J.M.: Visuo-tactile cross-modal associations in cortical
somatosensory cells. Proc. National Academy of Sciences, USA. 97 (2000) 9777–9782
[32] Meftah, E.M., Shenasa, J., and Chapman, C.E.: Effects of a cross-modal manipulation of attention on somatosensory cortical neuronal responses to tactile stimuli in the monkey. Journal of Neurophysiology. 88 (2002) 3133-3149
[33] Kennett, S., Taylor-Clarke, M., and Haggard, P.: Noninformative vision improves the
spatial resolution of touch in humans. Current Biology. 11 (2001) 1188–1191
[34] Johansson, R.S., and Westling, G.: Signals in tactile afferents from the fingers eliciting
adaptive motor-responses during precision grip. Experimental Brain Research. 66 (1987)
141–154
[35] Galati, G., Committeri, G., Sanes J.N., and Pizzamiglio L.: Spatial coding of visual and
somatic sensory information in body-centred coordinates. European Journal of
Neuroscience. Blackwell Publishing. 14(4) (2001) 737-748

[36] Colby, C.L. and Goldberg, M.E.: Space and attention in parietal cortex. Annual Review
of Neuroscience. 22 (1999) 319-349
[37] Gross, C.G., and Graziano, M.S.A.: Multiple representations of space in the brain.
Neuroscientist, 1 (1995) 43-50
[38] Rizzolatti, G., Fogassi, L. and Gallese, V.: Parietal cortex: from sight to action. Current
Opinion in Neurobiology. 7 (1997) 562-567
[39] Taylor-Clarke M., Kennett S., and Haggard P.: Vision modulates somatosensory cortical
processing. Current Biology. 12 (2002) 233–236
[40] Iriki, A., Tanaka, M., and Iwamura, Y.: Attention-induced neuronal activity in the
monkey somatosensory cortex revealed by pupillometrics. Neuroscience Research. 25
(1996) 173–181
[41] Thorpe, S., Delorme A. and Rullen, R.V.: Spike-based strategies for rapid processing.
Neural Networks.14(6-7) (2001)715-725
[42] Wu, Q.X., McGinnity, T.M., Maguire, L.P., Belatreche, A. and Glackin, B.: Adaptive
Co-Ordinate Transformation Based on Spike Timing-Dependent Plasticity Learning
Paradigm. Proceedings of The First International Conference on Natural Computation,
LNCS, 3610 (2005)420-429
