Title: Computational Neuroscience 2010 paper answers (UCL)
Description: Computational Neuroscience module answers for the Neuroscience BSc at UCL. I got 69 in the module and a first-class degree in Neuroscience.



Question 1

... There are input neurons, connection weights and output neurons, but there is no physical structure or simulation of spikes. ... In the artificial network, if the total amount of input activation and of connection weight is limited, the maximum net input, and thus the maximum output firing rate, occurs when the pattern of input activation and the pattern of connection weights match.
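
A minimal sketch of this matching property (my illustration): with the total sizes of the input and weight vectors held fixed by normalisation, the net input w.x is maximised when the input pattern matches the weight pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

w = rng.random(8)
w /= np.linalg.norm(w)            # fix the total amount of connection weight

def net_input(x):
    x = x / np.linalg.norm(x)     # fix the total amount of input activation
    return w @ x                  # net input to the output neuron

print(net_input(w.copy()))        # input pattern matches weights: ~1.0 (max)
print(net_input(rng.random(8)))   # mismatched pattern: smaller net input
```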

Real neurons are much more complicated. ... This has a time constant due to passive current leakage, so it takes longer to reach the firing threshold. ... There are multiple ion channels interacting with each other. ... If there are multiple layers and the neurons have feedback, then error back-propagation can be calculated.

Describe Hopfield's (1995) scheme for learning and processing information using temporal coding (6 marks), and compare it with the more standard artificial neural network approach in terms of the computations they can perform and how they could be implemented in the brain (5 marks).

... This mode of computation explains how one scheme of neural architecture can be used for very different sensory modalities and seemingly different computations, e.g. the olfactory bulb has a theta oscillation frequency due to inhibitory input from the medial septum. ... The way to tell how strong an input a cell has received is its time advance: how early in the cycle the cell fires a spike. ... Because the cell's potential oscillates, a big input arriving early in the wave will boost the oscillation above the firing threshold, but a small input arriving early in the wave will not be enough to bring the oscillation above the firing threshold.

The cell is also arranged so that, once it has fired, it is hyperpolarised and will not fire again until the next oscillation cycle. ... So if you want the output neuron to fire selectively for this pattern of input neurons, you need a long conduction delay for input neuron 1 (the one with the highest current, which fires earliest in the cycle) and a short conduction delay for input neuron 4 (the one with the lowest current, which fires latest), to cancel out the timing differences. ... Now the output neuron will fire only for this pattern of input.

The network looks like a perceptron, but firing rate is substituted by the timing of spikes and the connection weights are substituted by conduction delays.
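
A hypothetical sketch of this delay-matching idea (the logarithmic intensity-to-timing mapping is my assumption, chosen so that scaling every input shifts every spike time equally): each input fires earlier the stronger it is, and tuned conduction delays cancel those advances so the spikes coincide at the output only for the learned pattern.

```python
import numpy as np

intensities = np.array([4.0, 2.0, 1.5, 1.0])   # input 1 strongest, input 4 weakest
fire_times = -np.log(intensities)              # stronger input => earlier spike
delays = fire_times.max() - fire_times         # longest delay for earliest spike

def output_fires(pattern, window=0.05):
    arrivals = -np.log(pattern) + delays       # spike arrival times at the output
    return np.ptp(arrivals) < window           # coincidence detection

print(output_fires(intensities))               # True: the learned pattern
print(output_fires(intensities * 3))           # True: same pattern at any scale
print(output_fires(np.array([1.0, 1.5, 2.0, 4.0])))  # False: different pattern
```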

Standard neural networks have to normalise their inputs to do scale-invariant recognition; normalisation could be achieved by feedback inhibition, but this still poses a problem. ... A strong input neuron would usually have a strong connection, and a weak input neuron would have a weak connection and make less of a difference to the final output. ... However, if the recognition is pattern-based, then the weak input could be crucial in identifying the overall input. ... Linear pattern separators, e.g. perceptrons, are not suitable; networks that tune to respond at a given absolute value, such as head direction and place cells, also cannot compute scale-invariant recognition.

Describe the main features of Hebbian learning rules, and their use in competitive learning and the learning of topographic maps (6 marks).

... This is a form of long-term potentiation. ... Hebbian learning is a form of unsupervised learning, meaning that there is no teacher or feedback about right or wrong responses. ... In competitive learning, there are random connection weights to start with; then an input pattern is presented. ... There is recurrent inhibition between the output neurons, so there is only one 'winner'. ... Normalisation of the connection weights then occurs, and the next pattern is presented. ...
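
A minimal runnable sketch of one round of this competitive learning loop (network sizes and learning rate are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(1)

n_in, n_out, lr = 16, 4, 0.1
W = rng.random((n_out, n_in))          # random connection weights to start with
W /= W.sum(axis=1, keepdims=True)      # normalised weights for each output

def present(x):
    winner = int(np.argmax(W @ x))     # recurrent inhibition leaves one winner
    W[winner] += lr * x                # Hebbian update for the winner only
    W[winner] /= W[winner].sum()       # normalisation, then the next pattern
    return winner

patterns = [rng.random(n_in) for _ in range(4)]
for _ in range(50):
    for p in patterns:
        present(p)

print([present(p) for p in patterns])  # patterns tend to get distinct winners
```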

In Willshaw and von der Malsburg's model (1976), the connections between nearby output neurons are excitatory, but the connections between far-apart output neurons are inhibitory. ... In Kohonen's feature map (1982), the Hebbian learning rule is modified so that the weights to outputs neighbouring the winner are also modified, using a 'neighbourhood function'.
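
A sketch of one possible neighbourhood function (the Gaussian form is my assumption; Kohonen's original used a shrinking neighbourhood set): the winner's neighbours on the output map share in the weight update, with a strength that falls off with map distance.

```python
import numpy as np

def neighbourhood(winner, n_out=8, sigma=1.5):
    """Update strength for each output unit, falling off with
    its distance from the winner along a 1-D output map."""
    d = np.abs(np.arange(n_out) - winner)
    return np.exp(-d**2 / (2 * sigma**2))

print(neighbourhood(3).round(2))   # winner gets 1.0, neighbours get less
```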

The delta rule changes the connection weights according to:

Wij <- Wij + ε Xjn (tin - oin)

(tin - oin) is known as delta: it is the error made by output i for input pattern n, where tin is the target output, oin is the actual output, and Xjn is the activation of input j. ... So basically, the delta rule changes connection weights to make a specific input pattern lead to a specific output pattern. ... You can present patterns to the perceptron: the patterns that are meant to be detected have target output value 1, and the other, foil patterns have target value 0. ... So if the target value is bigger than the actual output value, the connection weight increases; if the target value is smaller than the actual output value, the connection weight decreases.
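
A runnable sketch of delta-rule training (the pattern sizes, learning rate and random target/foil patterns are my choices):

```python
import numpy as np

rng = np.random.default_rng(2)

eps, n_in = 0.1, 9
w = rng.normal(0, 0.1, n_in)

def output(x, threshold=0.5):
    return float(w @ x > threshold)          # binary perceptron output

def delta_rule(x, target):
    global w
    w += eps * x * (target - output(x))      # Wij <- Wij + eps*Xj*(t - o)

target_pattern = rng.integers(0, 2, n_in).astype(float)
foils = [rng.integers(0, 2, n_in).astype(float) for _ in range(5)]

for _ in range(100):
    delta_rule(target_pattern, 1.0)          # pattern to be detected: target 1
    for f in foils:
        delta_rule(f, 0.0)                   # foil patterns: target 0

print(output(target_pattern), [output(f) for f in foils])  # typically 1.0 vs 0.0s
```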


The delta rule only learns connection weights, but the threshold value is also important, as it is a determinant of the decision boundary. ... The threshold can be treated as an extra weight: add an input X0 = -1, and then its weight W0 can serve the same purpose as the threshold value, i.e. W0 = T. ... Each output is trained using the delta rule independently of the others.

... Hidden processing units can give the perceptron more power, so the single-layer perceptron becomes multi-layer. ... The internal representation is not known, so there must be error back-propagation to change each layer of the network, using a generalised delta rule. ... In this case, changing a connection weight from any active neuron anywhere in the network will change the output firing rate slightly. ... The effect of changing a connection weight depends on the values of the errors, the intervening connections and the activations upstream.
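
A minimal runnable sketch of the generalised delta rule (the toy problem, XOR, which a single-layer perceptron cannot solve, plus the architecture and learning rate are my choices):

```python
import numpy as np

rng = np.random.default_rng(3)

def with_bias(A):
    return np.hstack([A, np.ones((len(A), 1))])   # bias as an extra input of 1

X = np.array([[0,0],[0,1],[1,0],[1,1]], float)
T = np.array([[0],[1],[1],[0]], float)

W1 = rng.normal(0, 1, (3, 4))             # input (+bias) -> hidden weights
W2 = rng.normal(0, 1, (5, 1))             # hidden (+bias) -> output weights
sig = lambda a: 1 / (1 + np.exp(-a))

for _ in range(10000):
    H = sig(with_bias(X) @ W1)
    O = sig(with_bias(H) @ W2)
    d_out = (T - O) * O * (1 - O)                 # delta at the output layer
    d_hid = (d_out @ W2[:-1].T) * H * (1 - H)     # error propagated back via weights
    W2 += 0.5 * with_bias(H).T @ d_out            # generalised delta rule: each
    W1 += 0.5 * with_bias(X).T @ d_hid            # change uses upstream activity

print(O.round(2).ravel())   # typically approaches [0, 1, 1, 0]
```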

Unsupervised learning: Hebb's law. ... However, it does not take the temporal factor into account. ... If the post-synaptic neuron fires before the pre-synaptic neuron, there tends to be LTD.

Hopfield's (1982) associative memory network uses unsupervised learning, but the symmetrically connected recurrent network and limited storage capacity make it less biologically plausible.

Error back-propagation in supervised learning is not biologically plausible, because the connection weights are changed using non-local information.

Question 4

Place cells fire whenever a rat is in a given location (O'Keefe and Dostrovsky 1971). ... They are mainly found in CA1 and CA3 of the hippocampus proper, but have also been found in the subiculum and entorhinal cortex. ... However, in narrow-armed mazes, where the animal is running back and forth on linear tracks, the place cells fire differently depending on the direction of movement of the rat. ... If all cues are removed, the place fields might rotate slowly, because the animal no longer knows its definitive location. ... Hippocampal pyramidal cells are the final, output layer of the device.

Sensory inputs -> competitive learning -> entorhinal cortex cells with functions similar to place cells, but not as accurately tuned -> competitive learning -> place cells in the hippocampus (CA1 and CA3).

O'Keefe and Burgess (1996): place cells also get cues from boundary vector cells (BVCs). ... BVCs have sharper tuning for shorter distances and broader tuning for longer distances. ... Place fields are modelled as the thresholded sum of two or more BVC firing fields. ... Place cell firing in new environmental layouts can be predicted using BVCs (... 2009). ... The BVC model is good at capturing sensory aspects, but it doesn't take learning into account. ... Place field stability in a novel environment depends on NMDA receptors in CA3 (Kentros et al. 2002). ... (... 2002), and place cells distinguish locations with experience (Barry et al. 2006).

Head direction cells: Ranck (1984) first discovered them in the dorsal presubiculum. ... Their primary correlate is the azimuthal orientation of the head in the horizontal plane. ... (2005). ... Nearby grid cells have grids of similar orientation and scale, but the grids are shifted so as to tile the environment.

Describe how these cells could be used to provide a spatial navigation system enabling the rat to return to a previously visited location (8 marks).

Zhang (1996) and McNaughton et al. ... Imagine an environment bounded by the x- and y-axes, with a rat running around in it. ... Each cell's location in the network reflects the location of its firing field in the environment (note that this is not how the cells are arranged in the brain). ... So a map is formed in which each cell in this network is strongly connected to those next to it and weakly connected to those far away. ... The bump (the z-axis of the plot) indicates the rat's location, and it moves as the rat moves. ... If the connections between all the cells solely reflect the cells' separation, so that nearby cells have stronger connections than far-apart ones, the set of 'bumps' of activation forms a continuous attractor. ... The cells at the current location excite the cells near them, but inhibit the cells far away, to ensure that there is only one 'bump', i.e. one location represented in the brain.
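
A hypothetical one-dimensional sketch of such a continuous attractor (all parameters are my choices): cells excite their neighbours and inhibit distant cells, and activity settles into a single stable bump.

```python
import numpy as np

n = 100
idx = np.arange(n)
dist = np.abs(idx[:, None] - idx[None, :])
dist = np.minimum(dist, n - dist)            # distance on a ring of cells
W = np.exp(-dist**2 / 50) - 0.3              # near: excitation; far: inhibition

r = np.random.default_rng(4).random(n)       # random initial activity
for _ in range(100):
    r = np.clip(W @ r, 0, None)              # recurrent update, rectified
    r /= r.max() + 1e-12                     # keep activity bounded

print(int(np.argmax(r)))                     # location of the single bump
```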

The bump can be moved by adding asymmetric connections in a particular direction. ... 'Shifter' cells have been proposed (Burgess and Barry 2014). ... The first scheme bases firing rate on direction and speed.

The multiplicative synapses introduce asymmetric connections that make the bump of activity shift along that direction. ... However, multiplicative cells do not exist in the brain. ... These shifter cells give input to the next cell along in the network. ... An asymmetric connection is added by having a place cell receive excitatory input from a neighbouring place cell and its shifter cell. ... This model is slightly more realistic. ... The network 'wraps around' because of orientation phase. ... The connections can be fine-tuned regardless of the specific location and environment.

Conjunctive (grid and direction) cells (Sargolini et al. ...): the grid cells' firing rate is modulated by direction in this case. ... These shifter cells are found in the grid cell network, not the place cell network, which is further inferential evidence for grid cells' role in path integration. ... Grid cells form the continuous attractor network that performs path integration, but they also get sensory inputs from the place cells to reset the error, as the place cells are linked to sensory cues. ... So these two networks are interconnected, both with slightly different roles but both giving inputs to the other, so that their activity can be updated when they don't have the input they need. ... This supports the idea that place cells 'attach' grid cells to the sensory environment (Barry et al. ...).

Briefly discuss how "latent learning" in the absence of a goal and "temporal credit assignment" over a sequence of movements might occur within spatial navigation (4 marks).

... The previous exploration tunes the place cells, which helps the animal when it does get the reward. ... (1997). ... Then suppose that some cells downstream of the place cells code for the goal location, so that these 'goal cells' receive synaptic inputs from the place cells. ... A goal cell will fire most strongly when the rat is at the goal location, and its firing will decrease as the rat moves away from the goal location. ... This means the rat just has to move around so as to increase the firing rate of the goal cell in order to get back to the goal. ... The goal cell provides an internal estimate of how well the animal is doing.
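
A toy sketch of this idea (entirely my illustration): the goal cell's rate falls off with distance from the goal, and an agent that only keeps movements that increase that rate ends up back at the goal.

```python
import numpy as np

rng = np.random.default_rng(5)
goal = np.array([0.8, 0.2])

def goal_cell_rate(pos):
    # firing decreases smoothly with distance from the goal location
    return np.exp(-np.sum((pos - goal) ** 2) / 0.1)

pos = np.array([0.1, 0.9])
for _ in range(500):
    step = rng.normal(0, 0.05, 2)                       # candidate movement
    if goal_cell_rate(pos + step) >= goal_cell_rate(pos):
        pos = pos + step                                # keep rate-increasing moves

print(pos.round(2))                                     # ends up near [0.8, 0.2]
```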

Brown and Sharp (1995) developed another model, which involves the use of a recency-weighted cumulative Hebbian rule to solve the temporal credit assignment problem. ... There are two complete copies of the place cells, and they project to the nucleus accumbens (NAcc). ... There are inhibitory connections between these two groups of cells.

When the animal reaches the reward location, the connection strengths are altered.

The head direction to NAcc connections use the recency-weighted cumulative Hebbian rule. ... If the animal keeps track of this information, then when it does finally get to the goal, it can look back at which connections recently had Hebbian activity and give those connections the deserved 'credit' for taking it to the goal.
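
A sketch of one way such a recency-weighted rule could work (an eligibility-trace formulation; the decay and rates are my assumptions): Hebbian co-activity leaves a decaying trace, and reaching the reward converts the recent traces into weight changes.

```python
import numpy as np

rng = np.random.default_rng(6)

n_syn, decay, lr = 10, 0.9, 0.5
trace = np.zeros(n_syn)                  # recency-weighted Hebbian activity
w = np.zeros(n_syn)

for step in range(20):
    coactive = rng.random(n_syn) < 0.2   # synapses with Hebbian co-activity now
    trace = decay * trace + coactive     # recent activity counts for more

w += lr * 1.0 * trace                    # reward reached: credit recent synapses
print(w.round(2))                        # recently active synapses earn credit
```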

Question 5

Hopfield's (1982) associative memory network is a form of unsupervised learning. ... Learning occurs by imposing a pattern of activation and using the Hebbian rule to change the connection weights.

To support a pattern of activation, the connections should be positive between units in the same state and negative between units in different states. ... The update rule changes each unit's activation to reduce the overall frustration, until the network ends up in a stable state from which the frustration cannot be reduced further.

Basically, the idea is to impose a pattern of activity onto these neurons so that they remember that activity: if one later presents an input pattern with similar activity, the network will recall the original, imposed activity, thanks to the changed connection weights.

A minimum is a stable pattern of the network, the state of least frustration/energy of the system. ... During retrieval, or pattern completion, the partial cue starts outside the minimum; the network then propagates activity around until it finds the stable state, decreasing its energy to the minimum.

In auto-associative mode, spurious memories often form. ... The memory capacity is limited to about 0.14 of the number of neurons connected to each other.
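
A runnable sketch of Hebbian storage and pattern completion in a Hopfield-style network (sizes are my choices, kept well below the capacity limit):

```python
import numpy as np

rng = np.random.default_rng(7)

n = 100
patterns = rng.choice([-1, 1], size=(5, n))    # 5 patterns << 0.14 * n

# Hebbian weights: positive between units in the same state,
# negative between units in different states; no self-connections
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)

def recall(cue, steps=5):
    s = cue.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)        # each update lowers the energy
    return s

cue = patterns[0].copy()
cue[:30] = rng.choice([-1, 1], 30)             # corrupt part of the cue
print(np.mean(recall(cue) == patterns[0]))     # ~1.0: the pattern is completed
```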

Also, learning and retrieval have to be separated. ... So the brain needs to differentiate encoding and retrieval of cues, otherwise spurious memories form. ... When a new pattern is imposed, the detonator synapses are activated. ... During learning, the pattern is imposed on the network with such strength that it overrides any retrieval.

CA3 neurons in the hippocampus have recurrent connections to other CA3 neurons. ... This is a classic network of the hippocampus.

- CA3 neurons receive input from the perforant pathway from the entorhinal cortex; these are weak, distal synapses.
- CA3 neurons also receive input from the dentate gyrus (DG).
  o These inputs are small in number but very powerful: a single DG synapse can make a cell fire; they are 'detonator' synapses.

The other set of inputs, from the perforant path of the entorhinal cortex, could be used for retrieval. ... The EC then presents a partial cue and causes retrieval of the pattern.

(1995). A way to separate learning and retrieval is by using a neuromodulator. ... ACh comes from the medial septum, and it changes the properties of neurons in CA3 to switch them between learning and recall. ... So during learning there is a large flood of ACh, which suppresses the recurrent excitatory connections, enhances synaptic plasticity and boosts the detonator synapses.

There are problems with this model: it is slow, as neuromodulators take a few seconds to diffuse. ... It is known that there is a lot of acetylcholine release from the medial septum in rats when they are in a novel environment; ACh modulates synaptic plasticity in the hippocampus; and it is known that ACh suppresses recurrent connections.

1. Perforant path synapses in the dentate gyrus undergo self-organisation to form new representations of input from the entorhinal cortex.
2. ... Excitatory recurrent connections in stratum radiatum (in the hippocampus) mediate auto-associative storage and recall of these patterns.
3. ...
4. ...
5. The comparison of recall activity in region CA3 with direct input to region CA1 regulates cholinergic modulation, allowing a mismatch between recall and input to increase ACh, and a match between recall and input to decrease ACh.

There are two ways in which serial learning might be implemented in neural networks: associative chaining or competitive queuing. ... The network uses asymmetric recurrent connections and has feedback from the output neurons to the input neurons, leading to dynamic output, e.g. ...

Jordan (1986) and Elman (1990) were interested in motor control and in how a stereotyped sequence of outputs can be produced for a particular input, by using error back-propagation.

The Hopfield associative network can also be modified by abandoning the assumption that weights are symmetrical, and by modifying the learning rule to associate each pattern with the next. ... What happens when there is a repeat? E.g. spelling out 'every'. ... Another example is learning to speak: cat, tack and act.

A solution to this is to have context-specific coding, a workaround method (Wickelgren 1969). ... This is similar to the way Jordan/Elman networks eventually learn to solve the problem using hidden units, but you need a lot of different units, and separate learning in each unit, which is labour-intensive. ... Take a look at the errors that the brain makes, e.g. when trying to remember a sequence of numbers. ... This doesn't fit well with the associative chaining model. ...

Competitive queuing has been argued to be a more plausible mechanism.

Using the same cat, tack and act example again: ... the sound units are connected to a competitive filter, so the most active unit wins and performs the associated action, and then the active unit is suppressed by lateral inhibition.

This network has a dynamic control signal, so it can deal with repeats. ... There are one-to-one excitatory connections from item units to the corresponding filter units. ... Filter units have mutual inhibitory connections (lateral inhibition). ... E.g. the first activity pattern is associated with E, the second with V, the third with E again, and so on. ... When the context signal is replayed, the item units are reactivated in the order in which they were perceived. ... It is not that R will lead back to E; in fact, if R is stopped from firing by some temporary noise or something, R is still going to be the most active neuron. ... So when something is missed out, it comes back in the next step, making a transposition error. ... Dynamic control signals allow the representation of repeated items.
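
A minimal sketch of competitive queuing (my illustration): items are given a graded activation 'plan', the competitive filter picks the most active item, and the produced item is suppressed so that the next can win; repeats work because each serial position is its own item unit.

```python
import numpy as np

items = ['E', 'V', 'E', 'R', 'Y']             # the repeated E is its own unit
act = np.array([5.0, 4.0, 3.0, 2.0, 1.0])     # planned order as a gradient

produced = []
for _ in items:
    winner = int(np.argmax(act))              # competitive filter: one winner
    produced.append(items[winner])
    act[winner] = -np.inf                     # suppression after production

print(''.join(produced))                      # EVERY
```

If noise briefly suppresses an item, the next most active item is produced instead and the suppressed one wins on the following step, which is exactly a transposition error.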

Question 6

... Basically, each neuron has a preferred firing direction, and for a given movement direction the neurons with different preferred directions have different firing rates. ... The population vector is a good way of averaging across all the noisy neurons to get a good overall estimate of which direction the neurons are encoding.
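
A runnable sketch of a population vector read-out (the cosine-like tuning and all parameters are my choices):

```python
import numpy as np

rng = np.random.default_rng(8)

n = 50
preferred = rng.uniform(0, 2 * np.pi, n)      # preferred direction per neuron
true_dir = np.deg2rad(60)

rates = np.clip(np.cos(true_dir - preferred), 0, None)   # tuned firing
rates += rng.normal(0, 0.1, n)                            # noisy neurons

# sum each neuron's preferred-direction vector, weighted by its firing rate
pv = np.array([np.sum(rates * np.cos(preferred)),
               np.sum(rates * np.sin(preferred))])
print(round(np.degrees(np.arctan2(pv[1], pv[0])), 1))     # close to 60 degrees
```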

...and summarise some of the evidence that this type of coding is used in the brain (4 marks). ...

Population vectors in motor cortex neurons play a role in planning the direction of movement; they are not simply a feedback signal of the direction of movement from proprioception (Georgopoulos et al. ...). In this experiment, the monkey was trained to reach at 90° to the direction of an LED light. ... As time goes on, the population vector increases in size and points 90° away from the LED.

Population vectors can also be used for place cell and head direction cell firing. ... However, not all values are represented equally: there is a central tendency. ... Population vectors have also been proposed to be used in inter-aural time difference detection (McAlpine et al. ...).

What is a 'convergent force field' for limb movement following spinal stimulation (6 marks)?

There are several problems in the control of limb movement:
- Under-determined joint configuration
- Unknown muscle tensions
- Unknown dynamics: how to get to the desired end configuration

A solution in the spine has been proposed, using convergent force fields (CFFs) in the limb and the equilibrium-point control hypothesis. ... (1995). ... When the lateral ventral horn is stimulated, large-amplitude movements are generated in the same direction from wherever the frog starts. ... When he stimulated the lateral neuropil region (interneurons), he found that patterns of movement were generated that converged on a particular equilibrium position.

So depending on where the frog's leg starts, the force generated varies so as to take the leg to a single location, called the equilibrium position. ... All the equilibrium positions are usually at the edge of the limb's possible movements. ... There is complicated wiring in the spinal cord, whereby a blob of activity caused by glutamate is used to produce a movement that is always towards the same location. ... The animal can combine CFFs to form an intermediate equilibrium position by taking the average force vector at each location. ... Only a small number of distinct CFFs were found. ... So by combining them, the leg can move to any intermediate position. ... Instead of the brain having to send complicated combinations of activations of motor units, it can just send activity to a small number of CFF generators.


The experiment was done in frogs. ... Primates have more direct projections from the CNS to individual muscles, without much processing in the spine. ... Gross limb movements, though, may still be controlled from the spinal cord.

Lukashin et al. ... These neurons then project to a small number of CFF generators, simulating what is going on in the spine. ... Appropriate connection strengths were found using a non-biologically-plausible computer search algorithm.

They demonstrated that the motor cortical population vector (15 neurons) could be connected to 4 CFF generators so as to control a 6-muscle artificial arm. ... Some argue, though, that this algorithm is only biologically plausible in the rat or frog, not in primates.
...
Describe the neural processing of odours by rats (5 marks) and how it can enable the rat to detect a weak familiar smell among other stronger familiar smells (5 marks).

(... 1990): rodents can detect weak smells that are important to them, e.g. the smell of chocolate signifying food, or the smell of a cat signifying a predator. ... This rhythm is related to the theta rhythm in the hippocampus and olfactory bulb. ... The activity has cyclic inhibition via the medial septum at 10 Hz.

Mitral cells project to the piriform cortex, which is the first part of the neocortex that gets olfactory input. ... Mitral cells project to layer II pyramidal cells with excitatory connectivity. ... This is inhibitory feedback. ... So neurons which are always representing the same familiar odour tend to fire at the same time. ... So if there is a little bit of the smell of chocolate, it will activate the other neurons that respond to chocolate. ... Different subnets represent different smells. ... The layer II neurons always project back inhibitorily into the olfactory bulb. ... This is a neural implementation of competitive queuing.

The authors researched this mechanism via a simulation experiment. ... Odour A was twice as strong as odour C. ... After learning what A and C smell like separately, on the first sniff of the A/C odour mixture all the A neurons reactivated each other and inhibited the firing of the C neurons. ... Then, on the second sniff, the C neurons were active, as the A neurons had inhibited each other.


Rescorla-Wagner rule (1972). ... The model consists of a stimulus neuron and a reinforcement neuron, and a weighted connection between them.

If the stimulus is present, the connection weight should change. ... So the connection weight becomes its previous value plus a small positive constant multiplied by the size of the reward. ... When the reward is omitted, it becomes its previous value multiplied by a value that is less than one, so the connection weight decays. ... Delta is the actual reward delivered minus the expectation of the reward, which is the net input strength S x w:

δ = r - wS

If the stimulus is not present, S = 0.
If r = wS, then δ = 0.
If r > wS, then δ is positive, so the connection weight increases.

When there are multiple stimuli, there is one delta shared by all stimuli:

δi = δ = r - V;  V = Σi wi Si

V is the expected reinforcement r given all the stimuli. ... If the reward is present but S2 wasn't, then the connection weight from S2 to r won't increase, because of the delta rule. ... He used blocking and overshadowing. ... Another problem is time: it is important that S2 precedes S1 for the association.
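
A runnable sketch of the shared-delta Rescorla-Wagner rule, showing blocking (parameters are my choices):

```python
import numpy as np

eps = 0.2
w = np.zeros(2)                      # weights from stimuli S1, S2 to reward

def trial(S, r):
    global w
    V = w @ S                        # expected reinforcement from all stimuli
    delta = r - V                    # one shared prediction error
    w += eps * delta * S             # only present stimuli change their weights

for _ in range(50):
    trial(np.array([1.0, 0.0]), 1.0)     # phase 1: S1 alone -> reward
for _ in range(50):
    trial(np.array([1.0, 1.0]), 1.0)     # phase 2: S1+S2 -> reward

print(w.round(2))                    # w1 near 1, w2 near 0: S2 is blocked
```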

Temporal difference learning (Sutton). We need V(t) to predict the sum of future rewards, not just r(t) (the immediate reward), so that we can learn S2 -> S1 -> R; i.e. we want the value function to average over all possible future actions and things we don't understand. ... For times τ greater than or equal to t, what is the sum of all the rewards that the animal might expect in the future?

If V(t) = < Στ≥t r(τ) >, then V(t) = r(t) + V(t+1)

r(t) is the current reward and V(t+1) is the estimate of subsequent reward; the prediction error is therefore δ(t) = r(t) + V(t+1) - V(t). ...

A model network needs a representation of time as an input. ... In experiments, time is usually represented within each trial run for the animal. ... As time passes and things happen, the connection weights should change to estimate the value of the situation, in terms of how much reward we predict to get in the future.


So in trial one, V(t) is zero, as there is no expectation of reward; δ is also zero, as r(t) is zero, V(t+1) is zero and V(t) is zero. ... The input neuron that was active when the reward occurred would be active, and its connection weight to V(t) increases.

δ(t) is then positive just before the reward, because r(t) = 0, V(t+1) = 1 (from the previous learning) and V(t) = 0. ... The actual reward is 1 and V(t) is 1, because we have fully increased our estimate of the value of tR through the increase in connection weight, i.e. the reward is fully predicted.

On trial 3, V(t) is positive for tR and tR-1. ...

Many trials later, the expected future reward V(t) increases as soon as the CS occurs. ...
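
A runnable sketch of temporal-difference learning, with time represented as one input unit per step of the trial (CS at step 2, reward at step 8; all parameters are mine):

```python
import numpy as np

T, eps = 10, 0.3
w = np.zeros(T)                 # V(t): one value weight per time step
cs_t, reward_t = 2, 8

for trial in range(200):
    for t in range(cs_t, T - 1):
        r = 1.0 if t == reward_t else 0.0
        delta = r + w[t + 1] - w[t]     # delta(t) = r(t) + V(t+1) - V(t)
        w[t] += eps * delta             # weight of the unit active at time t

print(w.round(2))   # value rises at the CS and is sustained up to the reward
```

Over successive trials the positive delta moves backwards from the reward time towards the CS, just as in the trial-by-trial account above.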


Operant conditioning also occurs, where animals learn to gain rewards based on their actions. ... This is useful in situations where there is no immediate reward.

"Actor-critic" architecture: use the value function V to decide on actions so as to maximise the expected reward. ... Dopaminergic neurons send axons to the neocortex, the striatum, and cortical structures such as the amygdala and hippocampus.

Dopamine responses can be interpreted as δ(t) (Schultz et al. ...).

When the animal has been trained to expect reward after a CS, the dopaminergic neurons fire just after the CS, but no longer at the reward.

Delta is the reward prediction error. ... After training, delta is positive just after the CS, because V(t+1) = 1, and zero at the reward, because the reward is fully predicted. ... The dopaminergic neurons' firing rate mirrors the delta value, so they could be signalling delta.

Initially, the policy is random. ... The ventral striatum (nucleus accumbens) receives projections from the hippocampus, so it has more to do with estimating value. ... So they can act as an 'actor-critic' system.

In the middle of training, the firing might shift to between CS and R. ... The putamen shows the strongest match to this fMRI signal (Seymour et al. ...).

