Title: Perception
Description: Topics covered include light, eye and brain structure and function, spatial vision, depth perception, motion perception, colour vision, multisensory processing, visual development, object perception, face perception.
FINAL EXAM STUDY NOTES PERCEPTION
Week 7 Light, Eye, Brain, and Spatial Vision
...
The eye and transduction of light
Stimulus is light
...
Light can be rays, waves, or particles
...
Light wavelength is depicted in the electromagnetic spectrum
...
Contrast Light intensity – the intensity of a light source ultimately depends upon the no
...
But because not all wavelengths are visible, light intensity is usually
specified in photometric units to account for human sensitivity
...
Sunlight and starlight)
...
Contrast is a
useful measure of relative luminance
...
Some surfaces reflect a greater proportion of the light
that hits them than other substances
...
White paper reflects 75% light, whereas black paper
only reflects 5% light
...
When Lmax = Lmin,
the contrast is zero, which means there is nothing to see
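The contrast measure the notes use here is Michelson contrast, C = (Lmax − Lmin)/(Lmax + Lmin). A minimal sketch, using the white/black paper reflectances quoted above:

```python
# Michelson contrast: C = (Lmax - Lmin) / (Lmax + Lmin).
# The 75% / 5% reflectance figures are the ones quoted in the notes.

def michelson_contrast(l_max, l_min):
    """Return contrast in [0, 1]; 0 means nothing to see."""
    return (l_max - l_min) / (l_max + l_min)

# White paper (75% reflectance) next to black paper (5%):
print(michelson_contrast(75, 5))   # 0.875
# Equal luminances -> zero contrast, nothing to see:
print(michelson_contrast(50, 50))  # 0.0
```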
...
The eye is made up of the cornea (transparent membrane surrounding eye ball which
controls admission of light), the pupil (where light enters the eye), the lens which enables
change of focus through ciliary muscles, and the retina, which is a light-sensitive membrane in
the back of the eye that contains rods and cones and which receives an image from the lens
and sends it to the brain through the optic nerve
...
Focusing involves light rays from a single point, which spread in multiple directions, being
recombined by the lens to form a single point on the imaging surface
...
A suitably shaped lens will refract rays so that
they converge back to a point after they emerge from the lens
...
The cornea is curved and so refracts light by a constant amount, and the lens refracts
light by a variable amount
...
Focusing errors can be attributed to failure to
accommodate, for example
...
Light
is focused in front of the retina, so concave corrective lenses are needed
c) Hyperopia (long-sighted) is where optical power is too weak and focal length is too
long
...
Vertical lines
vs
...
Transduction: the retina of each eye contains over 100 million photoreceptors (PR) which convert light energy
into neural activity
...
PR are either rods or cones
...
Rods are more numerous than cones
...
Rods and cones pass electrical impulses to
ganglion cells which have long axons that exit the eyeball via a bundle called the optic nerve
...
The
retina is a complex network of bipolar, horizontal, and amacrine cells through which PR
responses converge onto ganglion cells
...
The
fovea (macula) has many receptors, and the optic disc has no receptors
...
The firing of a ganglion cell
could be affected by light falling over a range of locations on the retina
...
Receptive fields for foveal vision are smaller and more densely packed,
whereas further out in the periphery they are larger and less dense
...
The visual pathway carries responses from the retina to the visual cortex
...
Retinal ganglion cell axons terminate at the LGN; there is then a crossover at the
optic chiasm (partial decussation) where fibres carrying responses from the left visual field of
each eye terminate in the right LGN and vice versa
...
The visual cortex
comprises the major receiving area (V1) and numerous extrastriate areas, which have a
theory of function as what vs. where, or perception vs. action
...
These comprise V2,3,4,5 and
MST
...
Cortical cells respond selectively to stimulus orientation, movement direction, colour,
and binocular disparity
...
In all of the
cortical areas, cells are distributed in a highly organised pattern
...
Spatial vision
Receptive field: every ganglion cell has a RF, which defines the area of the retina within which
light must fall in order to influence the response of the cell
...
Centre-surround antagonism is lateral
inhibition by horizontal cells
...
The centre response reflects the
influence of bipolar cells and the opposing surround response is due to horizontal cells
...
If light occurs all over, or not at all, then it will result in spontaneous
activity only
...
On-centre cells respond to light increments, whereas off-centre cells respond to light
decrements
...
It also allows us to
compensate for intensity of light source through contrast
...
Also, the Hermann grid
grey patches are explained by centre-surround RFs (and their absence at fixation by the
smaller receptive fields in the fovea): when excitatory portions of the RF fit into the
intersections, with inhibitory surrounds overlapping the dark squares, we see grey patches
(there is more inhibition from the surround)
...
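The centre-surround antagonism above can be sketched as a difference of Gaussians: an excitatory centre minus a broader inhibitory surround. The sigma values and stimuli below are illustrative only, not physiological measurements:

```python
import math

# Centre-surround receptive field as a difference of Gaussians (DoG):
# excitatory centre minus broader inhibitory surround.
# Sigma values are illustrative, not physiological.

def dog_weight(x, sigma_c=1.0, sigma_s=3.0):
    centre = math.exp(-x**2 / (2 * sigma_c**2)) / sigma_c
    surround = math.exp(-x**2 / (2 * sigma_s**2)) / sigma_s
    return centre - surround

def response(image, pos):
    """On-centre cell response: weighted sum of light over the RF."""
    return sum(dog_weight(i - pos) * lum for i, lum in enumerate(image))

uniform = [1.0] * 41           # light everywhere
spot = [0.0] * 41
spot[20] = 1.0                 # small spot on the centre

# Uniform light roughly cancels (centre vs surround balance), so the
# cell stays near spontaneous activity; a centred spot drives it:
print(response(uniform, 20), response(spot, 20))
```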
The lateral geniculate nucleus (LGN) has 6 layers
...
Layers 3-6 have small cell bodies, small RF, high resolution, slow response, low
sensitivity, and process colour and are called parvocellular
...
Each layer is retinotopically organised (neighbouring cells have RFs next to
each other)
...
Each LGN cell within each layer receives information from
one eye only
...
Ipsilateral vs
contralateral eye
...
V1 is
retinotopic, with greater regions of the cortex devoted to the fovea for cortical magnification
...
Different cells
select for different orientations
...
If the orientation deviates from preferred, activity is reduced
...
Small bandwidth = sharp tuning and large bandwidth = broad tuning
...
There are orientation filters, spatial frequency
filters, colour filters etc
...
Within a column, cells have
the same preferred orientation or ocular dominance
...
Cortical cells show selectivity for stimulus orientation eg
...
A change in orientation from cell’s preferred, even slight,
produces a marked drop in response
...
Orientation tuning arises because the zones are elongated rather than
circular and so optimal stimulus is also elongated
...
They can be bar detectors or edge detectors
...
They respond to an oriented edge
anywhere within their RF
...
They
prefer stimuli with an end within the RF
...
The tilt aftereffect is when prolonged exposure to
tilted bars makes subsequently seen vertical lines appear tilted in the opposite direction
...
Gratings and spatial frequency: spatial vision relies fundamentally on detection of spatial
features, defined by spatial variations of image intensity (contrast), as in luminance gratings
...
They contain
alternating bright and dark bars and have four defining properties: contrast, spatial
frequency, orientation and phase
...
Contrast can vary between 0
...
Spatial frequency of a grating refers to the fineness of the bars and how many bars the
grating contains per unit of distance
...
Between adjacent peaks
...
Grating sensitivity at a given spatial frequency is established by measuring the minimum
amount of contrast required for an observer to reliably discriminate the grating from a
uniform field
...
Gratings of
sufficiently high SF disappear entirely from the image because their bars are too close
together
...
Progressively, more contrast is lost from the image as spatial frequency increases
...
Many
neurons in the early stages of visual analysis have receptive fields such that a grating of a given
SF will selectively activate cells whose RF sub-regions match the width of the bars
...
The shape of the CSF is thought to reflect the
responsiveness of the underlying RFs
...
Coarse-scale (blurry) carries info
about shape and structure whilst fine-scale (sharpness, edges) carries info about sharp edges
and surface textural properties
...
Fourier analysis much like with audition, any natural image can be decomposed into a large
collection of sine wave gratings at various spatial frequencies, orientations, contrasts, and
phases
...
Different populations
of neurons encode information over different ranges of spatial frequency in the Fourier spectrum;
that is, neurons act as narrowly tuned spatial frequency filters
...
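The Fourier idea above can be shown in one dimension: build a luminance profile from two gratings, then recover each component's amplitude by correlating with sines and cosines (a DFT done by hand). The frequencies and contrasts are invented for illustration:

```python
import math

# A 1-D luminance profile built from two sinusoidal gratings, then
# decomposed by correlating with sines/cosines at each frequency.

N = 64
signal = [1.0 * math.sin(2 * math.pi * 2 * i / N)      # low SF, high contrast
          + 0.3 * math.sin(2 * math.pi * 10 * i / N)   # high SF, low contrast
          for i in range(N)]

def amplitude(sig, freq):
    """Amplitude of the grating at `freq` cycles per signal length."""
    n = len(sig)
    s = sum(v * math.sin(2 * math.pi * freq * i / n) for i, v in enumerate(sig))
    c = sum(v * math.cos(2 * math.pi * freq * i / n) for i, v in enumerate(sig))
    return 2 * math.hypot(s, c) / n

print(round(amplitude(signal, 2), 3))   # 1.0
print(round(amplitude(signal, 10), 3))  # 0.3
print(round(amplitude(signal, 5), 3))   # no energy at 5 cycles
```

Each narrowly tuned spatial-frequency filter in cortex is, on this view, doing something like `amplitude` for its own preferred band.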
In neural terms, these mechanisms
correspond to populations of SF-selective neurons in the visual cortex
...
* Binocular Depth Cues
There are four properties to depth cues:
1. Binocular – could you get the same information with only one eye?
2. Non-pictorial – could the information be available in a photograph?
3. Oculomotor – visual information comes purely from the retinal image and
oculomotor information comes from the eye muscle signals
4. Ordinal vs. metric
...
Why do we have 2 eyes? Some animals have their eyes on the side of their head (laterally),
which is useful for prey species as it gives you 360 degree vision
...
Predator species have both eyes at the front (frontal) which allows visual
fields to overlap considerably, giving greater depth perception
...
Therefore, the disadvantage of having frontally placed eyes is
that it reduces total visual field, however, for objects in the binocular vision it gives us
binocular summation (two chances to see the object and lower detection/discrimination
thresholds), as well as depth perception (two separate viewpoints allowing us to extract
depth information)
...
Fixated objects fall on the fovea in each eye, but each eye has to be
pointing in the correct direction for fusion to occur
...
It
is binocular, non-‐pictorial, oculomotor, and metric
...
Each eye has a unique viewpoint, and hence sees subtly different things
...
These differences in the retinal position of
objects in the two eyes signal depth (Wheatstone)
...
So, the
position of the image in the two eyes is directly correlated with the object’s depth
...
BD is a visual, non-pictorial, binocular, and metric cue
...
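The geometry behind disparity can be sketched directly: each eye sees a point at a slightly different angle, and the angular difference relative to the fixated point is the disparity. The 6.5 cm interocular separation is a typical textbook figure, assumed here for illustration:

```python
import math

# Toy geometry of binocular disparity. Eyes sit at (-h, 0) and (h, 0),
# looking along z (all distances in cm).

EYE_SEPARATION = 6.5  # typical interocular distance, assumed

def eye_angle(eye_x, point_x, point_z):
    """Direction (radians) from an eye at (eye_x, 0) to a point."""
    return math.atan2(point_x - eye_x, point_z)

def disparity(point_x, point_z, fix_x, fix_z):
    """Angular disparity of a point relative to the fixated point."""
    half = EYE_SEPARATION / 2
    def vergence(x, z):
        return eye_angle(-half, x, z) - eye_angle(half, x, z)
    return vergence(point_x, point_z) - vergence(fix_x, fix_z)

# Fixate at 100 cm: a point 10 cm nearer has crossed (positive)
# disparity, a point 10 cm further has uncrossed (negative) disparity:
print(disparity(0, 90, 0, 100) > 0, disparity(0, 110, 0, 100) < 0)  # True True
```

A point on the horopter (here, the fixated point itself) has zero disparity, matching the definition in the notes.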
We can see from the above image that for the circle and the cross, the L and R images fall on
corresponding points of each retina; however, for the triangle, the half-image falls just to the
left on the left retina, but a long way to the left on the right retina
...
The horopter is a line of all possible locations where an object's half-image falls on
corresponding points
...
It is the locus
of points in space that yield single vision (anatomically identical points falling on
corresponding points on the retina)
...
But if we have two completely different viewpoints from each eye, how is it that we see one
object, rather than having double vision? Because of fusion
...
This usually occurs when there is poor depth
information and the eyes cannot converge to the target image, impairing the function of the
extraocular muscles
...
The horopter changes
depending upon your fixation distance
...
If you fixate on an object with both eyes, you converge on it
...
Focus relates to the shape of the lens; if the current state of the lens causes light rays
from a specific distance to meet at a single point on the retina, objects at this depth will be in
‘focus’
...
5cm apart from left to right
...
Many methods have been derived for this:
- Mirror stereoscope – devised by Wheatstone
...
The images need to be reversed, because they are being reflected off a mirror
...
First ever 3D viewing device
...
Eyes converge towards a central point and lenses divert the direction of gaze to two
different images
...
- Anaglyphs – not very popular these days, used mostly in the 1950s/60s
...
It allows multiple viewers, vergence
is natural, but the colour is a bit strange
...
It allows many viewers eg
...
- Temporally Interlaced Displays – left and right eye half-images are displayed one
after another on a TV/monitor
...
Can be used for modern TVs and vergence is natural
...
The flicker rate can be a problem
...
Can be viewed without glasses, but only allows one viewer
and there are limited viewing positions
...
You need to adjust vergence to align neighbouring regions
...
Magic Eyes
...
With RDS,
the left eye sees a bunch of dots and the right eye sees a bunch of dots, and when the two are
presented together we see two square regions (in depth) that were not present in either half
image
...
* Monocular Depth Cues
Familiar Size refers to the fact that many objects tend to be a particular size, and it is always
the case that as a thing becomes more distant, the retinal image becomes smaller
...
However, this only works for objects whose
size we know to be constant
...
It is visual, and pictorial
...
Perspective is the property of parallel lines converging in the distance, at infinity, allowing us
to reconstruct the relative distance of two parts of an object or landscape
...
This is a
pictorial, visual, metric cue
...
Faked perspective makes you think things are
closer/further or bigger/smaller
...
Closer things move more quickly, more distant things move
more slowly
...
Height in the visual field – assuming that things generally tend to rest on a ground plane,
things higher up in the visual field are often more distant
...
It is pictorial, visual, and
metric
...
More distant objects tend
to be lower in contrast, lighter, and more blue
...
) fade the image by scattering light
...
The contrast-depth relationship is dependent upon the local atmosphere,
so you need to be aware of that atmosphere to understand the distance
...
Occlusion is when near surfaces overlap far surfaces
...
However, this only
provides information to allow observer to create a ‘ranking’ of relative nearness, it is not
metric
...
Cast shadow is the way that light falls on an object and reflects off its surfaces; the
shadows cast by the object provide an effective cue for the brain to determine its shape and
position in space
...
The
separation between object and shadow represents the height of an object in the image
...
Attached shadow makes assumptions based on light source
coming from above
...
Rotating the image 180 degrees changes the perceived depth
...
Blur – the object being viewed foveally is usually in focus; blur varies with depth compared to
viewed object
...
We can use this to determine what object is in
front and which is behind
...
Accommodation is how much the lens changes shape to maintain a sharply focused image as
object distance varies
...
It is non-‐pictorial, oculomotor,
and metric
...
According to misapplied constancy scaling, the visual system
assigns a depth to an object, and perceived size is calculated with respect to this depth
percept
...
An
example of this is the Ames room, and Beever's pavement art
...
Week 9 Motion Perception
Motion is a change in position of an object with respect to time, relative to a reference point
...
It is observed by
attaching a frame of reference to a body and measuring its change in position relative to that
frame
...
Speed = Distance/Time OR Speed = Temporal freq/Spatial Freq
...
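The second form of the speed equation above falls straight out of the units: temporal frequency (cycles/s) divided by spatial frequency (cycles/degree) gives speed in degrees/s. A minimal sketch with an invented example grating:

```python
# Speed of a drifting grating: temporal frequency / spatial frequency.
# The 8 Hz / 4 cycles-per-degree figures are illustrative only.

def grating_speed(temporal_freq_hz, spatial_freq_cpd):
    """Drift speed in degrees per second."""
    return temporal_freq_hz / spatial_freq_cpd

# A 4 c/deg grating whose bars pass a point at 8 cycles/s drifts at:
print(grating_speed(8, 4))  # 2.0 degrees per second
```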
For motion to
be sensed indirectly, we would need to identify the moving stimulus first
...
Instead, motion is sensed directly, by specialised mechanisms (as shown by
adaptation experiments)
...
The Reichardt Detector
detects motion by spatio-temporal correlation and is a plausible model for how the visual
system detects motion
...
The stimulus must be of a specific speed
and direction for the receptor cells to fire
...
By combining neural units, we would be able to detect
motion (with the same speed) in either direction using the same two receptors
...
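The spatio-temporal correlation the detector performs can be sketched in a few lines: two receptors a fixed distance apart, with one signal delayed and multiplied against the other. The stimulus timings are invented for illustration:

```python
# Minimal Reichardt detector: delay receptor A's signal and correlate
# it with receptor B's (and vice versa); the difference signs the
# direction. Only a stimulus at the matched speed lines up the delayed
# and direct signals.

def reichardt(a_signal, b_signal, delay=1):
    """Rightward-minus-leftward correlation over time."""
    right = sum(a_signal[t - delay] * b_signal[t]
                for t in range(delay, len(a_signal)))
    left = sum(b_signal[t - delay] * a_signal[t]
               for t in range(delay, len(a_signal)))
    return right - left

# A pulse passing receptor A at t=1, then receptor B at t=2 (rightward):
a = [0, 1, 0, 0]
b = [0, 0, 1, 0]
print(reichardt(a, b))  # positive -> rightward motion
print(reichardt(b, a))  # negative -> leftward motion
```

Note the detector only cares that A fired and then B fired after the delay, which is exactly why apparent motion (below) drives it like real motion.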
An example of this is motion
on TV
...
Apparent motion stimulates
motion detectors in the same way as real motion; that is, Reichardt detectors do not care what
happens in between the two receptors, just that one receptor was stimulated and then the
other was stimulated
...
This effect is an optical illusion in which the wheel can appear to rotate
more slowly than its true rotation, appear stationary, or appear to rotate in the
opposite direction
...
We tend to perceive the motion
corresponding to the shorter displacement, hence fast motion can appear backwards
...
It
also appears to change direction, from clockwise to anti-clockwise, as a 60-degree clockwise
movement looks exactly the same as 30 degrees anti-clockwise
...
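The wagon-wheel reversal above is temporal aliasing, and the arithmetic can be sketched directly. With spokes every 90 degrees, a rotation step is only defined modulo 90, and we see the smaller equivalent displacement:

```python
# Temporal aliasing behind the wagon-wheel effect: with spokes every
# `spoke_spacing` degrees, a rotation step is ambiguous modulo the
# spacing; we perceive the smallest consistent displacement.

def perceived_step(true_step, spoke_spacing=90):
    """Smallest-magnitude displacement consistent with the frames."""
    step = true_step % spoke_spacing
    return step if step <= spoke_spacing / 2 else step - spoke_spacing

print(perceived_step(60))  # -30: appears to rotate backwards
print(perceived_step(90))  # 0: appears stationary
print(perceived_step(30))  # 30: veridical
```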
But then what happens if there are two simultaneous “flashes” over each of the receptors?
Motion is signalled in both directions (a T1 flash at A combines with a T2 flash at B to signal rightward
motion, and a T1 flash at B combines with a T2 flash at A to signal leftward motion)
...
This breaks the Reichardt detector as
a model for human motion perception
...
The job of the comparator is to determine which unit has more activity
...
If they both have the exact same level
of activity, there is no motion at all
...
This can be through
increased detection thresholds for similar stimuli, reduction of the perceived intensity of the
stimulus, a steepened rate of increase of intensity, or a biased percept
...
The stationary stimulus appears to move in the opposite direction to the original
(physically moving) stimulus
...
Neurons coding a particular movement reduce their responses with time of
exposure to a constantly moving stimulus; this is neural adaptation
...
Neural adaptation of neurons stimulated by downward movement reduces their
baseline activity, tilting the balance in favour of upward movement
...
An example of this is the
waterfall illusion
...
If we have a stationary stimulus, both
units respond at baseline, so the comparator registers no difference and we see no motion
...
The comparator registers more activity in the down unit than the up unit, so we see
downward motion
...
Then, when presented with another stationary stimulus, the up detector continues
at baseline but the adapted down detector is now lower than baseline, so the comparator
registers more activity in up unit, thus we perceive upward movement
...
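The opponent comparator described above can be sketched with two direction-tuned units. The baseline rate and adaptation amount are invented numbers for illustration:

```python
# Motion-opponent comparator: signal the direction whose unit is more
# active. Rates are illustrative, not physiological.

BASELINE = 10.0

def comparator(up_rate, down_rate):
    diff = up_rate - down_rate
    if diff > 0:
        return "up"
    if diff < 0:
        return "down"
    return "no motion"

# Stationary stimulus: both units at baseline.
print(comparator(BASELINE, BASELINE))      # no motion
# Downward stimulus: down unit fires above baseline.
print(comparator(BASELINE, BASELINE + 5))  # down
# After adapting to downward motion, the down unit sits BELOW baseline,
# so a stationary stimulus now reads as upward (the waterfall illusion):
adapted_down = BASELINE - 3
print(comparator(BASELINE, adapted_down))  # up
```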
Retinal ganglion cells → LGN → V1(4c) → V1(4B) → V2 → MT+
The lateral geniculate nucleus (LGN) consists of 6 major layers (magnocellular, parvocellular,
and koniocellular)
...
however, we still do not have
motion selective cells yet, but cells that are selective for properties that are associated with
motion
...
Then, within MT+ (aka V5), we
have many motion sensitive cells with large receptive fields that have preferred directions
represented by columns
...
The cells
respond to relative motion/optic flow/self motion which is pursued by eye movement cells
...
Patient LM) where you can perceive change of position/size,
but no perception of motion per se
...
* Motion Illusions
The Aperture Problem is where motion direction of a 1D feature seen through an aperture is
ambiguous, given no other information we tend to see the direction perpendicular to the 1D
feature, when we can’t really know the actual direction at all
...
) and they respond to stimuli
within a fixed receptive field (aperture)
...
The Barberpole Illusion is an example of this
...
Motion appears to be along the long axis, but there
are two long axes: with a single oblique grating seen through a cross-shaped window,
motion appears upward along the vertical arm and rightward along the horizontal arm
...
A terminator is a motion signal at the edge of a feature which acts
to ‘fill in’ for the ambiguous central motion signals
...
They can be
intrinsic (at the edge of an object) or extrinsic (due to occlusion of the object by another, closer
object)
...
Depth cues can tell us which terminators are which, through
occlusion, disparity, and shadows
...
Physiologically speaking, end-stopped neurons process terminator
motion as early as V1 and involve hypercomplex cells
...
* Inflow vs. Outflow
...
Pursuit)
...
Inflow theory
(Sherrington) holds that eye movement signal comes from eye muscle proprioceptors which
sense muscle stretch and send signals to the comparator
...
For moving the eyes around with an afterimage: inflow has no
retinal motion but does have a muscle movement signal, and outflow has no retinal motion, but
does have an efference copy command
...
Week 10 Colour Vision
...
If the process is repeated,
there is no further decomposition, however, you can reverse the process so that colours
converge to form white again (recomposition)
...
Different wavelengths appear as different colours,
with human visible wavelengths ranging from 400nm-700nm
...
Light is a kind of wave
...
Different
pure, single wavelengths appear to us as different colours
...
Colour
is purely subjective experience, it does not exist in the real world and exists only in your head
...
* Reflections of light and Colour mixtures
...
However, colour exists on a continuous spectrum; the distribution of intensity across
wavelengths is what determines your colour percept
...
For example, black paper
absorbs most light of all wavelengths, and reflects very little, therefore appearing black
...
Yellow
paper appears yellow if it absorbs all wavelengths except yellow, which is reflected
...
Another misconception is that white and black aren’t colours, merely shades
...
Colour is not a physical thing you can measure, and white and black are experienced the same
way as any other colour
...
Though the
combination of short and long wavelengths (red and blue) make purple, the wavelengths of
light have not changed
...
These are additive colour mixtures
...
Most paint reflects more than just a single, pure wavelength of light
...
Yellow paint reflects mostly
yellow light, but also a bit of green and orange
...
The more colours of paint you add, the more light is absorbed/subtracted
and the darker the patch looks
...
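The additive/subtractive distinction above can be sketched with toy spectra over three bands. Superimposed lights sum; paints each absorb part of the light, so reflectances multiply and the patch darkens. All the numbers here are invented for illustration:

```python
# Additive vs subtractive colour mixing over three toy bands (R, G, B).

def add_lights(*spectra):
    """Additive mixing: superimposed lights sum, getting brighter."""
    return [sum(band) for band in zip(*spectra)]

def mix_paints(*reflectances):
    """Subtractive mixing: each paint absorbs some of the light, so
    reflectances multiply and the patch gets darker."""
    mixed = [1.0, 1.0, 1.0]
    for r in reflectances:
        mixed = [m * x for m, x in zip(mixed, r)]
    return mixed

red_light, green_light = [1, 0, 0], [0, 1, 0]
print(add_lights(red_light, green_light))    # [1, 1, 0] -> yellow

yellow_paint = [0.9, 0.8, 0.1]  # reflects mostly long/medium wavelengths
blue_paint = [0.1, 0.3, 0.9]
print(mix_paints(yellow_paint, blue_paint))  # darker overall, greenish
```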
Rarely in nature is there one single pure wavelength, just as with hearing and spatial
vision, there is a spectrum of waves of different intensities at different wavelengths
...
The great biological importance of photoreceptors is that they convert
light (visible electromagnetic radiation) into signals that can stimulate biological processes
...
Photoreceptors can be either rods or cones
...
Cones have several types, each tuned to a different wavelength,
are mostly concentrated in the fovea, are less sensitive than rods, and are used for day vision
(photopic vision)
...
Cones are not
sensitive enough in low luminance and so only rods respond
...
The reason for this is the Principle of Univariance,
which dictates that photoreceptor responses vary only along one dimension: a rod cell
can change its amount of firing only, it cannot change the type of firing
...
The cell cannot tell the difference
between colours
...
Therefore, it would have the exact same response to a
wavelength of 450nm (blue) as to 630nm (orange) light
...
Receptors show broad wavelength tuning
...
Hence, peak response is at 500nm, but smaller responses can be generated to
most wavelengths of visible light
...
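Univariance can be sketched numerically: because a receptor's output is a single number, a dim light at its peak wavelength and a brighter off-peak light can be indistinguishable. The Gaussian curve peaking at 500nm below is a rough stand-in for the rod pigment's broad tuning, not a measured spectrum:

```python
import math

# Principle of univariance, sketched with a toy rod sensitivity curve.

def rod_sensitivity(wavelength_nm, peak=500, width=60):
    """Illustrative Gaussian tuning curve, peak response at 500 nm."""
    return math.exp(-((wavelength_nm - peak) ** 2) / (2 * width ** 2))

def rod_response(wavelength_nm, intensity):
    """One number out: intensity and wavelength are confounded."""
    return intensity * rod_sensitivity(wavelength_nm)

# Pick intensities so 450 nm (bluish) and 630 nm (orangish) lights
# produce identical rod responses -> the rod cannot tell them apart:
i_450 = 1.0
i_630 = i_450 * rod_sensitivity(450) / rod_sensitivity(630)
print(abs(rod_response(450, i_450) - rod_response(630, i_630)) < 1e-9)  # True
```

Two physically different lights that produce the same receptor response like this are metamers for that receptor, which is the idea the next section builds on.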
Rods are “bleached out” (over-‐stimulated) in high
luminance, so colour vision is determined by cones
...
* Two receptor system and Trichromatic theory
The Two receptor System supposes that colour is represented by the relative activation of 2
cone channels/types
...
This would allow us to tell a wavelength change from an intensity change
...
So, for a
one-receptor system like rods, all wavelengths can be metamers for each other
...
With a 2-cone system we could match any given
wavelength by adjusting the intensities of almost any two other wavelengths
...
This is not generally the case for humans, except those that are colour blind
...
This is known as colour matching
...
Most use single wavelengths that appear red, green,
and blue
...
It is possible to match all the
colours in the visible spectrum by appropriate mixing of the three primary colours
...
The development of trichromatic colour perception
made it possible for organisms to discriminate red from green
...
For example, the activation pattern for red + green presented together is the same as
for yellow presented alone
...
Long wavelength (peak 560nm – reddish)
Medium wavelength (peak 530nm – greenish)
Short wavelength (peak 420nm – blueish)
Blue light is often blurred as there are very few blue cones in the fovea (they are more
sparse)
...
* Colour deficiency/Opponency/Constancy
Another common misconception is that red, yellow, and blue are primary colours, or that red,
green and blue are primary colours
...
If we are talking about the 3 different cone receptor types, we should
really refer to short, medium, and long wavelength-tuned photoreceptor types, rather than
colours
...
Colour deficiency is what many refer to as colour blindness
...
You still receive wavelengths but subjective
perception is deficient
...
d) Cerebral Achromatopsia – damage to the area V4 can cause complete “colour blindness”;
is a result of brain damage
...
The
majority of colour deficient people (dichromats and anomalous trichromats) can discriminate
many colours
...
Hering (1872) devised the theory of Colour Opponency
...
This explains
our everyday experience that some colours cannot be perceived simultaneously, suggesting that
cone photoreceptors are linked together, and activation of one member of the pair inhibits the
other
...
This led to the inference that we have two opponent
colour axes, red-green and blue-yellow
...
Cells have been found in LGN and V1 which show antagonistic cone inputs; R/G cells are
parvocellular, Blue/yellow cells are koniocellular, and B/W (luminance) are magnocellular
...
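The opponent recoding of cone signals can be sketched with simple differences. The combinations below (red-green = L − M, blue-yellow = S − (L + M), luminance = L + M) are a common textbook idealisation, not the exact physiological weights:

```python
# Opponent recoding of L, M, S cone activations into the two colour
# axes plus a luminance channel. Weights are an idealisation.

def opponent_channels(l, m, s):
    return {
        "red_green": l - m,
        "blue_yellow": s - (l + m),
        "luminance": l + m,
    }

# Long-wavelength light drives L more than M, so red-green goes
# positive while blue-yellow goes negative:
print(opponent_channels(l=0.9, m=0.3, s=0.05))
```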
Some also show a centre-surround arrangement: double-opponent cells
...
Colour constancy refers to the tendency for objects to generally look the same colour in a
wide variety of lighting conditions
...
Neurons demonstrating colour constancy are first found in area V4
...
Our conscious experience is the information integrated across all the senses
...
Temporal synchrony between
inputs from different sense modalities; information coming into the different senses from a
single event comes at the same time
...
Integrating visual and auditory signals in speech is important in noisy environments, in
hearing impaired people, for understanding unfamiliar language, and for building speech
recognition systems
...
The brain
infers that the visual and auditory information must be from the same source, and interprets
the input accordingly: the visual information influences what we hear
...
Crossmodal cueing refers to the tendency for irrelevant sounds to influence our visual
detection
...
We can conclude that irrelevant auditory stimuli can influence detection of a
visual target
...
Ventriloquist Effect is similar to the McGurk effect, but opposite
...
The ventriloquist effect shows that vision affects our auditory localisation; it
demonstrates the effect of temporal synchrony
...
The Rubber Hand Illusion illustrates how visual-tactile integration changes perception of
touch and body ownership
...
Important principles of
multisensory integration include: spatial correspondence between inputs from different sense
modalities, and temporal synchrony between inputs from different sense modalities
...
* Mechanisms of multisensory processing
The standard model of multisensory processing dictates a hierarchical model
...
and then
there is subsequent (later/higher) processing by multisensory ‘convergence zones’
...
Single cell recordings are based on the theory that
particular neurons might be multi-sensory
...
These types of neurons are found in the
superior colliculus
...
So, we know that the initial processing in unimodal sensory cortex is followed by subsequent
processing in multisensory convergence zones
...
For example, our visual
cortex most of the time is receiving information from our visual system; however, sometimes
auditory information is received by our visual cortex
...
This leads us to conclude that
there is a highly interactive network that integrates information from the senses for conscious
perception, and it is not just a feedforward hierarchical view
...
Superior colliculus, parietal, temporal, and prefrontal
cortex) and are superadditive (the response when both senses occur together exceeds the sum of the responses to each sense alone)
...
* Cross-modal mappings and Synaesthesia
We all have a tendency to ‘map’ input across senses systematically
...
Taste is
associated with angularity of visual stimuli, and pitch of auditory stimuli
...
and odours are associated with colour, pitch, and tactile
softness
...
Mapping of the senses is implicit
...
It is involuntary in the sense that there is no behavioural reason
to shift attention to the sound (in fact, it impairs performance)
...
There are a number of situations in which this
subtle effect might be important, for example, flying a plane
...
Synaesthesia is a phenomenon that seems to build on similar mechanisms but gives
much larger effects
...
Synaesthesia is the union of the senses; it is when perception of a specific stimulus
induces a concurrent and distinct experience in a separate modality, or within the same
modality
...
Some types of synaesthesia are more common than
others, the least common are colours of pain or when there is no colour associated at all
...
Week 11/12 Development of Vision
How do we measure vision? There are a number of measures that are used to try and
determine how it is that infants develop vision
...
Although you
can focus attention on something you are not looking at, 99% of the time we are
focusing on what we are looking at
...
Babies show a strong interest in stripes
up until the age of 12 months
...
- Indirect boredom measure: e.g.
...
This measures brain activity,
whether they are responding to a stimulus
...
- Visual Evoked Potentials (VEPs): a measure of brain activity to determine
whether there is a systematic signal over and above the random noise, and therefore a
response to a stimulus
...
The first is spatial vision; the ability to
detect a stimulus over no stimulus
...
Contrast sensitivity function (CSF) measurements also show babies are poor at high
spatial frequencies (fine detail), but are relatively better at low to medium
frequencies (represented by objects that tend to have large areas)
...
o, which is a
relatively long time compared to most other species
...
Vernier acuity is the measure of
displacement of a line/grating, i.e.
...
Acuity and Vernier acuity do not
develop at the same rate
...
At about
20 weeks of age we become better at Vernier acuity than simple acuity
...
Changes in phase and
orientation detection are restricted to dynamic stimuli, that is, the stimuli must be moving
and not static
...
Prior studies have been done using real motion and are
considered more reliable
...
The odd-man-out stimulus has one part of the stimulus moving in a different direction
to the other parts
...
If the baby has preferential looking for the odd
stimulus we can predict that they are detecting movement
...
Some
babies from a very early age (1 week) are able to detect looming stimuli; that is a moving
stimulus that is coming toward you
...
* Colour and depth/stereopsis
The one aspect of human vision that does seem to be almost innate is colour
...
Depth studied using checkerboard pattern on a glass plate that gives the illusion of a ‘drop’
...
* Face Perception
Faces appear unique in development discrimination
...
At about
10 months, what happens is perceptual narrowing; only the discriminations that we practice
persist
...
Environmental deprivation studies show that a kitten raised in a
vertically oriented environment will not respond to horizontal stimuli
...
The timing of deprivation is vital; it must be from birth
...
If deprivation occurs during the critical period then you will interrupt development of
sensory systems
...
* Problems in Vision
Strabismus is where the two eyes fail to align and end up looking at different points in space,
meaning they perceive different images and develop abnormal binocular cells
...
If this
occurs during the critical period, then the other eye will never develop sharp image cells:
a lazy eye
...
This condition is called amblyopia (blunted vision)
...
So is proper visual input to both eyes in the critical period enough to give us perfect vision? Active vs. passive vision: do we need active participation in our visual environment, or is it enough to get both eyes working properly through the critical period? There is evidence to suggest an interaction between having normal vision in the critical period and being active rather than passive, as shown by the study of the kitten carousel.
What happens if we distort our environment, for example by attaching prisms to glasses that invert vision (Stratton)? At first, perception and action are severely disrupted; however, after an initial period, adaptation is possible. Once you have recalibrated, the return to the normal world is strange and you need to readjust: it takes time to recalibrate to the previously normal world.
b) Yellowing – the lenses turn yellow with age, so more blue light is absorbed and we become less able to distinguish between colours such as blue and black.
...
Does not lead to loss of visual acuity, but can lead to a detached retina; most prevalent in short-sighted white males.
...
Dry AMD and wet AMD (age-related macular degeneration; in wet AMD, blood vessels in the retina leak); risk can be reduced by wearing sunglasses, quitting smoking, and eating brightly coloured vegetables.
Originally, prior to the operation, he had used a stick and liked to use his hands. But after a few days he was able to perceive the world without the use of touch. He was not surprised by what he was able to see, and he could learn to read by sight, but only for block letters, numbers, and things he was already familiar with through touch. He was able to recognise animals at the zoo, but he was unable to complete a drawing of a bus for the parts he had never experienced by touch. It is common for patients who were blind and then gain sight to develop depression and to revert to making themselves effectively blind again.
Road crossing.
Week 12 Shapes and Object Perception
This model reflects the hierarchical nature of visual processing. The ventral ('what') and dorsal ('where') brain pathways process more complex aspects of visual information (e.g. colour, motion, etc.). Damage to the earliest visual brain areas causes blindness (scotomas). Agnosia results from damage to the extra-striate cortex: the patient has access to the complete visual field, with normal colour, depth, and movement perception, but disordered object perception.
1) Apperceptive – deficient shape representation; a stage-2 disorder. 2) Associative – patients can draw what you ask them to, but when you ask them what the picture is of they cannot identify it. Prosopagnosic individuals are unable to recognise faces.
Proximity or nearness enables us to group what we see according to closeness. Similarity: elements that are alike (e.g. in size or colour) will be grouped together. Common fate: elements moving in the same direction are grouped together; the direction can be a straight line or a curve. This principle is similar to the similarity principle except that it works for moving elements (when people perform coordinated group dances or movements, this is what allows us to see shapes formed by the people).
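Grouping by proximity can be operationalised as clustering elements whose gaps fall below some threshold; a minimal one-dimensional sketch (the function name, threshold, and example positions are illustrative assumptions, not a model from the notes):

```python
# Hypothetical sketch: proximity grouping as 1-D gap-based clustering.
# All names and numbers here are illustrative, not from the notes.
def group_by_proximity(positions, max_gap):
    """Split sorted 1-D positions into groups wherever a gap exceeds max_gap."""
    if not positions:
        return []
    xs = sorted(positions)
    groups = [[xs[0]]]
    for left, right in zip(xs, xs[1:]):
        if right - left <= max_gap:
            groups[-1].append(right)   # close enough: same perceptual group
        else:
            groups.append([right])     # large gap: start a new group
    return groups

# Dots at 0, 1, 2 and at 10, 11 are seen as two clusters when max_gap=2.
clusters = group_by_proximity([0, 1, 2, 10, 11], max_gap=2)
```

This is only an analogy for the Gestalt principle: human grouping also depends on similarity, common fate, and context, not on distance alone.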
One of the biggest challenges is to recognise objects across different viewpoints. View-independent theories have been devised, such as that of Marr & Nishihara, which represented objects as generalised cones: 3D, hierarchical (parts within parts), with the object's axis being crucial. A view-dependent solution proposes multiple-view recognition (Tarr, 1995): to recognise an object from different viewpoints, we simply store how it looks from many different viewpoints.
Experience impacts object perception: both the way objects are represented (processed) and the brain areas recruited to process them. Misaligning the two halves of a composite face (e.g. one built from Hugh Jackman and Simon Baker) breaks this phenomenon, and we become better able to process the parts. At a neural level, with expertise we start to recruit the brain areas (the fusiform face area) once thought to be used only for faces when dealing with objects or categories of expertise.
We seem to be very good at face perception. We can automatically perceive someone's race, gender, identity, emotion, etc. People are predisposed to see faces even when there is no face.
Recognition/recall tasks involve a learning phase, in which the subject is exposed to a set of faces with instructions to remember them, followed by a retention interval. In order to remember something, we have to be able to encode it first (perception). Studies that use the same image (for the learning and recall phases in memory tasks, or for matching stimuli in perceptual experiments) may not reveal anything about the processing of faces per se: with identical images, the task can become image matching rather than face matching.
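Performance in such old/new recognition tasks is commonly summarised with signal detection measures such as d′, computed from hit and false-alarm rates. A minimal sketch (the log-linear correction and the example counts are illustrative assumptions, not from the notes):

```python
# Hypothetical sketch: scoring an old/new face recognition test with
# signal detection theory (d-prime). Example counts are illustrative.
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate).
    A log-linear correction keeps rates away from exactly 0 or 1,
    where the z-transform is undefined."""
    n_old = hits + misses
    n_new = false_alarms + correct_rejections
    hit_rate = (hits + 0.5) / (n_old + 1)
    fa_rate = (false_alarms + 0.5) / (n_new + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# e.g. 18/20 studied faces recognised, 4/20 new faces falsely "recognised"
score = d_prime(hits=18, misses=2, false_alarms=4, correct_rejections=16)
```

Higher d′ means better discrimination of studied from novel faces independently of response bias, which is why it is preferred over raw percent correct.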
The answer is yes, or at least probably. One of the things that makes us think faces are special is that faces can be recognised and discriminated by children at an extremely early age. Some infants can distinguish faces from 'scrambled' faces when only 1 hour old.
Newborns also prefer their mother's face over those of other women. When you jam different parts of faces together, it makes a whole new face percept in your head and you process it as a whole new face. Humans cannot avoid holistic processing of faces.
The part/whole effect dictates that when we only have parts of a face (such as a nose) we find it more difficult to identify the face. This affects faces more than any other object stimulus type. With an inverted face we do not engage all of the same processing, as we no longer recognise it as a face.
With inversion, there is poorer matching and recognition for inverted vs. upright faces. When a face is turned upside down you no longer get the composite face illusion, and you don't get the part/whole effect. There are also face-selective neurons, which have been shown in single-unit studies in monkeys.
Damage to the IT (inferotemporal) cortex can cause prosopagnosia, or 'face blindness', a selective impairment of visual face-processing abilities. Such patients simply cannot match, copy, or recognise faces. A common argument that faces aren't in fact special is that it is expertise that is special, rather than faces.
Familiarity makes a massive difference to face perception. We can easily identify familiar faces in pictures. Internal and external features can also affect recognition. External features (such as hairstyle) can change over time and so are unreliable at signalling identity.
External features are usually what make people incorrectly identify a face; this is the theory behind disguises. Witnesses are heavily influenced by external features, which are easy to change, and hence eyewitnesses are easily fooled.
Internal features are really only useful if you know the person.