Syllabus Experiments

Cognitive Psychology studies human functions such as perception, memory, and decision making, utilizing various behavioral research methods including reaction time measurements and neural activity assessments. Reaction time (RT) is a key dependent variable, often analyzed alongside accuracy to understand the speed-accuracy trade-off in cognitive tasks. Additionally, experiments like the additional singleton task and signal detection theory explore attention allocation and decision-making under uncertainty.


Introduction

Cognitive Psychology (which in the Netherlands is also called Function Theory or Psychonomics) studies general human functions such as perception (visual, auditory or tactile), language processing and production, memory processes, learning processes, decision making, error processing, judgment and reasoning, and action processes.

A lot of behavioral research presents participants with stimuli and measures their manual responses (reaction time, accuracy, estimates). The setups used are sometimes very simple (for example a computer and a display), but they can also be very complex and expensive (for example the driving simulators at TNO, the Dutch organization for applied scientific research).

In cognitive neuroscience, people are asked to do a task while their neural activity is measured with EEG or fMRI; the behavioral measurements are then associated with neural processes.


Other research in cognitive psychology uses questionnaires, as in safety research in companies and hospitals, or observations of behavior in natural situations (for example in research into problems with using complex equipment). Finally, there is also research in which insight into human functions is gained by designing (computer) models.

Research methods: Reaction time paradigms

Reaction Time as a Dependent Variable

A very important choice when setting up an experiment is that of your dependent variable: what are you going to measure of the participant's behavior in the different experimental conditions? Various dependent variables are used within cognitive psychology, such as an estimate made by the participant (e.g. of the size or intensity of a stimulus), the speed at which the participant responds, the percentage of correct answers, or the properties of a physiological response, such as the GSR (galvanic skin response), the pupil diameter, or the ERP (event-related potential).
Reaction time (abbreviated: RT) is widely used, almost always in combination with the percentage of errors made. The RT is defined as the time elapsed from the onset of a stimulus (for example a picture on a screen or the beginning of an auditorily presented word) to the onset of the response (such as pressing a button, starting to speak, the beginning of an eye movement, or the onset of a physiological response). The central question is then whether the participants respond faster or slower in some experimental conditions than in others.

In addition to the instruction "respond as quickly as possible", researchers usually also give the instruction "and try not to make too many mistakes". Note that this is not "make no mistakes at all": if a participant makes no mistakes whatsoever, you cannot be sure that he or she is really responding "as quickly as possible".

Too many mistakes is of course not good either, because then you run the risk that your participant gives up the attempt to be as fast as possible altogether. Moreover, with too many errors, possible differences between your experimental conditions may disappear.

Differences between Conditions

Researchers are not really interested in the RT of a single participant, and usually not in the RT of a single trial. Most experiments have at least 20 trials per condition. The researcher is interested in a person's average reaction time and in whether the RTs differ across experimental conditions.

(The book contains sections on RT tests and some charts after this part; refer to them there.)

Problems Using Response Times


RTs differ greatly between people and between trials. Many things affect them: participants not focusing well on some trials, blinking at the moment the stimulus is presented, swallowing before responding, being distracted by a noise outside the examination room, and so on. It is therefore important to measure a large number of reaction times to obtain a reliable score.

In practice, two methods are used to deal with this problem:

1. Instead of the mean reaction time of a participant in a condition, you use the median, which is much less sensitive to outliers.

2. You discard extremely long reaction times. This can be done by choosing a fixed limit (for example all times greater than two seconds) or a limit that is determined per participant and per experimental condition. For example: you delete all reaction times that differ by more than three standard deviations from a participant's mean in that condition.
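Both ideas can be sketched in a few lines of Python. This is a minimal sketch, not a prescribed analysis pipeline, and the RT values below are invented for illustration:

```python
import statistics

def summarize_rts(rts, limit_ms=2000):
    """Median RT, plus the mean RT after discarding times above a fixed limit.

    The median is robust to outliers by itself; the fixed limit (here 2 s)
    implements the simplest trimming rule from the text. The per-participant
    "three standard deviations" rule works the same way, with the limit
    computed from that participant's mean and SD in the condition.
    """
    med = statistics.median(rts)
    trimmed = [rt for rt in rts if rt <= limit_ms]
    return med, statistics.mean(trimmed)

# Hypothetical RTs (ms) with one extreme outlier (e.g. a lapse of attention):
rts = [430, 455, 442, 468, 450, 2900, 447, 461]
median_rt, trimmed_mean = summarize_rts(rts)
# The 2900 ms trial barely moves the median but would dominate a raw mean;
# the fixed limit removes it from the trimmed mean.
```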

Problems in Interpreting Results: the 'Speed–Accuracy Trade-off'

Imagine that you want to investigate the influence of different types of music on the performance of a certain task, such as an attention task. The experiment has one independent variable (Music), with two levels in this example: classical music (A) and pop music (B). Remember that the independent variable is the variable that is being manipulated. This variable is manipulated within participants: each participant does both Condition A and Condition B. Performance (reaction time and accuracy) on the attention task is the dependent variable. Remember that the dependent variable is the variable that you measure. The tables below show two possible outcomes of the experiment, giving the mean reaction time (in milliseconds) and the percentage of errors ("%error", also known as "error rates").

(The tables with the data points appear here in the book; refer to them there.)
The results in Table 2 are easy to interpret: in Condition A the participants have clearly shorter reaction times than in Condition B, to be precise 96 ms less. Statistical analysis of the data (in this case a paired t test, because every participant took part in both conditions) shows that the difference of 96 ms is statistically significant at a significance level of 5% (α = .05). Moreover, the error rates show that slightly more errors were also made in Condition B than in Condition A. The conclusion is therefore clear: the task is more difficult to perform in Condition B, listening to pop music, than in Condition A, listening to classical music.

However, the results in Table 3 present a problem. The reaction times are again shorter in Condition A than in Condition B, but the participants make almost three times as many errors in Condition A as in Condition B. The faster responses in Condition A apparently came at the cost of accuracy. Drawing a conclusion is more difficult now: after all, you cannot say that the task is more difficult to perform in Condition B than in Condition A. Perhaps Conditions A and B are equally difficult, but the participants decided to take a little more risk in Condition A; they have traded (lower) accuracy for (higher) speed. This is called the speed-accuracy trade-off. The situation in Table 3 will not occur often, but it should be clear that in experiments in which you measure reaction times, you should always inspect the error rates as well, to rule out an interpretation in terms of a speed-accuracy trade-off.
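The paired t test mentioned above can be sketched with the standard library alone. The per-participant mean RTs below are invented and are not the data from the tables:

```python
import math
import statistics

def paired_t(rts_a, rts_b):
    """Paired t statistic for the same participants measured in two conditions."""
    diffs = [b - a for a, b in zip(rts_a, rts_b)]
    n = len(diffs)
    t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
    return t, n - 1  # t value and degrees of freedom

# Hypothetical per-participant mean RTs (ms) in Conditions A and B:
rt_a = [510, 492, 534, 501, 488, 520]
rt_b = [601, 598, 625, 590, 577, 612]
t, df = paired_t(rt_a, rt_b)
# Compare t against the critical value for df degrees of freedom at α = .05
# (for df = 5, two-tailed, the critical value is about 2.57).
```

In practice you would look up the critical value in a t table or use a statistics package; the hand computation only serves to show what the test does with the paired differences.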


Experiment 1. Additional Singleton Task

Theory

Selective attention refers to our ability to concentrate on one event or piece of information while blocking out irrelevant information. For example, when you listen to an online lecture, you can focus your attention on the lecture while ignoring your family talking in the other room. In other words, you select relevant information while ignoring irrelevant information. This type of attention allocation is called voluntary or top-down attention. However, some events automatically capture your attention, even if you do not want them to: for example, one of your family members suddenly screams, the dog starts barking, or the broadcast logo on your screen suddenly starts to flicker. These events have in common that they stand out from their environment; they are 'salient'.

When your attention is attracted by a salient event, we call this involuntary or bottom-up attention. You voluntarily focus your attention on one task, but a salient stimulus in the environment can attract your attention involuntarily.

We can explore bottom-up and top-down attention with the additional singleton paradigm, which is an example of a visual search task. In visual search, a target is presented among other stimuli, and the observer must find the target as quickly as possible. Everyday examples are searching for your keys on a messy desk or searching for a link on a website.

Search studied with eye movements is called overt attention, but we can also investigate visual search without eye movements, which is called covert attention. In the experiment you are going to conduct, you will be asked to keep your eyes fixed on the fixation point in the center of the screen while a circular array of stimuli is presented. This means that you can only shift attention covertly to a location to find the target. The target is your top-down goal, and we can use reaction time as an index of shifts in attention.

The longer the reaction time, the longer it took for your attention to reach the target and for you to respond to it. In the additional singleton paradigm, an irrelevant but conspicuous distractor is sometimes presented. The salient distractor has nothing to do with your goal, so in principle you can ignore it. By comparing reaction times when the distractor is present versus absent, we can infer whether the salient distractor attracted attention. In CogLab Experiment 2, you will repeat the original study by Theeuwes (1992) and determine whether we can replicate the original findings.
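The distractor-present versus distractor-absent comparison amounts to a simple difference of mean RTs. The numbers below are invented for illustration, not Theeuwes's data:

```python
import statistics

# Hypothetical mean RTs (ms) per block: trials with the salient distractor
# present versus trials with it absent.
rt_present = [652, 648, 671, 660, 655]
rt_absent = [601, 597, 615, 608, 604]

capture_cost = statistics.mean(rt_present) - statistics.mean(rt_absent)
# A positive cost (here 52.2 ms) suggests the salient distractor
# captured attention even though it was task-irrelevant.
```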

Experiment 2. Signal detection

Theory

Signal detection theory (SDT) is an important theory for psychologists. SDT is about making decisions under uncertainty, and there are plenty of examples of this: the decision whether or not to pass the car in front of you on a two-lane road, the decision whether to go to your doctor when you feel a pain, or the decision to say "yes" or "no" to a marriage proposal.

For now, consider the following situation: you are writing a research proposal and are looking for articles you can use. In the "Web of Science" database you find an article that, judging by the abstract, seems interesting, but you are not completely certain.

In other words, the better versed you are in the subject and the clearer the abstract (the summary of the research), the better you will be able to distinguish a relevant article from an irrelevant one.

This aspect is called 'sensitivity'. It is a measure of how well the 'signal' (relevant article) can be distinguished from 'the noise' (non-relevant article). The most commonly used SDT measure of sensitivity is called d' (d prime). The d' thus tells how good you are at distinguishing relevant articles from irrelevant ones, based on the abstract.

(The book goes on to discuss some example articles here.)

However, your decision whether or not to download and read the article also depends on how badly you need an additional article. If you have found only a little relevant material so far, you will quickly decide to print the article and start reading. If you already have a whole pile of articles lying around, you may decide that the effort is too great. This aspect, your tendency to say "yes", is called the 'criterion' or 'bias'. The bias tells us how strong your tendency is to decide that there is a signal (in this example, to read the article).

Sensitivity and criterion are two important concepts within SDT. If you were to keep track of your decisions about the relevance of articles, and we knew whether an article was indeed relevant, you could fill in a table as in Figure 2.1. 'Signal present' in our example means 'article relevant' and 'signal absent' means 'article not relevant'. The responses "yes" and "no" represent your decision whether or not to read the article.

Figure 2.1

Possible decisions based on whether or not the signal is present


(there is a table here)

The four cells in the table can then be interpreted as follows:

Hit: Article is relevant and you decided to read it.

Miss: Article is relevant, but you decided not to read it.

False alarm: Article is not relevant, but you decided to read it.

Correct rejection: Article is not relevant and you decided not to read it.
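Given a record of trials, the four cells can be counted directly. The trial outcomes below are invented for illustration:

```python
# Each trial is (signal_present, said_yes); values invented for illustration.
trials = [
    (True, True), (True, False), (True, True), (True, True),
    (False, True), (False, False), (False, False), (False, False),
]

hits = sum(1 for s, y in trials if s and y)                  # signal, "yes"
misses = sum(1 for s, y in trials if s and not y)            # signal, "no"
false_alarms = sum(1 for s, y in trials if not s and y)      # no signal, "yes"
correct_rejections = sum(1 for s, y in trials if not s and not y)
# Here: 3 hits, 1 miss, 1 false alarm, 3 correct rejections.
```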


Within SDT, the terms 'signal' and 'noise' are used. Imagine that you must detect your name (the signal) against the buzz of background voices (the noise) during a party. You actually have to distinguish between two situations:

1) Only the hum of background noise ('noise')

2) The hum of background noise with your name in it ('noise + signal')

It is assumed that 'noise' alone already causes an 'internal response' of a certain strength in your signal detection system; in the example above this is the 'name recognition system'. It is further assumed within SDT that the probability of a response of a certain strength in that system follows a normal distribution. Sometimes the hum (noise) produces a very weak response, sometimes a very strong one, but usually an 'average' response in that system.

You can see the corresponding distribution (noise) depicted as the yellow curve on the left in Figure 2.2.

The horizontal axis shows the strength of the response of the 'name recognition system' (the 'internal response'). The vertical axis shows the probability of a response of a certain strength in the "name recognition system". The mean strength of the noise response is set to zero (these are z-scores, but that is not very relevant to the story).

If you look at the tails of the noise distribution, you can see that the probability that the name recognition system shows a response strength of more than 3 for noise alone is almost zero. At the same time, even if only background noise is present (noise), there is always some chance that you think you hear your name (signal).

The distribution of the signal, partially overlapping with the noise, is shown as the blue curve in Figure 2.2. If your name (the signal) is actually spoken by one of those present at the party, this increases the internal 'name recognition' response by a certain amount (compare the position of the yellow noise curve with the position of the blue signal + noise curve on the horizontal axis).

Figure 2.2

Distribution of noise (yellow) and signal + noise (blue), with a criterion separating the responses "yes" and "no"

(The figure itself is in the book; the parts below refer to it.)
From the fact that the two distributions overlap, you can conclude that a perfect decision cannot always be made. An internal response of strength 1 on the horizontal axis, for example, can be caused by the noise alone (background noise), but also by the noise plus the signal (your name).

Whether you say "yes" or "no" to the question of whether you heard your name when the internal response was 1 depends on your bias or criterion. Figure 2.2 shows that criterion as a red vertical line on the horizontal axis. To the right of the red line (your criterion) you answer "yes" to the signal (in this case, when the internal response is about 0.85 or higher), and when the internal response is lower (to the left of the red line) you say "no". If you move the red line, and with it the criterion, to the left (i.e. lower on the horizontal axis), you will say "yes" at a lower strength of the internal response: you become more and more inclined to say yes. If you move the line to the right (i.e. higher on the horizontal axis), you say "yes" less often: you become more careful about saying "yes". The concepts 'miss', 'false alarm', 'correct rejection' and 'hit' are also used to explain the figure.

To explain Figure 2.2: for the indicated criterion (with a value of approximately 0.85), if your name is indeed mentioned, you will have 90% hits (the part of the signal + noise curve to the right of the criterion, in blue) and therefore 10% misses (the part to the left of the criterion, in green). That means that in ninety percent of the cases in which your name is mentioned, you correctly decide that it was indeed mentioned, and in ten percent you incorrectly say that it was not. Furthermore, for the specified criterion, if your name is not mentioned, you have 20% false alarms (the part of the noise curve to the right of the criterion, in orange) and thus 80% correct rejections (the part to the left of the criterion, in yellow). So in twenty percent of the cases in which your name is not mentioned you wrongly think you heard your name, and in eighty percent of those cases you rightly decide that it was not mentioned.

The most important parameters within SDT are sensitivity and criterion, which can be read from Figure 2.2 in the following way:

- Sensitivity or d': the distance between the mean of the yellow noise distribution (left) and the mean of the blue signal + noise distribution (right), expressed in units of standard deviation.

- Criterion: the position of the criterion on the horizontal axis.
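In practice, d' and the criterion are computed from the hit rate and the false-alarm rate. A minimal sketch using only the standard library, applying one common convention (noise mean fixed at 0, criterion at minus the z-score of the false-alarm rate) to the 90% hits / 20% false alarms example above:

```python
from statistics import NormalDist

def dprime_and_criterion(hit_rate, fa_rate):
    """Sensitivity d' and criterion location from hit and false-alarm rates.

    With the noise mean fixed at 0, the criterion sits at -z(fa_rate) on the
    internal-response axis, and d' is the distance between the two
    distribution means in standard-deviation units.
    """
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -z(fa_rate)
    return d_prime, criterion

# The worked example from the text: 90% hits and 20% false alarms.
d_prime, criterion = dprime_and_criterion(0.90, 0.20)
# d_prime ≈ 2.12; criterion ≈ 0.84, matching the red line at about 0.85
# in Figure 2.2.
```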

(Everything after this is tables and material related to them; look at them in the book if needed.)

To clarify the notion of sensitivity, Figures 2.3 and 2.4 show two more extreme situations. In the case of Figure 2.3 the signal is so weak that the noise and signal + noise distributions almost completely overlap, so your sensitivity is close to 0. In the case of Figure 2.4 the signal is so strong that you can distinguish almost perfectly between noise and signal + noise: at the specified criterion you have almost 100% hits, 100% correct rejections, 0% misses and 0% false alarms.

The "bias", your tendency to say "yes", is determined by the location of the criterion. If the criterion is positioned at a high value (a strong internal response), you respond 'conservatively': you only say "yes" when you are quite sure. When the criterion is low (a weak internal response), you respond 'liberally' and are thus inclined to say "yes" often, even when the internal response is weak. Where you place the criterion depends on the expected costs and benefits of an unjustified "yes" response (a 'false alarm').

For example, a witness in a murder case may be reluctant to answer "yes" when asked to identify someone in a line-up (the so-called 'Oslo confrontation'), to avoid the risk of an innocent person going to prison. Witnesses will probably be inclined to minimize the number of false alarms. A radiologist, however, will probably decide on the basis of little evidence that there may be a tumor, to avoid the risk of a patient remaining untreated. The radiologist will be inclined to minimize the number of misses.

The criterion also varies from person to person. One person may be very conservative (cautious) and will therefore relatively often give a no-response; the numbers of correct rejections and misses will be relatively high for this person. Another person may be very liberal and will relatively often give a yes-response; this person will have a relatively high number of hits, but also of false alarms. Interestingly, this response pattern can be manipulated by giving rewards: in this way a more conservative or a more liberal response pattern can be induced.

The tables in Figures 2.5 and 2.6 illustrate this principle. (Look at them in the book.)

The payoff matrix in Figure 2.5 provokes a more liberal response pattern: a much higher reward is given for a hit (10 euros) than for a correct rejection (1 euro). In Figure 2.6, a conservative response pattern is encouraged, as a correct rejection is rewarded more highly than a hit. These effects can be reinforced by increasing the differences in rewards for the outcomes (hits, correct rejections, misses and false alarms), or by imposing penalties that require you to pay money for a wrong choice. Another factor that affects the response pattern is the percentage of trials in which a signal is present. If a signal is present in ninety percent of the trials, there are relatively many yes-responses, and the numbers of hits and false alarms are relatively high. If, on the other hand, a signal is present in only ten percent of the trials, the number of no-responses is relatively high, and therefore so are the numbers of correct rejections and misses.
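The effect of such a payoff matrix can be made concrete by computing the expected reward per trial. The 10-euro/1-euro values follow the Figure 2.5 example, but the hit and false-alarm rates below, and the zero payoffs for misses and false alarms, are assumptions for illustration:

```python
def expected_payoff(p_signal, hit_rate, fa_rate,
                    pay_hit=10.0, pay_cr=1.0, pay_miss=0.0, pay_fa=0.0):
    """Expected reward per trial for an observer with the given hit and
    false-alarm rates, under a hit-favoring payoff matrix like Figure 2.5.
    Miss and false-alarm payoffs of 0 are assumed for illustration."""
    p_noise = 1 - p_signal
    return (p_signal * (hit_rate * pay_hit + (1 - hit_rate) * pay_miss)
            + p_noise * ((1 - fa_rate) * pay_cr + fa_rate * pay_fa))

# Same sensitivity, two different criteria (rates invented for illustration):
liberal = expected_payoff(0.5, hit_rate=0.95, fa_rate=0.40)
conservative = expected_payoff(0.5, hit_rate=0.70, fa_rate=0.05)
# When hits pay much more than correct rejections, the liberal criterion
# earns more per trial here (5.05 vs 3.975 euros).
```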

In short: we constantly make choices in all sorts of different situations. According to signal detection theory, not only the strength of the stimulus (e.g. the sound of the tone in a hearing test or the test score of a psychiatric patient) is important for decision-making, but also psychological factors, such as the tendency to make conservative or rather liberal choices. Within signal detection theory, most decision-making takes place under uncertainty. So whether one responds "yes" (signal detected) or "no" (no signal detected) depends on the criterion of the person concerned. Although SDT applies to many life choices (e.g. which university you choose, or whether you recognize a word from the list in a memory experiment), it is mainly used within psychophysics (the study of the relationship between stimulus and sensation), for example to determine whether a person has detected a sensory stimulus or not.

CogLab experiment 2 shows how the presence or absence of a stimulus, combined with your decision criterion, is related to whether or not you report detecting a signal.

Experiment 3. Simon Effect

Theory

The Simon paradigm (see textbook Chun & Most, 2022, p. 114) belongs to the family of "conflict paradigms", which also includes the "Stroop paradigm" and the "Eriksen paradigm". In all of these, irrelevant information affects the speed and accuracy of the response to the relevant information.

In these paradigms there are congruent stimuli, in which the irrelevant information is consistent with the correct response (e.g. the word 'blue' written in blue in the Stroop paradigm), and incongruent stimuli, in which the irrelevant information is not consistent with the correct response (the word 'blue' written in red). A baseline condition can also be implemented, in which neutral irrelevant stimuli are presented that are not related to a possible response. In conflict paradigms, interference (delayed responses) is found on incongruent trials compared to neutral and congruent trials.
