
COGNITIVE PSYCHOLOGY

MODULE 1: INTRODUCTION TO COGNITIVE PSYCHOLOGY

UNIT 1: HISTORY AND EMERGENCE OF COGNITIVE PSYCHOLOGY

Cognitive psychology is that branch of psychology concerned with how people acquire,
store, transform, use, and communicate information (Neisser, 1967). It deals with mental
life. Cognition is the mental activity which describes the acquisition, storage, transformation
and use of knowledge.

The Greek philosopher Aristotle (384–322 BC) examined topics such as perception, memory,
and mental imagery. He also discussed how humans acquire knowledge through experience
and observation (Barnes, 2004; Sternberg, 1999). Aristotle emphasized the importance of
empirical evidence, or scientific evidence obtained by careful observation and
experimentation. His emphasis on empirical evidence and many of the topics he studied are
consistent with twenty-first-century cognitive psychology.

Aristotle’s concept was supported by a number of philosophers but opposed by others, including his teacher Plato. As a result, two views on the nature of mind and knowledge were formed, viz.:

[Diagram: the nature of mind and knowledge divides into two views, empiricism and nativism.]

Empiricism is the view that knowledge comes from an individual’s own experience—i.e., from
the empirical information that people collect from their senses and experiences. Empiricists
recognize individual differences in genetics but emphasize human nature’s malleable, or
changeable, aspects. Empiricists believe people are the way they are, and have the
capabilities they have, largely because of previous learning. One mechanism by which such
learning is thought to take place is through the mental association of two ideas. Aristotle,
George Berkeley, David Hume, and later, James Mill supported this view.

Nativism, by contrast, emphasizes the role of constitutional factors—of native ability—over the role of learning in the acquisition of abilities and tendencies. Nativists attribute differences in individuals’ abilities less to differences in learning than to differences in original, biologically endowed capacities and abilities. Nativists often suggest that some cognitive functions come built in, as part of our legacy as human beings. Plato, René Descartes, and Immanuel Kant supported this view.

A major advance toward the rise of cognitive psychology came in 1879, when Wilhelm Wundt established the first laboratory of psychology at Leipzig, Germany, to study the science of the mind. He also contributed to the development of the structuralist school of psychology. He wanted to discover the laws and principles that explained our immediate conscious experience. In particular, Wundt thought any conscious thought or idea resulted from a combination of sensations that could be defined in terms of exactly four properties: mode (for example, visual, auditory, tactile, olfactory), quality (such as colour, shape, texture), intensity, and duration. Wundt proposed that psychology should study mental processes, using a technique called introspection. Wundt was a pioneer in the study of many cognitive phenomena; he was the first to approach cognitive questions scientifically and the first to try to design experiments to test cognitive theories.

Another important German psychologist, named Hermann Ebbinghaus (1850–1909), focused on factors that influence human memory. He constructed more than 2,000 nonsense syllables (for instance, DAK) and tested his own ability to learn these stimuli. Meanwhile, in the United States, similar research was being conducted by psychologists such as Mary Whiton Calkins (1863–1930). For example, Calkins reported a memory
phenomenon called the recency effect (Madigan & O’Hara, 1992). The recency effect refers
to the observation that our recall is especially accurate for the final items in a series of
stimuli.

Another crucial figure in the history of cognitive psychology was an American named
William James. William James considered the subject matter of psychology to be our
experience of external objects. He assumed that the way the mind works has a great deal to
do with its function—the purposes of its various operations. Hence the term functionalism
was applied to his approach. Fellow functionalists such as John Dewey and Edward L.
Thorndike, for example, shared James’s conviction that the most important thing the mind
did was to let the individual adapt to her or his environment.

Further, a radical turn came with the advent of behaviourism in the 20th century. According to the behaviourist approach, psychology must focus on objective, observable reactions to stimuli in the environment (Pear, 2001). Behaviourism contributed significantly to the research methods of contemporary cognitive psychology. Eminent behaviourists such as Edward Tolman explained latent learning and the cognitive map, while Wolfgang Köhler elucidated the process of insight learning.

Another important event occurred in 1932, when Sir Frederic Bartlett rejected the then-popular view that memory and forgetting can be studied by means of nonsense syllables, as had been advocated by Ebbinghaus during the previous century. He introduced the concepts of reconstructive memory and the schema.

An additional school that facilitated the development of the field of cognitive psychology was Gestalt psychology. Gestalt psychologists, who studied mainly perception and problem solving, believed an observer did not construct a coherent perception from simple, elementary sensory aspects of an experience but instead apprehended the total structure of an experience as a whole. In other words, they believed that the whole is more than the sum of its parts. They valued the unity of psychological phenomena. Gestalt psychologists provided the framework for the concept of problem solving in psychology.

Later, the concept of genetic epistemology propounded by Jean Piaget helped to shape modern cognitive psychology. Piaget sought to describe the intellectual structures underlying cognitive experience at different developmental points through an approach he called genetic epistemology. Piaget’s observations of infants and children convinced him that a child’s intellectual structures differ qualitatively from those of a mature adult. He elaborated some of the most important concepts in cognitive psychology, such as the mental schema, accommodation, assimilation, and organization. Piaget believed that children in different stages of cognitive development used different mental structures to perceive, remember, and think about the world.

The investigations into individual differences in human cognitive abilities by Sir Francis Galton and his followers are another milestone in the emergence of cognitive psychology.
Galton (1883/1907) studied a variety of cognitive abilities, in each case focusing on ways of
measuring the ability and then noting its variation among different individuals. Among the
abilities he studied (in both laboratory and naturalistic settings) was mental imagery. His
invention of tests and questionnaires to assess mental abilities inspired later cognitive
psychologists to develop similar measures.

Despite the early attempts to define and study mental life, psychology, especially American
psychology, came to embrace the behaviourist tradition in the first five decades of the
1900s. A number of historical trends, both within and outside academia, came together in
the years during and following World War II to produce what many psychologists think of as
a “revolution” in the field of cognitive psychology. This cognitive revolution, a new series of
psychological investigations, was mainly a rejection of the behaviourist assumption that
mental events and states were beyond the realm of scientific study or that mental
representations did not exist. The major by-product of this revolution is the person–
machine system. It is the idea that machinery operated by a person must be designed to
interact with the operator’s physical, cognitive, and motivational capacities and limitations.

At about the same time, developments in the field of linguistics, the study of language,
made clear that people routinely process enormously complex information. Work by linguist
Noam Chomsky revolutionized the field of linguistics, and both linguists and psychologists
began to see the central importance of studying how people acquire, understand, and
produce language.

Another strand of the cognitive revolution came from developments in neuroscience, the
study of the brain-based underpinnings of psychological and behavioural functions. A major
debate in the neuroscience community had been going on for centuries, all the way back to
Descartes, over the issue of localization of function.

Such is the history and emergence of cognitive psychology.

UNIT 2: COGNITIVE PSYCHOLOGY- AN INTERDISCIPLINARY FIELD

Cognitive psychology has had an enormous influence on the discipline of psychology. Most cognitive psychologists prior to the 1980s did indeed conduct research in artificial laboratory environments, often using tasks that differed from daily cognitive activities. A by-product of the cognitive revolution is that scholars from categorically discrete areas—including linguistics, computer science, developmental psychology, and cognitive psychology—came together and focused on common interests such as the structure and processes of cognitive abilities. Collectively, these individuals created a united front to defeat behaviourism. The various interdisciplinary fields in cognitive psychology are elaborated below.
Cognitive neuroscience

Cognitive neuroscience combines the research techniques of cognitive psychology with various methods of assessing the structure and function of the brain. Researchers have discovered which structures in the brain are activated when people perform a variety of cognitive tasks. Psychologists now use neuroscience techniques to explore the kinds of cognitive processes that we use in our interactions with other people. This is a new discipline called social cognitive neuroscience. Cognitive psychologists use various neuroscience techniques, such as the study of brain lesions, positron emission tomography (PET), functional magnetic resonance imaging (fMRI), the event-related potential (ERP) technique, and the single-cell recording technique.

Artificial intelligence

Artificial intelligence (AI), a branch of computer science, seeks to explore human cognitive processes by creating computer models that accomplish the same tasks that humans do (Boden, 2004; Chrisley, 2004). Researchers in artificial intelligence have tried to explain how you recognize a face, create a mental image, or write a poem, as well as hundreds of additional cognitive accomplishments (Boden, 2004; Farah, 2004; Thagard, 2005). Cognitive psychologists use various artificial intelligence techniques in their research, such as the computer metaphor, pure AI, computer simulation, and the parallel distributed processing approach.

Cognitive science

Cognitive psychology is part of a broader field known as cognitive science. Cognitive science
is a contemporary field that tries to answer questions about the mind. As a result, cognitive
science includes three disciplines we’ve discussed so far—cognitive psychology,
neuroscience, and artificial intelligence. It also includes philosophy, linguistics,
anthropology, sociology, and economics (Sobel, 2001; Thagard, 2005). It is a field that has contributed much to cognitive psychology.

UNIT 3: CONTRIBUTIONS OF VARIOUS SCHOOLS OF PSYCHOLOGY TO COGNITIVE PSYCHOLOGY (brief)
Structuralism

Structuralists defined psychology as the science that studies conscious experience. The major contribution of structuralism to cognitive psychology is the technique of introspection. Introspection (objective introspection) is the process of examining and measuring one’s own thoughts and mental activities. Wundt was influenced by John Locke’s view (tabula rasa).

Functionalism

William James developed the first psychological laboratory in America at Harvard University (1890) and founded the school of functionalism. James considered the subject matter of psychology
to be our experience of external objects. Perhaps James’s most direct link with modern
cognitive psychology is his view of memory, which comprises structure and process of
memory.

Behaviourism

Skinner believed in the existence of images, thoughts and the like and agreed they were
proper objects of study, but he objected to treating mental events and activities as
fundamentally different from behavioural events and activities. Functional analysis was one
technique introduced by Skinner to analyse the relationship between the stimuli and
behaviours. Behaviourism has made important contributions to the field of cognitive psychology. The concepts of the cognitive map and latent learning developed by Edward Tolman have widened the field of cognitive psychology.

Gestalt Psychology

Gestalt ideas are part of the study of cognitive psychology. Gestalt psychologists believe that the whole is more than the sum of its parts. They studied mainly perception and problem solving, which are some of the main concepts in cognitive psychology. This school has also contributed to concepts of learning, memory, and thought processes. The Gestalt psychologists rejected structuralism, functionalism, and behaviourism as offering incomplete accounts of psychological and, in particular, cognitive experiences. They believed that the mind imposes its own structure and organization on stimuli and, in particular, organizes perceptions into wholes rather than discrete parts.

Humanism

The school of humanism has contributed broadly to the field of cognitive psychology. The qualities of creativity, competence, and problem-solving ability in human beings are elucidated by this school.

UNIT 4: INTRODUCTION TO MODELS OF COGNITIVE PSYCHOLOGY: INFORMATION PROCESSING, CONNECTIONISM

Information Processing Approach

During the 1950s, communication science and computer science began to develop and gain
popularity. Researchers then began speculating that human thought processes could be
analysed from a similar perspective (Leahey, 2003; MacKay, 2004). Two important
components of the information-processing approach are that (a) a mental process can be
compared with the operations of a computer, and (b) a mental process can be interpreted
as information progressing through the system in a series of stages, one step at a time.

Central to the information-processing approach is the idea that cognition can be thought of as information passing through a system. Here, the brain is considered the hardware and cognitive processes the software. The information-processing approach has four major assumptions:

1. Humans as symbol manipulators.
Information-processing theorists assume that people are general-purpose symbol manipulators, i.e., they can perform astonishing cognitive acts by applying only a few mental operations to symbols. Information is then stored symbolically, and the way it is coded and stored greatly affects how easy it is to use later.
2. Data: information from the environment and processes.
This assumption concerns the memory stores where information is held for possible later use and the different processes that operate on the information at different points or transfer it from store to store. Recognition, detection, recoding, and retrieval are some processes in human memory storage.
3. Human thought as a system of interrelated capabilities.
Different individuals have different cognitive capacities: different attention spans, memory capacities, and language skills, to name a few. Information-processing theorists try to find the relationships between these capacities to explain how individuals go about performing specific cognitive tasks.
4. Humans as active information seekers and scanners.
Information is stored in the memory of humans in three different ways. Sensory
memory is a memory system that retains representations of sensory input for brief
periods of time. Short-term memory is the memory system that holds information
we are processing at the moment. Our third memory system, long-term memory
allows us to retain vast amounts of information for very long periods of time.

This approach is rooted in structuralism. Its proponents hold that information processing is sequential, and they rely on the computer metaphor. Finally, one of the major information-processing models is the Atkinson-Shiffrin model, or modal model, of memory.
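To make the serial-stage idea concrete, here is a minimal sketch of the modal model as a pipeline of three stores. This is an illustration only: the class, the capacity values, and the transfer rules below are simplifying assumptions, not Atkinson and Shiffrin's actual formulation.

```python
# Minimal, illustrative sketch of serial stage processing in the modal model.
# The capacities and transfer rules are assumptions for demonstration only.

from collections import deque

class ModalModel:
    def __init__(self, stm_capacity=7):           # capacity of ~7 items used as a stand-in
        self.sensory = deque(maxlen=20)           # sensory register: larger but fleeting
        self.stm = deque(maxlen=stm_capacity)     # short-term store: old items displaced when full
        self.ltm = set()                          # long-term store: effectively unbounded

    def sense(self, stimulus):
        self.sensory.append(stimulus)             # raw input enters the sensory register

    def attend(self):
        # Attention moves items from the sensory register into short-term memory,
        # one at a time: the "series of stages, one step at a time" assumption.
        while self.sensory:
            self.stm.append(self.sensory.popleft())

    def rehearse(self):
        # Rehearsal copies short-term contents into the long-term store.
        self.ltm.update(self.stm)

model = ModalModel()
for syllable in ["DAK", "BOK", "LUN"]:            # nonsense syllables, as in Ebbinghaus
    model.sense(syllable)
model.attend()
model.rehearse()
print(sorted(model.ltm))                          # items that survived all three stages
```

Each step here runs strictly in sequence, which is what distinguishes this model from the parallel operation of the connectionist approach below.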

Connectionist Approach

The connectionist approach, or parallel distributed processing (PDP), is another paradigm of cognitive psychology. Connectionism depicts cognition as a network of connections among simpler processing units (McClelland, 1988). Connectionism seeks to replace the computer metaphor of the information-processing framework with a brain metaphor. The connectionist approach assumes that cognitive processes occur in parallel, i.e., many at the same time. The connectionist approach is also called the neural network model because the processing units are sometimes compared to neurons.

Each unit is connected to other units in a large network. Each unit has some level of
activation at any particular moment in time. The exact level of activation depends on the
input to that unit from both the environment and other units to which it is connected.
Connections between two units have weights, which can be positive or negative. A
positively weighted connection causes one unit to excite, or raise the level of activation of
units to which it is connected; a negatively weighted connection has the opposite effect,
inhibiting or lowering the activation of connected units.

The connectionist framework allows for a wide variety of models that can vary in the
number of units hypothesized, number and pattern of connections among units, and
connection of units to the environment. All connectionist models share the assumption,
however, that there is no need to hypothesize a central processor that directs the flow of
information from one process or storage area to another. Instead, different patterns of
activation account for the various cognitive processes (Dawson, 1998). Knowledge is not
stored in various storehouses but within connections between units. Learning occurs when
new connective patterns are established that change the weights of connections between
units.
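As an illustration of these mechanics, here is a minimal sketch of a single connectionist unit. It is not any specific published model: the weights, inputs, and the logistic squashing function are assumptions chosen for demonstration.

```python
import math

# Minimal sketch of one connectionist unit. The weights and inputs are made up,
# and the logistic function is a common simplifying choice, not a fixed standard.

def unit_activation(input_activations, connection_weights):
    # Net input is the weighted sum of incoming activations:
    # positive weights excite the unit, negative weights inhibit it.
    net_input = sum(a * w for a, w in zip(input_activations, connection_weights))
    # Squash the net input into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-net_input))

# Three connected units feed this one: two excitatory links, one inhibitory.
incoming = [0.9, 0.4, 0.8]
weights = [0.7, 0.5, -0.6]    # learning would adjust these weights over time

print(unit_activation(incoming, weights))
```

In a full network, many such units update at once, which is what gives the approach its parallel, distributed character.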

Connectionist approach has some major assumptions:

1. The cognitive system is made up of billions of interconnected nodes (neurons) that together form complex networks.
2. Nodes within a network can be activated, and the pattern of activation corresponds to conscious experience. The stronger the connection between nodes, the more readily they are activated together.
3. Networks operate in parallel.
4. Processing of a single task is distributed throughout the brain and not in one specific
location.
5. The connection between two nodes is modelled on the way neurons interact. The effect of one neuron on another can be excitatory or inhibitory.
6. Connectionist approach hypothesizes that knowledge is not stored in various
storehouses but within connections between units. Learning occurs when new
connections are established.
The fundamental premise of connectionism is that individual neurons do not transmit large amounts of symbolic information. Instead, they compute by being appropriately connected to large numbers of similar units. This is in sharp contrast to the conventional computer model of intelligence. Feldman
and Ballard (1982), in an early description of connectionism, argued that this approach is
more consistent with the way the brain functions than an information-processing approach.

Comparison

Information-processing model:
• Computer metaphor.
• Drawn from structuralism.
• Serial processing.
• Explains cognition at the symbolic level.
• Based on computer science.
• Assumes the presence of a central processor, like the brain.
• Compared to a computer system.
• Cognition is thought of as information passing through a system.

Connectionist approach:
• Brain metaphor.
• Drawn from functionalism.
• Parallel processing.
• Explains cognition at the subsymbolic level.
• Based on cognitive neuropsychology and cognitive neuroscience.
• Assumes no central processor, only weighted connections.
• Compared to the nervous system.
• Cognition is a network of connections among simpler processing units.

UNIT 5: LIMITATIONS OF LABORATORY STUDIES AND IMPORTANCE OF ECOLOGICAL VALIDITY

Many experiments fail to fully capture real-world phenomena in the experimental task or research design. The laboratory setting or the artificiality or formality of the task may prevent research participants from behaving normally. The kinds of tasks amenable to experimental study may not be those most important or most common in everyday life. Experiments therefore sometimes risk studying phenomena that bear little relation to people’s real-world experience.

Observational studies have the advantage that the things studied occur in the real world and not just in an experimental laboratory. Psychologists call this property ecological validity. The ecological approach propounded by J. J. Gibson (1979) holds that all cognitive activities are shaped by the culture and the context in which they occur. This tradition relies less on laboratory experiments or computer simulations and more on naturalistic observation and field studies to explore cognition. Gibson held that perception consists of the direct acquisition of information from the environment. This makes it necessary to study cognitive activities in a natural setting.
MODULE 2: ATTENTION

UNIT 1: MODEL OF ATTENTION: FUNCTIONS OF EXECUTIVE, PRECONSCIOUS AND CONSCIOUS PROCESSING, ALERTING MECHANISM (IPA MODEL)

Attention is a concentration of mental activity that allows you to take in a limited portion of the vast stream of information available from both your sensory world and your memory (Fernandez-Duque & Johnson, 2002; Styles, 2006; Ward, 2004). In other words,
attention is the concentration of mental effort on sensory or mental events. Many
contemporary ideas about attention are based on the premise that an information-
processing system’s capacity to handle the flow of input is determined by the limitations of
that system.

Alerting Mechanism

Several regions of the brain are responsible for attention, including some structures that are
below the surface of the cerebral cortex (Just et al., 2001; Posner & Rothbart, 2007b).
Michael Posner and Mary Rothbart (2007a, 2007b) propose that three systems in the cortex
manage different aspects of attention:

(1) the orienting attention network,

(2) the executive attention network, and

(3) the alerting attention network.

This third system, the alerting attention network, is responsible for making you sensitive
and alert to new stimuli; it also helps to keep you alert and vigilant for long periods of time
(Posner & Rothbart, 2007a, 2007b).

UNIT 2: SELECTIVE ATTENTION: FEATURES OF BOTTOM-UP AND TOP-DOWN PROCESSING


The term selective attention refers to the fact that we usually focus our attention on one or a few tasks or events rather than on many. In other words, selective attention concerns how, out of the large number of stimuli present at any given time, only some are selected into our awareness. Selective attention has two main aspects, viz., top-down and bottom-up processing.

The term bottom-up (or data-driven) processing essentially means that the perceiver starts with small bits of information from the environment that he combines in various ways to form a percept. Here, attention is given only to the information in the distal stimulus. Bottom-up processing emphasizes the importance of the stimulus in object recognition. Specifically, the physical stimuli from the environment are registered on the sensory receptors. The combination of simple, bottom-level features allows us to recognize more complex, whole objects. In other words, bottom-up processing is perception that progresses from recognizing and processing information about individual components of a stimulus to perceiving the whole. Context effects and expectation effects are the two limitations of bottom-up processing.

In top-down (also called theory-driven or conceptually driven) processing, the perceiver’s expectations, theories, or concepts guide the selection and combination of the information
in the pattern recognition process. They are directed by expectations derived from context
or past learning or both. Our expectations at the higher (or top) level of visual processing
will work their way down and guide our early processing of the visual stimulus in top-down
processing. Top-down processing is especially strong when stimuli are incomplete or
ambiguous. Top-down processing is also strong when a stimulus is registered for just a
fraction of a second. In simple words, we often proceed in accordance with what our past
experience tells us to expect, and therefore we don’t always analyse every feature of most
stimuli we encounter. This is known as top-down processing.

Top-down and bottom-up processing occur simultaneously and interact with each other.

Bottom-up processing:
• Processes the fundamental characteristics of a stimulus.
• Also known as data-driven processing.

Top-down processing:
• Processes stimuli on the basis of expectations and past experiences.
• Also known as theory-driven or conceptually driven processing.

UNIT 3: AUTOMATICITY, MULTI TASKING AND DIVISION OF ATTENTION.

Automaticity, multi-tasking and division of attention are the phenomena that take place
through automatic processing.

Highly practiced activities become automatic and thereby require less attention to perform
than do new or slightly practiced activities. This is the basis of automatic processing. Posner and Snyder (1975) offered three criteria for a cognitive process to be called automatic:

(1) It must occur without intention;

(2) it must occur without involving conscious awareness; and

(3) it must not interfere with other mental activity.

For example, while driving, a person simultaneously steers the car, follows the traffic rules, and talks to a companion. Schneider and Shiffrin (1977) examined automatic processing of information

under well-controlled laboratory conditions. They asked participants to search for certain
targets, either letters or numbers, in different arrays of letters or numbers, called frames.
For example, a participant might be asked to search for the target J in an array of letters: B
M J K T. Previous work had suggested that when people search for targets of one type (such
as numbers) in an array of a different type (such as letters), the task is easy. Numbers
against a background of letters seem to “pop out” automatically. In fact, the number of
nontarget characters in an array, called distractors, makes little difference if the distractors
are of a different type from the targets. So, finding a J among the stimuli 1, 6, 3, J, 2 should
be about as easy as finding a J among the stimuli 1, J, 3. Finding a specific letter against a
background of other letters seems much harder.

Schneider and Shiffrin (1977) had two conditions in their experiment. In the varied-mapping
condition, the set of target letters or numbers, called the memory set, consisted of one or
more letters or numbers, and the stimuli in each frame were also letters or numbers.
Targets in one trial could become distractors in subsequent trials. In the consistent-mapping
condition, the target memory set consisted of numbers and the frame consisted of letters,
or vice versa. Stimuli that were targets in one trial were never distractors in other trials. The
task in this condition was expected to require less capacity. In addition, Schneider and
Shiffrin (1977) varied three other factors to manipulate the attentional demands of the task.
They were frame size (the number of letters and numbers presented in each display), frame
time (the length of time each array was displayed) and memory set (the number of targets
the participant was asked to look for in each trial).

The results showed that in the consistent-mapping condition, participants’ performance varied only with the frame time, not with frame size or memory set. In the varied-mapping condition, participants’ performance in detecting the target depended on all three variables: frame size, frame time, and memory set.

Thus, Schneider and Shiffrin (1977) concluded that automatic processing is used for easy tasks and with familiar items. It operates in parallel (meaning it can operate simultaneously with other processes) and does not strain capacity limitations.

Note: Attending to more than one act at a time is known as division of attention. Multi-
tasking is an example of division of attention.

[Refer controlled processing in Galotti]

Automatic processing:
• Parallel process.
• Not intentional.
• Occurs without conscious awareness.
• Does not interfere with other mental activity.
• Used for easy and familiar tasks.

Controlled processing:
• Serial process.
• Requires attention.
• Under conscious control.
• Limited by mental capacity.
• Used for difficult and unfamiliar tasks.

Similarity: both are at work in divided attention.

UNIT 4: MAJOR CONCEPTS IN ATTENTION: BOTTLENECK & SPOTLIGHT CONCEPTS, EARLY AND LATE SELECTION

Bottleneck model

Early concepts in attention emphasized that people are extremely limited in the amount of
information that they can process at any given time. Bottleneck theories proposed a similar
narrow passageway in human information processing. In other words, this bottleneck limits
the quantity of information to which we can pay attention. Thus, when one message is
currently flowing through a bottleneck, the other messages must be left behind. Therefore,
if the amount of information available at any given time exceeds capacity, the person uses
an attentional filter to let some information through and block the rest. Only material that
gets past the filter can be analyzed later for meaning. Researchers proposed many variations of this bottleneck theory, the most prominent being Broadbent’s filter model (1958) and Treisman’s attenuation theory (1964). These models held that attention is a serial process.

Moray (1959) discovered one of the most famous phenomena bearing on this, called the “cocktail party effect”.

Spotlight Approach
Attention is a spotlight that highlights whatever information the system is currently focused
on (Johnson & Dark, 1986). Accordingly, psychologists are concerned less with determining
what information can’t be processed (bottle neck metaphor) than with studying what kinds
of information people choose to focus on (spotlight metaphor).

The spotlight approach holds that attention can be moved from one area to another, i.e., it can be directed and redirected to various kinds of incoming information. Attention, like a spotlight, has fuzzy boundaries.

[Refer Galotti]

Early and Late Selection Theory

Deutsch and Deutsch (1963) proposed a theory called the late-selection theory. Later elaborated and extended by Norman (1968), this theory holds that all messages are routinely processed for at least some aspects of meaning—that the selection of which message to respond to happens “late” in processing. In late-selection theory, recognition of
familiar objects proceeds unselectively and without any capacity limitations. Note that
filter theory hypothesizes a bottleneck—a point at which the processes a person can bring
to bear on information are greatly limited—at the filter. Late-selection theory also describes
a bottleneck but locates it later in the processing, after certain aspects of the meaning have
been extracted. All material is processed up to this point, and information judged to be
most “important” is elaborated more fully. This elaborated material is more likely to be
retained; unelaborated material is forgotten. A message’s “importance” depends on many
factors like context, personal significance of certain kinds of content and level of alertness.
At low levels of alertness only very important messages capture attention. At higher levels
of alertness, less important messages can be processed.

Pashler (1998) argues that the bulk of the evidence suggests it is undeniably true that
information in the unattended channel sometimes receives some processing for meaning.
At the same time, it appears true that most results thought to demonstrate late selection
could be explained in terms of either attentional lapses (to the attended message) or special
cases of particularly salient or important stimuli. In any event, it seems unlikely that
unattended messages are processed for meaning to the same degree as are attended
messages.

UNIT 5: THEORIES OF ATTENTION: FILTER MODEL (BROADBENT), ATTENUATION THEORY (TREISMAN), MULTIMODE THEORY (JOHNSTON & HEINZ), RESOURCE & CAPACITY ALLOCATION MODEL (KAHNEMAN), SCHEMA THEORY (NEISSER)

Broadbent’s Filter Model

Donald Broadbent (1958) proposed the filter model of attention. He states that there are limits on how much information a person can attend to at any given time. Therefore, if the amount of information available at any given time exceeds capacity, the person uses an attentional filter to let some information through and block the rest. Only material that gets past the filter can be analysed later for meaning. It is a single-channel theory, based on the idea that information processing is restricted by channel capacity. Broadbent argued that messages traveling along a specific nerve can differ either:

(a) according to which of the nerve fibres they stimulate or

(b) according to the number of nerve impulses they produce.

Thus, when several nerve fibres fire at the same time, several sensory messages may arrive
at the brain simultaneously. In Broadbent’s model these would be processed through a
number of parallel sensory channels. Further processing of information would then occur
only after the signal was attended to and passed on through a selective filter into a limited-
capacity channel. Broadbent (1958) believed that what is limited is the amount of
information we can process at any given time. Two messages that contain little information,
or that present information slowly, can be processed simultaneously. Broadbent postulated
that, in order to avoid an overload in this system, the selective filter could be switched to
any of the sensory channels. In an early experiment, Broadbent (1954) used the dichotic
listening task to test his theory.

Other investigators soon reported results that contradicted filter theory. Moray (1959)
discovered one of the most famous, called the “cocktail party effect”: Shadowing
performance is disrupted when one’s own name is embedded in either the attended or the
unattended message.

Filter theory predicts that all unattended messages will be filtered out—that is, not processed for recognition or meaning—which is why participants in dichotic listening tasks can recall little information about such messages. The cocktail party effect shows something
completely different: People sometimes do hear their own name in an unattended message
or conversation, and hearing their name will cause them to switch their attention to the
previously unattended message. Moray (1959) concluded that only “important” material
can penetrate the filter set up to block unattended messages. Presumably, messages such
as those containing a person’s name are important enough to get through the filter and be
analyzed for meaning.

Treisman (1960) discovered a phenomenon that argues against this alternative interpretation of the cocktail party effect. She played participants two messages, each
presented to a different ear, and asked the participants to shadow one of them. At a certain
point in the middle of the messages, the content of the first message and the second
message was switched so that the second continued the first and vice versa. Immediately
after the two messages “switched ears,” many participants repeated one or two words from
the “unattended ear.” In the example shown, for instance, a participant shadowing message
1 might say, “At long last they came to a fork in the road but did not know which way to go.
The trees on the left side of refers to the relationships . . . ,” with the italicized words
following the meaning of the first part of message 1 but coming from the unattended
channel (because they come after the switch point). If participants processed the
unattended message only when their attentional filter “lapsed,” it would be very difficult to
explain why these lapses always occurred at the point when the messages switched ears. To
explain this result, Treisman reasoned that participants must be basing their selection of
which message to attend to at least in part on the meaning of the message— a possibility
that filter theory does not allow for.

The issue of whether information from the unattended channel can be recognized was
taken up by Wood and Cowan (1995). They showed that the attentional shift to the
unattended message was unintentional and completed without awareness. Indeed, A. R. A.
Conway, Cowan, and Bunting (2001) showed that a lower working-memory capacity means
less ability to actively block the unattended message. In other words, people with low
working-memory spans are less able to focus. Given her research findings, psychologist
Anne Treisman (1960) proposed a modified filter theory, one she called attenuation theory.

Treisman’s Attenuation Theory

Anne Treisman (1960) proposed a modified filter theory, called attenuation theory. She held
that some meaningful information in unattended messages might still be available, even if
hard to recover. Incoming messages are subjected to three kinds of analysis. In the first, the
message’s physical properties, such as pitch or loudness, are analyzed. The second analysis
is linguistic, a process of parsing the message into syllables and words. The third kind of
analysis is semantic, processing the meaning of the message.

Some meaningful units (such as words or phrases) tend to be processed quite easily. Words
that have subjective importance (such as your name) or that signal danger (“Fire!” “Watch
out!”) have permanently lowered thresholds; that is, they are recognizable even at low
volumes. You might have noticed yourself that it is hard to hear something whispered
behind you, although you might recognize your name in whatever is being whispered.
Words or phrases with permanently lowered thresholds require little mental effort by the
hearer to be recognized. Thus, according to Treisman’s theory, the participants in Moray’s
experiments heard their names because recognizing their names required little mental
effort.
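As a hypothetical sketch of this threshold idea (the attenuation factor, threshold values, and function below are invented for illustration, not Treisman's actual formulation), attenuation can be modelled as unattended signals arriving at reduced strength, with a word recognized whenever its effective signal still exceeds that word's threshold:

```python
# Hypothetical sketch of attenuation theory: unattended messages are weakened,
# not blocked, so words with permanently lowered thresholds can still get through.
# All numeric values here are illustrative assumptions.

ATTENUATION = 0.3          # unattended channel keeps only a fraction of its strength

thresholds = {
    "your_name": 0.2,      # permanently lowered threshold: recognized with ease
    "fire": 0.2,           # danger signal: also permanently lowered
    "chair": 0.8,          # ordinary word: needs a strong, attended signal
}

def recognized(word, signal_strength, attended):
    # Unattended input is attenuated rather than removed entirely.
    effective = signal_strength if attended else signal_strength * ATTENUATION
    return effective >= thresholds[word]

print(recognized("chair", 1.0, attended=False))      # False: attenuated below threshold
print(recognized("your_name", 1.0, attended=False))  # True: survives attenuation
```

This captures why Moray's participants heard their own names: the attenuated signal was still above the name's permanently lowered threshold.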

Only a few words have permanently lowered thresholds. However, the context of a word in a message can temporarily lower its threshold. A primed stimulus, i.e., one that is especially ready to be recognized, requires little effort to hear and process even if it occurs in the unattended channel.

MacKay (1973) provided evidence suggesting that at least some meaningful aspects of the unattended message are processed. Pashler (1998), however, noted that the effect
reported by MacKay (1973) is greatly diminished if the message on the unattended channel
consists of a series of words instead of just one. This raises the possibility that if the
unattended message consists of one word only, the physical sound of that word temporarily
disrupts the attention being paid to the attended message, thus perhaps briefly “resetting”
the attentional filter.

According to Treisman (1964), people process only as much as is necessary to separate the
attended from the unattended message. If the two messages differ in physical
characteristics, then we process both messages only to this level (attenuating filter level)
and easily reject the unattended message. If the two messages differ only semantically, we
process both through the level of meaning (hierarchy of analysers) and select which message to attend to based on this analysis. Processing for meaning takes more effort,
however, so we do this kind of analysis only when necessary. Messages not attended to are
not completely blocked but rather weakened in much the way that turning down the
volume weakens an audio signal from a stereo. Parts of the message with permanently
lowered thresholds (“significant” stimuli) can still be recovered, even from an unattended
message.

Broadbent’s filter theory:
• Allows only one kind of analysis, based on physical properties.
• Holds that unattended messages are blocked and filtered out.
• Selection is based on the physical characteristics of the message.

Treisman’s attenuation theory:
• Allows many different kinds of analyses of all messages.
• Holds that unattended messages are weakened, but the information they contain is still available.
• Processing is determined by thresholds, which reflect meaning and importance.

The Schema Theory

Ulric Neisser (1976) offered a completely different conceptualization of attention, called schema theory. He argued that we don’t filter, attenuate, or forget unwanted material.
Instead, we never acquire it in the first place. Neisser compared attention to apple picking.
The material we attend to is like apples we pick off a tree—we grasp it. Unattended material
is analogous to the apples we don’t pick.

Neisser believes that unattended information is simply left out of our cognitive processing. Neisser and Becklen (1975) performed a relevant study of visual attention. They
created a “selective looking” task by having participants watch one of two visually
superimposed films. One film showed a “hand game,” two pairs of hands playing a familiar
hand-slapping game many of us played as children. The second film showed three people
passing or bouncing a basketball, or both. Participants in the study were asked to “shadow”
(attend to) one of the films and to press a key whenever a target event (such as a hand slap
in the first film or a pass in the second film) occurred.

Neisser and Becklen (1975) found, first, that participants could follow the correct film rather
easily, even when the target event occurred at a rate of 40 per minute in the attended film.
Participants ignored occurrences of the target event in the unattended film. Participants
also failed to notice unexpected events in the unattended film. For example, participants
monitoring the ballgame failed to notice that in the hand game film, one of the players
stopped hand slapping and began to throw a ball to the other player. Neisser (1976)
believed that skilled perceiving rather than filtered attention explains this pattern of
performance.

Neisser and Becklen (1975, pp. 491–492) argued that once picked up, the continuous and
coherent motions of the ballgame (or of the hand game) guide further pickup; what is seen
guides further seeing. It is implausible to suppose that special “filters” or “gates,” designed
on the spot for this novel situation, block the irrelevant material from penetrating deeply
into the “processing system.” The ordinary perceptual skills of following visually given
events “are simply applied to the attended episode and not to the other.”

Simply put, schema theory states that all knowledge is organized into units, and information is stored within these units of knowledge, or schemata. A schema, then, is a generalized description or conceptual system for understanding knowledge—how knowledge is represented and how it is used. According to this theory, schemata represent knowledge about concepts, objects and the relationships they have with other objects, situations, events and sequences of events, and actions and sequences of actions.

Resource and Capacity Allocation Model


Daniel Kahneman (1973) presented a slightly different model for what attention is. He
viewed attention as a set of cognitive processes for categorizing and recognizing stimuli. The
more complex the stimulus, the harder the processing, and therefore the more resources
are engaged. However, people have some control over where they direct their mental
resources: They can often choose what to focus on and devote their mental effort to.

Essentially, this model depicts the allocation of mental resources to various cognitive tasks.
[An analogy could be made to an investor depositing money in one or more of several
different bank accounts—here, the individual “deposits” mental capacity to one or more of
several different tasks. Many factors influence this allocation of capacity, which itself
depends on the extent and type of mental resources available.] The availability of mental
resources, in turn, is affected by the overall level of arousal, or state of alertness.

Kahneman (1973) argued that one effect of being aroused is that more cognitive resources
are available to devote to various tasks. Paradoxically, however, the level of arousal also
depends on a task’s difficulty. This means we are less aroused while performing easy tasks,
such as adding 2 and 2, than we are when performing more difficult tasks, such as
multiplying a Social Security number by pi. We therefore bring fewer cognitive resources to
easy tasks, which, fortunately, require fewer resources to complete.

Arousal thus affects our capacity (the sum total of our mental resources) for tasks. But the model still needs to specify how we allocate our resources to all the cognitive tasks that confront us. In diagrams of Kahneman’s model, a region labeled “allocation policy” governs this. The policy is affected by an individual’s enduring dispositions (for example, your preference for certain kinds of tasks over others), momentary intentions (your vow to find your meal card right now, before doing anything else!), and evaluation of the demands on one’s capacity (the knowledge that a task you need to do right now will require a certain amount of your attention).

This model predicts that we pay more attention to things we are interested in, are in the
mood for, or have judged important. In Kahneman’s (1973) view, attention is part of what
the layperson would call “mental effort.” The more effort expended, the more attention
we are using.
A related factor is alertness as a function of time of day, hours of sleep obtained the night before, and so forth. Sometimes we can attend to more tasks with greater concentration. At other times, such as when we are tired and drowsy, focusing is hard. Effort is only one factor that influences performance on a task. Greater effort or concentration results in better performance of some tasks—those that require resource-limited processing, performance of which is constrained by the mental resources or capacity allocated to it (Norman & Bobrow, 1975). Taking a midterm is one such task. Performance on other tasks is said to be data limited, meaning that it depends entirely on the quality of the incoming data, not on mental effort or concentration.

The Multi-mode Theory

The multimode theory combines both physical and semantic inputs into one theory. Within this model, attention is assumed to be a flexible system that allows different depths of perceptual analysis. Which features reach awareness depends upon the person’s needs at the time. Switching between physical and semantic features as a basis for selection yields both costs and benefits. Stimulus information is first attended to via early selection through sensory analysis; then, as it increases in complexity, semantic analysis is involved, compensating for attention’s limited capacity. Shifting from early- to late-selection modes reduces the selectivity of attention while increasing its breadth. Researchers found that semantic selection requires greater attentional resources than physical selection. Johnston and Heinz (1978) proposed that the multimode theory comprises three stages. Stage 1 is the initial stage, where sensory representations of stimuli are constructed; this corresponds to Broadbent’s filter theory. Stage 2 is the stage where semantic representations (meanings) are constructed; this corresponds to the Deutsch and Deutsch model of attention. In the final stage, both sensory and semantic representations enter consciousness; as a result, this stage is called consciousness. It is also suggested that more processing requires more mental effort: when messages are selected on the basis of Stage 1 processing (early selection), less mental effort is required than when the selection is based on Stage 3 processing (late selection).

Note: Attention is a flexible system that allows selection of one stimulus over another.

Stage 1: sensory representations.
Stage 2: semantic representations.
Stage 3: consciousness.

Yerkes-Dodson Law

The Yerkes-Dodson law suggests that the level of arousal beyond which performance begins to decline is a function of task difficulty. The law was developed by Robert M. Yerkes and John Dillingham Dodson in 1908. In other words, it states that when tasks are simple, a higher level of arousal leads to better performance; when tasks are difficult, a lower level of arousal leads to better performance. Optimal arousal leads to optimal performance, and strong anxiety leads to impaired performance. For example, students who experience test anxiety (a high level of arousal) may seek out ways to reduce that anxiety to improve test performance.
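As an illustration only, the inverted-U relationship can be sketched as performance peaking at an optimal arousal level that shifts lower as task difficulty rises. The Gaussian form and all parameter values below are assumptions for demonstration, not the law's 1908 formulation.

```python
import math

# Illustrative sketch of the Yerkes-Dodson inverted U. The function shape and
# every parameter are simplifying assumptions, not empirically fitted values.

def performance(arousal, difficulty):
    # Assume the optimal arousal level drops as task difficulty increases.
    optimal = 1.0 - 0.5 * difficulty       # arousal and difficulty scaled to [0, 1]
    width = 0.4                            # tolerance around the optimum
    return math.exp(-((arousal - optimal) ** 2) / (2 * width ** 2))

for arousal in (0.2, 0.5, 0.9):
    easy = performance(arousal, difficulty=0.1)   # simple task: high arousal helps
    hard = performance(arousal, difficulty=0.9)   # difficult task: high arousal hurts
    print(f"arousal={arousal:.1f}  easy task={easy:.2f}  hard task={hard:.2f}")
```

Running the loop shows performance on the difficult task peaking at a lower arousal level than on the easy task, which is the pattern the law describes.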

There exist large individual differences with respect to preferred arousal level. So, although
arousal theory provides useful insights into the nature of motivation, the fact that we
cannot readily predict what will constitute an optimal level of arousal does limit its
usefulness to a degree.
MODULE 3: SENSATION AND PERCEPTION

UNIT 1: THEORIES OF PERCEPTION (TOP-DOWN & BOTTOM-UP VIEWS): GESTALT APPROACH, GIBSON (AFFORDANCE THEORY), MARR & NISHIHARA (COMPUTATIONAL APPROACH), GREGORY (INFERENTIAL THEORY), NEISSER (SCHEMA THEORY)

Gibson- Affordance Theory

James Gibson (1979) proposed the affordance theory, which is also known as the theory of direct perception. He propounded this theory in opposition to the constructivist approach to perception and to associationism. According to Gibson’s theory of direct
perception, the information in our sensory receptors, including the sensory context, is all we
need to perceive anything. As the environment supplies us with all the information we need
for perception, this view is sometimes also called ecological perception. In other words, we
do not need higher cognitive processes or anything else to mediate between our sensory
experiences and our perceptions. Existing beliefs or higher-level inferential thought
processes are not necessary for perception.

Gibson rejected the idea that perceivers construct mental representations from memories
of past encounters with similar objects and events. Instead, Gibson believed that the
perceiver does very little work, mainly because the world offers so much information,
leaving little need to construct representations and draw inferences. He proposed that
perception consists of the direct acquisition of information from the environment.
According to this view, called direct perception, the light hitting the retina contains highly
organized information that requires little or no interpretation. In the world we live in,
certain aspects of stimuli remain invariant (or unchanging), despite changes over time or in
our physical relationship to them.

Gibson (1979) believed that we use this contextual information directly. In essence, we are biologically tuned to respond to it. According to Gibson, we use texture gradients as cues for depth and distance. Those cues help us directly perceive the relative proximity or distance of objects and of parts of objects. He called the information offered by the environment to the organism affordances. J. J. Gibson (1979) claimed that the affordances of an object are also directly perceived. J. J. Gibson (1950) became convinced that patterns of motion provide a great deal of information to the perceiver. His work selecting and training pilots in World War II led him to think about the information available to pilots as they landed their planes. He developed the idea of optic flow. Thus, the environment provides four types of information that aid direct perception, viz.:

• optic flow patterns
• texture gradients
• horizon ratios
• direct perception

Gibson’s model sometimes is referred to as an ecological model (Turvey, 2003). This reference is a result of Gibson’s concern with perception as it occurs in the everyday world (the ecological environment) rather than in laboratory situations, where less contextual information is available. Ecological constraints apply not only to initial perceptions but also to the ultimate internal representations (such as concepts) that are formed from those perceptions (Hubbard, 1995; Shepard, 1984). Continuing to wave the Gibsonian banner was Eleanor Gibson (1991, 1992), James’s wife. She conducted landmark research in infant perception.

Direct perception may also play a role in interpersonal situations when we try to make sense
of others’ emotions and intentions (Gallagher, 2008). Neuroscience also indicates that direct
perception may be involved in person perception. Fodor and Pylyshyn (1981) argued that
the theory is not helpful in explaining perception. They charged that Gibson failed to specify
just what kinds of things are invariant and what kinds are not.

Criticisms

• Individual differences in sensory capacity: every individual has a different sensory capacity, which leads to differences in perception.
• When the environment changes, perception also changes; not all perceptions are new, and it may become difficult for the perceiver to take appropriate action.
• Illusions point to differences in cognitive analysis.

Marr & Nishihara- Computational Approach


The computational approach to perception was put forward by David Marr and H. K. Nishihara (1978). This theory incorporates both top-down and bottom-up processing in its account of perception. The model is very technical and mathematical in nature. Marr proposed that perception proceeds in terms of several different, special-purpose computational mechanisms, such as a module to analyze colour, another to analyze motion, and so on. Each operates autonomously, without regard to the input from or output to any other module, and without regard to real-world knowledge.

He held that the visual system works on three levels:

1. Computational Level
This is the top level of the visual system, in which the performance of the device is characterized as a mapping from one kind of information to another, the abstract properties of this mapping are defined precisely, and its appropriateness and adequacy for the task at hand are demonstrated. In other words, it is a form of task analysis of a cognitive system. At this level the perceiver identifies specific information and general constraints upon any solution to the problem.
2. Algorithm Level
It is the center of the visual system where the choice of representation for the input
and output and the algorithm to be used to transform one into the other. In other
words, this level deals the method of information-processing task. The perceiver
identifies the input and output information to transform the input into output
information. It involves the process of encoding information.
[Diagram summary: the computational level detects edges and boundaries and separates
figure from background; the algorithmic level works out edges by analysing the data and
separates the image into contours and areas of similarity for object recognition; the
implementation level is concerned with whether we have the relevant hardware, neurons,
and connections necessary for the processing.]

3. Implementation Level
At the other extreme are the details of the way the algorithm and representation are
realized physically. In other words, the implementation level finds the physical
realization for the algorithm level. It helps identify the neural structures that realize
the basic representational states to which the algorithm applies and that transform
those representational states according to the algorithm.

Marr believed that visual perception proceeds by constructing four different mental
representations, or sketches.

a) Gray scale sketch
It represents the intensity value at each point in the image. At this stage colour
information is not processed and images are in grey scale.
b) Primal sketch
It depicts areas of relative brightness and darkness in a two-dimensional image as
well as localized geometric structure. This allows the viewer to detect boundaries
between areas but not to “know” what the visual information “means”. The primal
sketch consists of a set of blobs oriented in various directions.

[Diagram summary: the raw primal sketch detects edges and texture differences and is
made of tiny dots and pixels; the full primal sketch connects edges to form shapes, with
textures of similarity filling in the form (based on Gestalt principles).]

c) 2.5 D Sketch
Once a primal sketch is created, the viewer uses it to create a more complex
representation, called a 2½-D (two-and-a-half-dimensional) sketch. Using cues such
as shading, texture, edges, and others, the viewer derives information about what
the surfaces are and how they are positioned in depth relative to the viewer’s own
vantage point at that moment. The 2½-D sketch is in a viewer-centered coordinate
frame.
d) 3D Sketch
Marr believed that both the primal sketch and the 2½-D sketch rely almost
exclusively on bottom-up processes. Information from real-world knowledge or
specific expectations (that is, top-down knowledge) is incorporated when the viewer
constructs the final, 3-D sketch of the visual scene. This sketch involves both
recognition of what the objects are and understanding of the “meaning” of the visual
scene. This representation describes shapes and their spatial organization in an
object-centered coordinate frame.
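
To make the bottom-up stages concrete, here is a minimal Python sketch (a toy illustration under simplifying assumptions, not Marr’s actual algorithm, which used zero-crossings of filtered images) of how a grey-scale image might be reduced to a crude primal-sketch-style edge map by thresholding local intensity gradients; the function name primal_sketch and the threshold value are invented for the example:

# A toy illustration of the bottom-up step from grey-scale image to a
# crude "primal sketch": edges are marked wherever the local intensity
# gradient is large. A simplification, not Marr's actual algorithm.
import numpy as np

def primal_sketch(image, threshold=0.2):
    """image: 2-D numpy array of intensities in [0, 1] (grey-scale sketch).
    Returns a boolean map that is True where an edge is detected."""
    # Horizontal and vertical intensity differences (finite-difference gradient).
    gx = np.zeros_like(image)
    gy = np.zeros_like(image)
    gx[:, :-1] = image[:, 1:] - image[:, :-1]
    gy[:-1, :] = image[1:, :] - image[:-1, :]
    magnitude = np.sqrt(gx**2 + gy**2)
    return magnitude > threshold  # large gradient => boundary between regions

# Example: a dark square on a light background yields edges at its border.
img = np.full((8, 8), 0.9)
img[2:6, 2:6] = 0.1
print(primal_sketch(img).astype(int))

Running this marks the border between the dark square and the light background, the kind of region-separating information the primal sketch captures before any “meaning” is assigned.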

Merits

• It provides a basic account of visual processing across the life span.
• It helps us understand perception and information processing at a computational level.

Limitations

• The theory offers only a high-level, abstract explanation of perception.
• Unfamiliar objects cannot be easily processed according to the processes of this
theory.

Primitives [Refer]

Gregory - Inferential Theory

[Diagram: sensory data and knowledge stored in the brain give rise to a perceptual
hypothesis, which yields inferences about the sensory information or data]

The inferential theory was developed by Richard Gregory and Irwin Rock (1970). They
regarded perception as a constructive process based on top-down processing. According
to this theory, people recognize objects by generating a perceptual hypothesis, like a
researcher who selects the hypothesis that is substantiated by the evidence. According to
Gregory, sensory information is incomplete, and in order to complete it we make use of
stored knowledge. Inferential theory provides a contradictory explanation to Gibson’s theory
of affordance.

In this way we actively construct our perception of reality based on our environment
and stored information. Stimulus information from our environment is frequently
ambiguous, so to interpret it we require higher cognitive information, either from past
experiences or stored knowledge, in order to make inferences about what we perceive.
Helmholtz called this the ‘likelihood principle’. A lot of information reaches the eye, but much
is lost by the time it reaches the brain (Gregory estimates about 90% is lost).

Limitations

• Making inferences based on stored information can lead to mistakes, as it can be
biased by familiarity.
• Fails to explain the process behind the generation of perceptual hypotheses, the
selection of the right hypothesis, the ruling out of wrong ones, etc.
• Illusions may prove that sensory data can be wrongly perceived.

Neisser-Schema Theory

The schema theory of Ulric Neisser (1976) is also known as the interactive theory of
perception. Neisser’s (1976) perceptual cycle model (PCM) structures the interaction
between a person’s internal schemata (or mental templates) and the environment in which
they work. Neisser’s model treats perception as a cyclical process in which top-down
processing and bottom-up processing drive each other in turn. To be purely data-driven we’d
need to be mindless automatons; to be purely theory-driven we’d need to be disembodied
dreamers. An active schema sets up relevant expectations for a particular context, and if the
sensory data break these expectations this may modify the schema or trigger a more
relevant one. He agrees with Gibson that sensory information (in context) is sufficient for
perception.
Neisser held that perception proceeds as a cycle through which we come to understand the
world. Three processes are involved in the perceptual cycle:

1) The schema is modified
2) Attention is modified
3) The available information is modified

In this perceptual cycle, expectations direct our perceptual exploration. Perceptual
exploration is facilitated by our various sense organs and locomotive actions. The perceiver
scans the perceptual field and takes in the necessary information, leaving out the rest.
Perceptual exploration provides us with information that modifies our schema.

There are two ways of modifying our schema to facilitate perceptual exploration. They are:

a) Corrective aspect
It refers to the fact that information from the world can indicate that the wrong
anticipatory schema has been called up; the schema is then modified or replaced.
b) Elaborative aspect
It refers to the fact that the coherence and depth of a schema build up with use.

This model is grounded in ecological validity.

UNIT 2: THEORIES OF PATTERN RECOGNITION: BIEDERMAN - GEON THEORY, NEISSER - VIEW
BASED APPROACH, SELFRIDGE - PANDEMONIUM MODEL, ELEANOR GIBSON & LEWIN -
DISTINCTIVE FEATURES

Biederman-Geon Theory

Irving Biederman (1987) proposed a theory of object perception that uses a type of featural
analysis that is also consistent with some of the Gestalt principles of perceptual
organization. This theory is also known as the recognition-by-components (RBC) theory.
The recognition-by-components theory explains our ability to perceive 3-D objects with the
help of simple geometric shapes. It proposes that all complex forms are composed of geons.
For example, a cup is composed of two geons: a cylinder (for the container portion) and an
ellipse (for the handle). (See Figure 8 for examples of geons and objects.) Geon theory, as
espoused by Biederman, proposes that the recognition of an object, such as a telephone, a
suitcase, or even more complex forms, consists of recognition by components (RBC), in which
complex forms are broken down into simple forms.

Biederman proposed that when people view objects, they segment them into simple
geometric components, called geons. Biederman proposed a total of 36 such primitive
components (visual primitives). He believed we can construct mental representations of a
very large set of common objects from them. We divide the whole into the parts, or geons
(named for “geometrical ions”; Biederman, 1987, p. 118). We pay attention not just to what
geons are present but also to the arrangement of geons. The geons also can be recomposed
into alternative arrangements. You know that a small set of letters can be manipulated to
compose countless words and sentences. The geons are simple and are viewpoint-invariant
(i.e., they can be distinguished from various viewpoints). One test of geon theory developed
by Biederman is the use of degraded forms.

This recognition of components is carried out in two stages, as elaborated below:

i. Edge extraction
In this stage we try to extract core information about the geons from the retinal image.
ii. Encoding non-accidental features
Here, the detected geons are matched with memory to form a meaningful
perception.

[Flow: stimulus on the retina → edge information of minute features is extracted →
component features are detected → features are matched with memory]

Limitations

• Does not adequately explain how we recognize particular features.
• Fails to explain the effects of prior expectations and environmental context on some
phenomena of pattern perception.
• Not all perception researchers accept the notion of geons as fundamental units of
object perception (Tarr and Bulthoff, 1995).

Neisser-View Based Approach

The view-based approach to object or pattern recognition of Ulric Neisser (1967) holds that
objects are recognized holistically through comparison with a stored analogy. These are
viewer-dependent approaches to perception. One major view-based approach is the
template matching theory.

Template theories suggest that we have stored in our minds myriad sets of templates.
Templates are highly detailed models for patterns we potentially might recognize. We
recognize a pattern by comparing it with our set of templates. We then choose the exact
template that perfectly matches what we observe (Selfridge & Neisser, 1960). The theory
holds that a great number of templates have been created by our life experience, each
template being associated with a meaning. In other words, the process of perception
involves comparing incoming information to the templates we have stored and looking for a
match. If a number of templates match or come close, we need to engage in further
processing to sort out which template is most appropriate. Notice that this model implies
that somewhere in our knowledge base we’ve stored millions of different templates—one for
every distinct object or pattern we can recognize. So, in order to achieve a match, either the
template or the incoming stimulus must be modified to suit the brain.
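
As a rough illustration of the comparison process, here is a minimal Python sketch; the 3x3 binary patterns, the stored templates, and the function names are all invented for the example, and real template models would need far richer representations:

# A toy sketch of template matching: the input pattern is compared with every
# stored template, and the template sharing the most cells wins. Patterns and
# templates here are small hypothetical 3x3 binary grids, purely illustrative.
def match_score(pattern, template):
    # Count positions where pattern and template agree.
    return sum(p == t for row_p, row_t in zip(pattern, template)
               for p, t in zip(row_p, row_t))

def recognize(pattern, templates):
    # Return the label of the best-matching stored template.
    return max(templates, key=lambda label: match_score(pattern, templates[label]))

templates = {
    "L": [(1, 0, 0), (1, 0, 0), (1, 1, 1)],
    "T": [(1, 1, 1), (0, 1, 0), (0, 1, 0)],
}
noisy_L = [(1, 0, 0), (1, 0, 1), (1, 1, 1)]  # an L with one noisy cell
print(recognize(noisy_L, templates))  # -> "L"

Even with one noisy cell the "L" template still wins, which illustrates why a match is usually taken as "closest" rather than "perfect".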

Merits

It seems apparent that to recognize a shape, a letter, or some visual form, some contact
with a comparable internal form is necessary. Objects in external reality need to be
recognized as matching a representation stored in long-term memory.

Limitations

• Storing, organizing, and retrieving so many templates in memory would be unwieldy.
• Does not explain the process of new template formation.
• People recognize many patterns as more or less the same thing, even when the
stimulus patterns differ greatly.
• Fails to explain some aspects of the perception of letters.

Selfridge--Pandemonium Model

The pandemonium model of Oliver Selfridge (1959) is a type of feature matching theory. It
attempts to match features of a pattern to features stored in memory, rather than to match
a whole pattern to a template or a prototype. The word “pandemonium” refers to a very
noisy, chaotic place, and to hell. In it, metaphorical “demons” with specific duties receive and
analyze the features of a stimulus (Selfridge, 1959). In Oliver Selfridge’s pandemonium
model, there are four kinds of demons:

[Diagram: the four kinds of demons are image demons, feature demons, cognitive demons,
and decision demons]
The “image demons” receive a retinal image and pass it on to “feature demons.” Each
feature demon (a.k.a. sub-demon) calls out when there are matches between the stimulus
and the given feature. These matches are yelled out at demons at the next level of the
hierarchy, the “cognitive (thinking) demons.” The cognitive demons in turn shout out
possible patterns stored in memory that conform to one or more of the features noticed by
the feature demons. A “decision demon” listens to the pandemonium of the cognitive
demons. It decides on what has been seen, based on which cognitive demon is shouting the
most frequently (i.e., which has the most matching features).

This theory provides a hierarchical model of object recognition by incorporating the process
of feature detection and prototype matching.

(Refer diagram in text)
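
A toy Python sketch of this division of labour is given below; the feature lists are invented simplifications, and only the cognitive- and decision-demon stages are modelled (the image and feature demons are assumed to have already extracted the stimulus features):

# A toy sketch of Selfridge's pandemonium idea: feature "demons" report which
# features are present, cognitive demons "shout" in proportion to how many of
# their features were matched, and a decision demon picks the loudest.
# The feature lists are hypothetical simplifications for illustration.
LETTER_FEATURES = {          # what each cognitive demon is listening for
    "P": {"vertical line", "upper curve"},
    "R": {"vertical line", "upper curve", "diagonal line"},
    "L": {"vertical line", "horizontal line"},
}

def decision_demon(stimulus_features):
    # Each cognitive demon's "shout" = number of its features found in the input.
    shouts = {letter: len(features & stimulus_features)
              for letter, features in LETTER_FEATURES.items()}
    return max(shouts, key=shouts.get)

# The image/feature demons would extract these from the retinal image:
print(decision_demon({"vertical line", "upper curve", "diagonal line"}))  # -> "R"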

Merit

• Flexible theory

Limitation

• Fails to explain the role of environment and experience in feature matching.

Other feature matching theories

Although Selfridge’s model is one of the most widely known, other feature models have
been proposed. Most also distinguish not only different features but also different kinds of
features, such as global versus local features. Local features constitute the small-scale or
detailed aspects of a given pattern. There is no consensus as to what exactly constitutes a
local feature. Nevertheless, we generally can distinguish such features from global features,
the features that give a form its overall shape.

Globally, the stimuli in panels (a) and (b) form the letter H. In panel (a), the local features
(small Hs) correspond to the global ones. In panel (b), comprising many local letter Ss, they
do not. In one study, participants were asked to identify the stimuli at either the global or
the local level (Navon, 1977). When the local letters were small and positioned close
together, participants could identify stimuli at the global level (the “big” letter) more quickly
than at the local level. When participants were required to identify stimuli at the global
level, whether the local features (small letters) matched the global one (big letter) did not
matter. They responded equally rapidly whether the global H was made up of local Hs or of
local Ss. However, when the participants were asked to identify the “small” local letters,
they responded more quickly if the global features agreed with the local ones. In other
words, they were slowed down if they had to identify local (small) Ss combining to form a
global (big) H instead of identifying local (small) Hs combining to form a global (big) H. This
pattern of results is called the global precedence effect (see also Kimchi, 1992). Experiments
have shown that global information dominates over local information even in infants
(Cassia, Simion, Milani, & Umiltà, 2002).

In contrast, when letters are more widely spaced, as in the corresponding figure panels, the
effect is reversed. Then a local precedence effect appears. That is, the participants more
quickly identify the local features of the individual letters than the global ones, and the local
features interfere with global recognition in cases of contradictory stimuli (Martin,
1979). So, when the letters are close together at the local level, people have problems
identifying the local stimuli (small letters) if they are not concordant with the global stimulus
(big letter). When the letters at the local level are relatively far apart from each other, it is
harder for people to identify the global stimulus (big letter) if it is not concordant with the
local stimuli (small letters). Other limitations (e.g., the size of the stimuli) besides the spatial
proximity of the local stimuli hold as well, and other kinds of features also influence
perception.

Eleanor Gibson & Lewin - Distinctive Features

Several feature-analysis theories propose a more flexible approach, in which a visual
stimulus is composed of a small number of characteristics or components (Gordon, 2004).
Each characteristic is called a distinctive feature. These theories argue that we store a list of
distinctive features for each letter. For example, the distinctive features for the letter R
include a curved component, a vertical line, and a diagonal line. When you look at a new
letter, your visual system notes the presence or absence of the various features. It then
compares this list with the features stored in memory for each letter of the alphabet.
Eleanor Gibson (1969) proposed that the distinctive features for each letter of the alphabet
remain constant, whether the letter is handwritten, printed, or typed. These models can also
explain how we perceive a wide variety of two-dimensional patterns.

The distinctive features approach of Eleanor Gibson and Lewin (1975) is an object-centered
view of pattern recognition in which the environment has no influence. Here, neural cells in
the cerebral cortex, tuned by past experience, activate to recognize the distinctive features
of stimuli. Distinctive features are defining characteristics of letters, like the slanting line of R
that distinguishes it from P. These distinctive features remain the same regardless of
font or orientation. Neurological evidence also supports the identification of distinctive
features. This model is also known as the PB model.

Limitations

• This theory may explain how letters are recognised, but it cannot explain how we
recognize complex real-world objects like a horse.
• Studies show that sometimes we notice a distinctive feature of a whole figure but
may miss the same feature if it is presented in isolation. In the real world we see an
object only as a whole.
• It could not explain the complexity of pattern recognition.
• It does not specify which brain-cell detector responds to which stimulus.

Tarr & Bulthoff Theory

Tarr and Bulthoff (1995) hold that object recognition can be conceived as a continuum.
There are two approaches to pattern recognition, according to them.

• A part-based mechanism is used when we need to distinguish between two different
basic-level categories (e.g., hammer and sparrow). Orientation and perspective do not
matter.
• A view-based approach can be used for making subtle discriminations within a
subordinate-level category (e.g., finch and sparrow). Orientation and perspective
matter.

Prototype matching model

[Refer Galotti]

UNIT 3: THEORIES OF PAIN PERCEPTION: SPECIFICITY, PATTERN AND GATE CONTROL
THEORIES. PAIN THRESHOLD AND PAIN MANAGEMENT.

Pain is defined as an unpleasant and emotional experience associated with or without actual
tissue damage. Pain perception is influenced by a variety of environmental factors including
the context of the stimuli, social responses and contingencies, and cultural or ethnic
background.

[Diagram: pain is classified into acute pain and chronic pain]
Acute pain is a sharp pain of short duration with easily identified cause. Often it is localized
in a small area before spreading to neighboring areas. Usually it is treated by medications.
Chronic pain is the intermittent or constant pain with different intensities. It lasts for longer
periods. It is somewhat difficult to treat chronic pain and it needs professional expert care.
A number of theories have been postulated to describe mechanisms underlying pain
perception.

Specificity Theory of Pain

The Specificity Theory refers to the presence of dedicated pathways for each somatosensory
modality. The fundamental tenet of the Specificity Theory is that each modality has a
specific receptor and associated sensory fiber (primary afferent) that is sensitive to one
specific stimulus (Dubner et al. 1978). For instance, the model proposes that non-noxious
mechanical stimuli are encoded by low-threshold mechanoreceptors, which are associated
with dedicated primary afferents that project to “mechanoreceptive” second-order neurons
in the spinal cord or brainstem (depending on the source of the input). These second-order
neurons project to “higher” mechanoreceptive areas in the brain. Similarly, noxious stimuli
would activate a nociceptor, which would project to higher “pain” centers through a pain
fiber. These ideas have been emerging over several millennia but were experimentally
tested and formally postulated as a theory in the 19th century by physiologists in Western
Europe.

Pattern Theory of Pain

In an attempt to overhaul theories of somaesthesis (including pain), J. P. Nafe postulated a
“quantitative theory of feeling” (1929). This theory ignored findings of specialized nerve
endings and many of the observations supporting the specificity and/or intensive theories of
pain. The theory stated that any somaesthetic sensation occurred by a specific and
particular pattern of neural firing and that the spatial and temporal profile of firing of the
peripheral nerves encoded the stimulus type and intensity. Lele et al. (1954) championed
this theory and added that cutaneous sensory nerve fibers, with the exception of those
innervating hair cells, are the same. To support this claim, they cited work that had shown
that distorting a nerve fiber would cause action potentials to discharge in any nerve fiber,
whether encapsulated or not. Furthermore, intense stimulation of any of these nerve fibers
would cause the percept of pain (Sinclair 1955; Weddell 1955).

Gate Control Theory of Pain

Psychologist Ronald Melzack and the anatomist Patrick Wall proposed the gate control
theory of pain in 1965 to explain pain suppression. According to them, the pain stimuli
transmitted by afferent pain fibers are blocked by gate mechanism located at the posterior
gray horn of spinal cord. If the gate is opened, pain is felt. If the gate is closed, pain is
suppressed.

Mechanism of Gate Control at Spinal Level

1. When a pain stimulus is applied to any part of the body, besides pain receptors, the
receptors of other sensations such as touch are also stimulated.


2. When all these impulses reach the spinal cord through posterior nerve root, the fibers of
touch sensation (posterior column fibers) send collaterals to the neurons of pain pathway,
i.e. cells of marginal nucleus and substantia gelatinosa.

3. Impulses of touch sensation passing through these collaterals inhibit the release of
glutamate and substance P from the pain fibers.

4. This closes the gate and the pain transmission is blocked.

Role of Brain in Gate Control Mechanism

According to Melzack and Wall, brain also plays some important role in the gate control
system of the spinal cord as follows:

1. If the gates in spinal cord are not closed, pain signals reach thalamus through lateral
spinothalamic tract.

2. These signals are processed in thalamus and sent to sensory cortex.

3. Perception of pain occurs at the cortical level, in the context of the person’s emotional
status and previous experiences.

4. The person responds to the pain based on the integration of all these information in the
brain. Thus, the brain determines the severity and extent of pain.

5. To minimize the severity and extent of pain, brain sends message back to spinal cord to
close the gate by releasing pain relievers such as opiate peptides.

6. Now the pain stimulus is blocked and the person feels less pain.

Significance of Gate Control

Thus, gating of pain at spinal level is similar to presynaptic inhibition. It forms the basis for
relief of pain through rubbing, massage techniques, application of ice packs, acupuncture
and electrical analgesia. All these techniques relieve pain by stimulating the release of
endogenous pain relievers (opioid peptides), which close the gate and block the pain signals.
Limitations of Pain Theories

• Did not account for neurons in the central nervous system (CNS) that respond to
both non-nociceptive and nociceptive stimuli (e.g., wide-dynamic range neurons).
• Focus on cutaneous pain and do not address issues pertaining to deep tissue,
visceral, or muscular pains.
• These models are focused on acute pain and do not address mechanisms of
persistent pain or the chronification of pain.
• The models involve oversimplifications and flaws in their presentation.

Pain Threshold and Pain Management


Pain threshold is the minimum intensity at which a person begins to perceive, or sense, a
stimulus as being painful.

Pain tolerance is the maximum amount, or level, of pain a person can tolerate or bear.

The threshold for pain can differ between men and women, and can fluctuate based on
many other factors.

Managing pain without medicines

Many non-medicine treatments are available to help you manage your pain. A combination
of treatments and therapies is often more effective than just one.

Some non-medicine options include:

• heat or cold – use ice packs immediately after an injury to reduce swelling. Heat
packs are better for relieving chronic muscle or joint injuries

• physical therapies – such as walking, stretching, strengthening or aerobic exercises


may help reduce pain, keep you mobile and improve your mood. You may need to
increase your exercise very slowly to avoid over-doing it

• massage – this is better suited to soft tissue injuries and should be avoided if the
pain is in the joints. There is some evidence that suggests massage may help manage
pain, but it is not recommended as a long-term therapy

• relaxation and stress management techniques – including meditation and yoga

• cognitive behaviour therapy (CBT) – this form of therapy can help you learn to
change how you think and, in turn, how you feel and behave about pain. This is a
valuable strategy for learning to self-manage chronic pain

• acupuncture – a component of traditional Chinese medicine. Acupuncture involves


inserting thin needles into specific points on the skin. It aims to restore balance
within the body and encourage it to heal by releasing natural pain-relieving
compounds (endorphins). Some people find that acupuncture reduces the severity of
their pain and enables them to maintain function. Scientific evidence for the
effectiveness of acupuncture in managing pain is inconclusive

• transcutaneous electrical nerve stimulation (TENS) therapy – minute electrical


currents pass through the skin via electrodes, prompting a pain-relieving response
from the body. There is not enough published evidence to support the use of TENS
for the treatment of some chronic pain conditions. However, some people with
chronic pain that is unresponsive to other treatments may experience a benefit.

Pain medicines

Many people will use a pain medicine (analgesic) at some time in their lives.

The main types of pain medicines are:

• paracetamol – often recommended as the first medicine to relieve short-term pain

• aspirin – for short-term relief of fever and mild-to-moderate pain (such as period
pain or headache)

• non-steroidal anti-inflammatory drugs (NSAIDs), such as ibuprofen – these medicines


relieve pain and reduce inflammation (redness and swelling)

• opioid medications, such as codeine, morphine and oxycodone – these medicines are
reserved for severe or cancer pain

• local anaesthetics

• some antidepressants

• some anti-epileptic medicines.

How pain medicines work

Pain medicines work in various ways. Aspirin and other NSAIDs are pain medicines that help
to reduce inflammation and fever. They do this by stopping chemicals called prostaglandins.
Prostaglandins cause inflammation, swelling and make nerve endings sensitive, which can
lead to pain.

Prostaglandins also help protect the stomach from stomach acid, which is why these
medicines can cause irritation and bleeding in some people.
Opioid medicines work in a different way. They change pain messages in the brain, which is
why these medicines can be addictive.

Choosing the right pain medicine

The right choice of medicine for you will depend on:

• the location, intensity, duration and type of pain

• any activities that ease the pain or make it worse

• the impact your pain has on your lifestyle, such as how it affects your appetite or
quality of sleep

• your other medical conditions

• other medicines you take.

Managing your medicines effectively

Always follow instructions for taking your medications safely and effectively. By doing so:
• your pain is more likely to be well managed
• you are less likely to need larger doses of medication
• you can reduce your risk of side effects.
Medications for chronic pain are best taken regularly. Talk to your doctor or pharmacist if
your medicines are not working or are causing problems, such as side effects. These are
more likely to occur if you are taking pain medicines for a long time.

It is important to use a variety of strategies to help reduce pain. Do not rely on medicines
alone. People can lower the levels of pain they feel by:

• staying active
• pacing their daily activity so as to avoid pain flares (this involves finding the balance
between under- and overdoing it)

• avoiding pain triggers

• using coping strategies.

Side effects of pain medicines

Some of the side effects of common pain medicines include:

• paracetamol – side effects are rare when taken at the recommended dose and for a
short time. Paracetamol can cause skin rash and liver damage if used in large doses
for a long time

• aspirin – the most common side effects are nausea, vomiting, indigestion and
stomach ulcer. Some people may experience more serious side effects such as an
asthma attack, tinnitus (ringing in the ears), kidney damage and bleeding

• non-steroidal anti-inflammatory drugs (NSAIDs) – can cause headache, nausea,


stomach upset, heartburn, skin rash, tiredness, dizziness, ringing in the ears and
raised blood pressure. They can also make heart failure or kidney failure worse, and
increase the risk of heart attack, angina, stroke and bleeding. NSAIDs should always
be used cautiously and for the shortest time possible.

• opioid pain medicines such as morphine, oxycodone and codeine – commonly cause
drowsiness, confusion, falls, nausea, vomiting and constipation. They can also reduce
physical coordination and balance. Importantly, these medicines can lead to
dependence and slow down breathing, resulting in accidental fatal overdose.

Precautions when taking pain medicines

Treat over-the-counter pain medicines with caution, just like any other medication. It’s
always good to discuss any medication with your doctor or pharmacist.

General suggestions include:


• Don’t self-medicate with pain medicines during pregnancy – some can reach the
fetus through the placenta and potentially cause harm.

• Take care if you are elderly or caring for an older person. Older people have an
increased risk of side effects. For example, taking aspirin regularly for chronic pain
(such as arthritis) can cause a dangerous bleeding stomach ulcer.

• When buying over-the-counter pain medicines, speak with a pharmacist about any
prescription and complementary medicines you are taking so they can help you
choose a pain medicine that is safe for you.

• Don’t take more than one over-the-counter medicine at a time without consulting
your doctor or pharmacist. It is easier than you think to unintentionally take an
overdose. For example, many ‘cold and flu’ medicines contain paracetamol, so it is
important not to take any other paracetamol-containing medicine at the same time.

• See your doctor or healthcare professional for proper treatment for sport injuries.
Don’t use pain medicines to ‘tough it out’.

• Consult your doctor or pharmacist before using any over-the-counter medicine if you
have a chronic (ongoing) physical condition, such as heart disease or diabetes.

Managing pain that cannot be easily relieved

Sometimes pain will persist and cannot be easily relieved. It’s natural to feel worried, sad or
fearful when you are in pain. Here are some suggestions for how to handle persistent pain:

• Focus on improving your day-to-day function, rather than completely stopping the
pain.

• Accept that your pain may not go away and that flare-ups may occur. Talk yourself
through these times.

• Find out as much as you can about your condition so that you don't fret or worry
unnecessarily about the pain.
• Enlist the support of family and friends. Let them know what support you need; find
ways to stay in touch.

• Take steps to prevent or ease depression by any means that work for you, including
talking to friends or professionals.

• Don’t increase your pain medicines without talking to your doctor or pharmacist
first. Increasing your dose may not help your pain and might cause you harm.

• Improve your physical fitness, eat healthy foods and make sure you get all the rest
you need.

• Try not to allow the pain to stop you living your life the way you want to. Try gently
reintroducing activities that you used to enjoy. You may need to cut back on some
activities if pain flare-ups occur, but increase slowly again as you did before.

• Concentrate on finding fun and rewarding activities that don't make your pain
worse.

• Seek advice on new coping strategies and skills from a healthcare professional such
as an occupational therapist or psychologist.

UNIT 4: THEORIES OF CONSTANCIES AND ILLUSIONS; (IN DEPTH).

UNIT 5: CLASSICAL AND MODERN PSYCHOPHYSICS: CLASSICAL PSYCHOPHYSICAL METHODS
(IN DETAIL), BRIEF DISCUSSION OF FECHNER’S CONTRIBUTIONS, WEBER’S LAW, STEVENS’
POWER LAW, SIGNAL DETECTION THEORY AND ROC CURVE.

Psychophysics is the study of the quantitative relationship between the physical stimulus and
the resulting psychological sensation. Psychophysical scaling methods investigate the
quantitative relationship between the subjective measurement of these stimuli and their
objective measurement on physical scales. In other words, psychophysical scaling
methods intend to discover a definite quantitative relation between the physical
stimulus and the resulting sensation through manipulation of the physical stimulus
dimensions.

CLASSICAL PSYCHOPHYSICAL METHODS

Method of Limits

The method of limits is a popular method of determining thresholds. The method was so
named by Kraepelin (1891) because a series of stimuli ends when the subject has reached
that limit where he/she changes their judgement. For computing a threshold by this method,
two modes of presenting stimuli are usually adopted – the increasing mode and the
decreasing mode. The increasing mode is called the ascending series and the decreasing
mode is called the descending series. For computing the difference threshold (DL), the
comparison stimulus (Co) is varied in small steps in the ascending and descending series,
and the subject is required to say at each step whether the Co is smaller (-), equal to (=), or
larger (+) than the standard stimulus (St). For computing the absolute threshold (RL), no St is
needed and the subject simply reports whether or not he has detected the stimulus
presented in the ascending and descending series. In computing both the DL and the RL, the
stimulus sequence is varied with a minimum change in its magnitude in each presentation.
Hence, Guilford (1954) prefers to call this method the method of minimal changes.

The thresholds thus found in each series are also called transition points: the point above
which the subject changes his response in the ascending series and below which he changes
his response in the descending series. Several alternate ascending and descending series
are taken until the experimenter is satisfied with the relative uniformity of the different
individual thresholds.
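
A minimal Python sketch of this computation, using invented response data, takes each series’ transition point as the midpoint between the two stimulus values where the judgement changes and averages the transition points to estimate the RL:

# A minimal sketch of estimating an absolute threshold (RL) with the method of
# limits. Each series is a list of (stimulus_intensity, response) pairs; the
# transition point of a series is taken as the midpoint between the last "no"
# and the first "yes" (ascending) or vice versa (descending). Data are invented.
def transition_point(series):
    # series: (intensity, detected) pairs in the order presented.
    for (i1, r1), (i2, r2) in zip(series, series[1:]):
        if r1 != r2:                      # response changed between these steps
            return (i1 + i2) / 2.0        # midpoint = transition point
    return None

ascending  = [(1, False), (2, False), (3, False), (4, True)]
descending = [(6, True), (5, True), (4, True), (3, False)]
points = [transition_point(s) for s in (ascending, descending)]
rl = sum(points) / len(points)            # RL = mean of transition points
print(rl)                                  # (3.5 + 3.5) / 2 = 3.5

Averaging over alternating ascending and descending series is also what cancels out the two constant errors described next.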

There are chances of variability in the subject’s performance due to variable errors
such as changes in motivation, interest, attention, etc. Besides these variable errors, the RL
may also be affected by two constant errors –

1. Error of habituation
Sometimes called the error of perseverance.
It may be defined as a tendency of the subject to go on saying “Yes” in a descending
series or “No” in an ascending series.
Consequence: inflates the mean of the ascending series over the mean of the
descending series.
2. Error of anticipation
Sometimes called the error of expectation.
It is the opposite of the error of habituation and accordingly may be defined as the
tendency to expect a change from “Yes” to “No” in the descending series and “No”
to “Yes” in the ascending series before the change in stimulus is apparent.
Consequence: inflates the mean of the descending series over the mean of the
ascending series.

The primary purpose of giving ascending and descending series is to cancel out these two
types of constant errors.

Method of Constant Stimuli

Also known as the method of frequency and method of right and wrong cases.

In this method a number of fixed or constant stimuli are presented to the subject several
times in a random order. The method of constant stimuli can also be employed for
determining the RL or DL. For determining RL the different values of the stimulus are
presented to the subject in a random order and he has to report each time whether he
perceives or does not perceive the stimulus. Though the different values of stimulus are
presented irregularly, the same values are presented throughout the experiment a large
number of times, usually from 50-200 times each, in a predetermined order unknown to the
subjects. The mean of the reported values of the stimulus becomes the index of RL. The
procedure involved is known as the method of constant stimuli. For calculating DL, in each
presentation the two stimuli (St and Co) are presented to the subject simultaneously or in
succession (Guilford, 1954). On each trial the subject is required to say whether one
stimulus is “greater” or “less” than the other. The procedure involved is known as the
method of constant stimulus differences and not the method of constant stimuli.
This method is also called the method of right or wrong cases because in each case the
subject has to report whether he perceives the stimulus (right) or he does not perceive the
stimulus (wrong).
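
As an illustration, the following Python sketch (with invented proportions) estimates the RL from constant-stimuli data as the stimulus value detected on 50% of trials, one common treatment, found by linear interpolation between adjacent stimulus values:

# A minimal sketch of estimating RL from method-of-constant-stimuli data: each
# stimulus value is presented many times in random order, the proportion of
# "perceived" responses is computed per value, and the RL is taken as the
# intensity detected on 50% of trials (linear interpolation). Data are invented.
def rl_from_proportions(props):
    # props: {intensity: proportion of "yes" responses}, assumed monotonic.
    pts = sorted(props.items())
    for (x1, p1), (x2, p2) in zip(pts, pts[1:]):
        if p1 <= 0.5 <= p2:               # 50% point lies between x1 and x2
            return x1 + (0.5 - p1) * (x2 - x1) / (p2 - p1)
    return None

data = {1: 0.05, 2: 0.20, 3: 0.45, 4: 0.70, 5: 0.95}
print(rl_from_proportions(data))          # 3 + 0.05/0.25 = 3.2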

Advantages

• Errors of habituation and expectation are decreased.
• Presence of two-category judgement.
• No neutral category.
• Can be graphically represented.

Method of Adjustment

Also known as the method of average error, the method of reproduction, or the method of
equivalent stimuli. It is the oldest method of psychophysics.

In this method the subject is provided with a St and a Co. The Co is either greater or lesser in
intensity than the St. The perceiver is required to adjust the Co until it appears to him to be
equivalent to the St.

The difference between the St and the Co defines the error in each judgement. A large
number of such judgements are obtained and the arithmetic mean of those judgements is
calculated. Hence the name ‘method of average error or mean error’. The obtained mean is
the value of the PSE (point of subjective equality). The difference between the St and the
PSE indicates the presence of a constant error (CE). If the PSE (average adjustment) is larger
than the St, the CE is positive and indicates overestimation of the standard stimulus. On the
other hand, if the PSE is smaller than the St, the CE is negative and indicates
underestimation of the standard stimulus.
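
The arithmetic is straightforward, as this small Python sketch with invented adjustment data (e.g., repeated settings of the comparison line in a Muller-Lyer task) shows:

# A minimal sketch of the method of adjustment: the PSE is the mean of the
# subject's adjusted comparison values, and the constant error (CE) is
# PSE minus the standard stimulus. Values are invented for illustration
# (e.g., adjusted lengths in mm).
standard = 100.0
adjustments = [108.0, 112.0, 105.0, 110.0, 115.0]   # repeated settings of Co

pse = sum(adjustments) / len(adjustments)            # point of subjective equality
ce = pse - standard                                  # + => overestimation of St
print(pse, ce)                                       # 110.0, +10.0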

This method is mostly used for obtaining data in the Muller-Lyer illusion experiment. The two
common constant errors that can be committed in the method of adjustment and the
Muller-Lyer experiment are:

1) Movement error
It occurs when the subject has a certain bias for one of the two movements, i.e.,
inward movement and outward movement, which unduly helps him in making the
feather-headed line equal to the arrow-headed line.
2) Space error
It occurs when the subject has a certain bias for one of the two spaces in the visual
field, viz, right and left, which helps in adjusting the feather-headed line.

MODERN PSYCHOPHYSICAL METHODS

[Refer A. K. Singh, pg no. 311]

Fechner’s Contributions, Weber’s Law, Stevens’ Power Law

[Refer A. K. Singh, pg no. 307- 310]

Signal Detection Theory (SDT)

Signal-detection theory was one of the first theories to suggest an interaction between the
physical sensation of a stimulus and cognitive processes such as decision making. In other
words, it is a common method used in psychophysical research which allows researchers to
test human abilities related to differentiating between signal and noise. Signal-detection
theory (SDT) is a framework to explain how people pick out the few important stimuli when
they are embedded in a wealth of irrelevant, distracting stimuli. SDT often is used to
measure sensitivity to a target’s presence. When we try to detect a target stimulus (signal),
there are four possible outcomes. Consider, for example, a lifeguard scanning a pool for
swimmers in trouble:

• First, in hits (also called “true positives”), the lifeguard correctly identifies the
presence of a target (i.e., somebody drowning).
• Second, in false alarms (also called “false positives”), he or she incorrectly identifies
the presence of a target that is actually absent (i.e., the lifeguard thinks somebody is
drowning who actually isn’t).
• Third, in misses (also called “false negatives”), the lifeguard fails to observe the
presence of a target (i.e., the lifeguard does not see the drowning person).
• Fourth, in correct rejections (also called “true negatives”), the lifeguard correctly
identifies the absence of a target (i.e., nobody is drowning, and he or she knows that
nobody is in trouble).

In SDT, threshold is determined by two factors as elaborated below:

a) Sensitivity measure
It depends on the intensity of the stimuli and the sensitivity of the observer. As
stimulus intensity increases, detection becomes easier and the sensitivity measure
increases.
b) Decision-making criterion (β)
It depends on the probability of the stimulus and the pay-off (reward and punishment).

This is based on top-down processing. Catch trials are used to reduce the practice effect and
to avoid the error of habituation.

Signal detection theory uses a signal detection matrix to explain the process and outcomes
of signal detection. Usually, the presence of a target is difficult to detect. Thus, we make
detection judgments based on inconclusive information, with some criterion for target
detection. The number of hits is influenced by where you place your criterion for considering
something a hit.

For example, this might occur with highly sensitive screening tests where positive results
lead to further tests. Thus, overall sensitivity to targets must reflect a flexible criterion for
declaring the detection of a signal. If the criterion for detection is too high, then the doctor
will miss illnesses (misses). If the criterion is too low, the doctor will falsely detect illnesses
that do not exist (false alarms). Sensitivity is measured in terms of hits minus false alarms.
A conservative (strict) criterion reduces false alarms at the cost of more misses, whereas a
liberal criterion increases hits at the cost of more false alarms.
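
In standard SDT practice, sensitivity is expressed as d′ (the difference between the z-transformed hit and false-alarm rates) together with a criterion measure c. A minimal Python sketch with invented example rates:

# A minimal sketch of computing the SDT sensitivity index d' and criterion c
# from hit and false-alarm rates, using the standard z-transform (inverse of
# the normal CDF). Rates here are invented example values.
from statistics import NormalDist

def sdt_measures(hit_rate, fa_rate):
    z = NormalDist().inv_cdf            # z-score (inverse normal CDF)
    d_prime = z(hit_rate) - z(fa_rate)  # sensitivity: separation of signal/noise
    c = -(z(hit_rate) + z(fa_rate)) / 2 # criterion: + = conservative, - = liberal
    return d_prime, c

print(sdt_measures(0.85, 0.20))  # roughly d' = 1.88, c = -0.10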

Signal-detection theory can be discussed in the context of attention, perception, or memory:

• attention—paying enough attention to perceive objects that are there;
• perception—perceiving faint signals that may or may not be beyond your perceptual range
(such as a very high-pitched tone);
• memory—indicating whether you have/have not been exposed to a stimulus before, such
as whether the word “champagne” appeared on a list that was to be memorized.

Researchers use measures from signal-detection theory to determine an observer’s
sensitivity to targets in various tasks.

Receiver Operating Characteristic (ROC) Curve

The ROC curve depicts data from signal detection experiments. It shows the relationship
between the probability of a hit (true positive) and of a false alarm (false positive). In this
curve, the experimenter manipulates the criterion by manipulating the pay-off.

Originally developed for detecting enemy airplanes and warships during the World War II,
the receiver operating characteristic (ROC) has been widely used in the biomedical field
since the 1970s in, for example, patient risk group classification, outcome prediction and
disease diagnosis. Today, it has become the gold standard for evaluating/comparing the
performance of a classifier(s).

A ROC curve is a two-dimensional plot that illustrates how well a classifier system works as
the discrimination cut-off value is changed over the range of the predictor variable. The x
axis or independent variable is the false positive rate for the predictive test. The y axis or
dependent variable is the true positive rate for the predictive test. Each point in ROC space
is a true positive/false positive data pair for a discrimination cut-off value of the predictive
test. If the probability distributions for the true positive and false positive are both known, a
ROC curve can be plotted from the cumulative distribution function. In most real
applications, a data sample will yield a single point in the ROC space for each choice of
discrimination cut-off. A perfect result would be the point (0, 1) indicating 0% false positives
and 100% true positives. The generation of the true positive and false positive rates requires
that we have a gold standard method for identifying true positive and true negative cases.
To better understand a ROC curve, we will need to review the contingency table or
confusion matrix. A confusion matrix (also known as an error matrix) is a contingency table
that is used for describing the performance of a classifier/classification system, when the
truth is known.
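
The following minimal Python sketch (scores and labels are invented) traces the ROC points by sweeping the cut-off across the observed predictor values and approximates the AUC with the trapezoidal rule:

# A minimal sketch of tracing a ROC curve: sweep the decision cut-off over the
# range of a predictor, compute the (false-positive rate, true-positive rate)
# pair at each cut-off, and approximate the AUC with the trapezoidal rule.
# Scores and labels are invented; 1 = truly positive case, 0 = truly negative.
def roc_points(scores, labels):
    pos = sum(labels)
    neg = len(labels) - pos   # assumes at least one case of each class
    pts = []
    for cut in sorted(set(scores)) + [float("inf")]:
        tpr = sum(1 for s, y in zip(scores, labels) if s >= cut and y == 1) / pos
        fpr = sum(1 for s, y in zip(scores, labels) if s >= cut and y == 0) / neg
        pts.append((fpr, tpr))
    return sorted(pts)

def auc(points):
    # Trapezoidal area under the sorted (fpr, tpr) points.
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,    1,   0,   0  ]
pts = roc_points(scores, labels)
print(pts, auc(pts))   # a perfect classifier would pass through (0, 1), AUC = 1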

Limitations

1. To calculate AUC, sensitivity and specificity values are summarized over all possible cut-
off values, and this can be misleading because only one cut-off value is used in making
predictions.

2. Different study populations might have different patient characteristics; a ROC model
developed using data generated from one population might not be directly transferred to
another population. A training and a validation set approach can be used to evaluate the
performance of a classifier.

3. Depending on disease prevalence and costs associated with misclassification, the optimal
classifier might vary from one situation to another.

4. ROC curves are most useful when the predictors are continuous.
MODULE 4: MEMORY

UNIT 1: ENCODING: THEORIES AND MODELS OF MEMORY: JAMES - TWO STORE MODEL,
ATKINSON & SHIFFRIN (3-STORE) - INFORMATION PROCESSING APPROACH, CRAIK,
LOCKHART & TULVING - LEVELS OF PROCESSING, ZINCHENKO - LEVELS OF RECALL.

James - Two Store Model

Early interest in a dualistic model of memory began in the late 1890s, when William James
distinguished between immediate memory, which he called primary memory, and indirect
memory, which he called secondary memory. James based much of his depiction of the
structure of memory on introspection, and he viewed secondary memory as the dark
repository of information once experienced but no longer easily accessible.

According to James, primary memory, closely related but not identical to what is now called
short-term memory (STM), never left consciousness and gave a faithful rendition of events
just perceived. Secondary memory, or long-term memory (LTM), was conceptualized as
paths, etched into the brain tissue of people but with wide individual differences. For James,
memory was dualistic in character, both transitory and permanent. However, little scientific
evidence was presented to distinguish operationally between the two systems. The
relationship between primary memory and secondary memory was described by Waugh and
Norman (1965). In their early model, an item enters primary memory and then may be held
there by rehearsal or may be forgotten. With rehearsal, the item enters secondary memory
and becomes part of the permanent memory.

James’s dualistic memory model made good sense from an introspective standpoint. It also
seems valid from the standpoint of the structural and processing features of the brain.
Later, evidence for two memory states would come from physiological studies.
Performance by animals in learning trials is poorer when the trials are followed immediately
by electroconvulsive shock. That this is the case (while earlier learning is unaffected)
suggests that transfer from primary memory to secondary memory may be interfered with
(Weiskrantz, 1966). Furthermore, there is a large body of behavioral evidence— from the
earliest experiments on memory to the most recent reports in the psychological literature—
that supports a dualistic theory. A primacy and recency effect for paired associates was
discovered by Mary Whiton Calkins, a student of William James. When a person learns a
series of items and then recalls them without attempting to keep them in order, the primacy
and recency effect is seen whereby items at the beginning (primacy) of the list and the end
(recency) of the list are recalled best. This effect is consistent with a dual memory concept.

While the primacy-recency effect is pretty robust, there is a notable exception to this called
the von Restorff effect in which a letter in the middle of a list is novel, relative to the other
list items. For example, imagine a list of 20 digits, with the letter A located in the middle of
the list. Most if not all people will remember this middle item. Because primacy and recency
effects had been known for a long time, their incorporation into a two-process model of
memory seemed a logical step. In such a model, information gathered by our sensory
system is rapidly transferred to a primary memory store and is either replaced by other
incoming information or held there by rehearsal. With a lot of other information coming in,
as in list learning, information held in STM is bumped out by new information. Take, for
example, how items from a list might be entered into STM. Since rehearsal is required to
transfer information into LTM, the first items on the list will have more rehearsal time and a
greater opportunity to be transferred. As the middle items from the list come in, they
compete with each other and bump each other out. While items at the end of the list aren’t
rehearsed as long, they are still retained in STM at the time of recall given the recency in
which they were learned. We can trace the storage capacity of STM by identifying the point
at which the recency curve begins to emerge. The number of items in that span is rarely
larger than eight, thereby lending support to a dual memory model that includes a STM
system that has limited capacity.
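
The rehearsal-buffer account sketched above can be made concrete with a toy simulation. The following Python sketch is an invented illustration, not a fitted model: a limited-capacity buffer plus probabilistic transfer to LTM yields high recall at both ends of the list (primacy and recency):

# A toy simulation of the dual-store account of the serial position curve:
# a limited-capacity rehearsal buffer (STM) holds a few items; each moment of
# rehearsal gives an item some probability of transfer to LTM; at recall, items
# still in the buffer (recency) and items in LTM (primacy) are reported.
# All parameters are invented for illustration.
import random

def recall_list(n_items=20, buffer_size=4, p_transfer=0.15):
    buffer, ltm = [], set()
    for item in range(n_items):
        if len(buffer) == buffer_size:
            buffer.pop(random.randrange(buffer_size))  # new input bumps an item
        buffer.append(item)
        for held in buffer:                            # one rehearsal sweep
            if random.random() < p_transfer:
                ltm.add(held)
    return ltm | set(buffer)   # recalled = LTM contents + items still in buffer

# Probability of recall by serial position, averaged over many simulated lists:
trials = 2000
counts = [0] * 20
for _ in range(trials):
    for item in recall_list():
        counts[item] += 1
print([round(c / trials, 2) for c in counts])  # high at both ends: primacy + recency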

Atkinson & Shiffrin (3-Store) - Information Processing Approach

Richard Atkinson and Richard Shiffrin (1968) proposed an alternative model that
conceptualized memory in terms of three memory stores. This modal model of memory
assumes that information is received, processed, and stored differently for each kind of
memory (Atkinson & Shiffrin, 1968; Waugh & Norman, 1965). The three memory stores are
sensory store, short-term store and long-term store.

• a sensory store, capable of storing relatively limited amounts of information for very
brief periods;
• a short-term store, capable of storing information for somewhat longer periods but
of relatively limited capacity as well; and
• a long-term store, of very large capacity, capable of storing information for very long
periods, perhaps even indefinitely (Richardson-Klavehn & Bjork, 2003).

The model differentiates among structures for holding information, termed stores, and the
information stored in the structures, termed memory. Today, cognitive psychologists
commonly describe the three stores as sensory memory, short-term memory, and long-term
memory. Also, Atkinson and Shiffrin were not suggesting that the three stores are distinct
physiological structures. Rather, the stores are hypothetical constructs— concepts that are
not themselves directly measurable or observable but that serve as mental models for
understanding how a psychological phenomenon works.

Atkinson and Shiffrin argued that memories in short-term memory are fragile, and they
could be lost within about 30 seconds unless they are repeated. In addition, Atkinson and
Shiffrin proposed control processes, or intentional strategies—such as rehearsal—that
people may use to improve their memory (Hassin, 2005; Raaijmakers & Shiffrin, 2002). The
original form of this model focused on the role of short-term memory in learning and
memory. The model did not explore how short-term memory is central when we perform
other cognitive tasks (Roediger et al., 2002). The Atkinson-Shiffrin model played a central
role in the growing appeal of the cognitive approach to psychology.

Atkinson and Shiffrin make an important distinction between the concepts of memory and
memory stores; they use the term “memory” to refer to the data being retained, while
“store” refers to the structural component that contains the information. Simply indicating
how long an item has been retained does not necessarily reveal where it is located in the
structure of memory. In their model, this information in the short-term store can be
transferred to the long-term store, while other information can be held for several minutes
in the short-term store and never enter the long-term store. The short-term store was
regarded as the working system, in which entering information decays and disappears
rapidly. Information in the short-term store may be in a different form than it was originally
(e.g., a word originally read by the visual system can be converted and represented
auditorially). Information contained in the long-term store was envisioned as relatively
permanent, even though it might be inaccessible because of interference of incoming
information. The function of the long-term store was to monitor stimuli in the sensory
register (and thus controlling information entering the short-term store) and to provide
storage space for information in the short-term store.

This Atkinson-Shiffrin model emphasizes the passive storage areas in which memories are
stored; but it also alludes to some control processes that govern the transfer of information
from one store to another.
Craik, Lockhart & Tulving - Levels of Processing

In 1972, Fergus Craik and Robert Lockhart wrote an article about the depth-of-processing
approach. This article became one of the most influential publications in the history of
research on memory (Roediger, Gallo, & Geraci, 2002). The levels-of-processing approach
argues that deep, meaningful kinds of information processing lead to more permanent
retention than shallow, sensory kinds of processing. (This theory is also called the depth-of-
processing approach.) The levels-of-processing approach predicts that your recall will be
relatively accurate when you use a deep level of processing. For instance, you used deep
processing when you considered a word’s meaning (e.g., whether it would fit in a sentence).
The levels-of-processing approach predicts that your recall will be relatively poor when you
use a shallow level of processing. For example, you will be less likely to recall a word when
you considered its physical appearance (e.g., whether it is typed in capital letters) or its
sound (e.g., whether it rhymes with another word).

The fundamental assumption is that retention and coding of information depend on the
kind of perceptual analysis done on the material at encoding. In other words, the major
hypothesis emerging from Craik and Lockhart’s (1972) paper was that deeper levels of
processing should produce better recall. Some kinds of processing, done at a superficial or
“shallow” level, do not lead to very good retention. Other kinds of “deeper” (more
meaningful or semantic) processing improve retention. According to the levels-of-processing
view, improvement in memory comes not from rehearsal and repetition but from greater
depth of analysis of the material. Craik and Tulving (1975) performed a typical levels-of-
processing investigation. Participants were presented with a series of questions about
particular words. Each word was preceded by a question, and participants were asked to
respond to the questions as quickly as possible; no mention was made of memory or
learning. Any learning that is not in accord with the participant’s purpose is called incidental
learning.

In one experiment, three kinds of questions were used. One kind asked the participant
whether the word was printed in capital letters. Another asked if the target word rhymed
with another word. The third kind asked if the word fit into a particular sentence (for
example, “The girl placed the _____ on the table”). The three kinds of questions were meant
to induce different kinds of processing. To answer the first kind of question, you need look
only at the typeface (physical processing). To answer the second, you need to read the word
and think about what it sounds like (acoustic processing). To answer the third, you need to
retrieve and evaluate the word’s meaning (semantic processing). Presumably, the “depth”
of the processing needed is greatest for the third kind of question and least for the first kind
of question. As predicted, Craik and Tulving (1975) found that on a surprise memory test
later, words processed semantically were remembered best, followed by words processed
acoustically. However, the experiment gave rise to an alternative explanation: Participants
spent more time answering questions about sentences than they did questions about
capital letters.
Craik and Tulving (1975) found that people were about three times as likely to recall a word
if they had originally answered questions about its meaning rather than if they had originally
answered questions about the word’s physical appearance. Numerous reviews of the
research conclude that deep processing of verbal material generally produces better recall
than shallow processing (Craik, 1999, 2006; Lockhart, 2001; Roediger & Gallo, 2001).
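
The logic of this design can be sketched in a few lines of Python. The recall values below are purely illustrative, chosen only to mirror the roughly 3:1 semantic-to-physical advantage reported above, and the rhyme word 'train' is likewise a made-up example:

```python
# Sketch of the Craik & Tulving (1975) incidental-learning design.
# Recall values are illustrative only, chosen to mirror the reported
# ~3:1 advantage of semantic (deep) over physical (shallow) processing.

conditions = {
    "Is the word in capital letters?": ("physical / shallow", 0.20),
    "Does the word rhyme with 'train'?": ("acoustic / intermediate", 0.40),
    "Does the word fit: 'The girl placed the ___ on the table'?": ("semantic / deep", 0.60),
}

# Deeper processing at encoding predicts better recall on the surprise test.
for question, (level, recall) in conditions.items():
    print(f"{level:24s} predicted recall ~ {recall:.0%}  ({question})")
```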

Deep levels of processing encourage recall because of two factors: distinctiveness and
elaboration. Distinctiveness means that a stimulus is different from other memory traces.
The second factor that operates with deep levels of processing is elaboration, which
requires rich processing in terms of meaning and interconnected concepts (Craik, 1999,
2006; Smith, 2006).

There are no distinct boundaries between one level and the next. The emphasis in this
model is on processing as the key to storage. The level at which information is stored will
depend, in large part, on how it is encoded. Moreover, the deeper the level of processing,
the higher, in general, is the probability that an item may be retrieved (Craik & Brown,
2000).

Other research demonstrates that deep processing also enhances our memory for faces. For
instance, people recognize more photos of faces if they had previously judged whether the
person looked honest, rather than judging a more superficial characteristic, such as the
width of the person’s nose (Bloom & Mudd, 1991; Sporer, 1991). People also recall faces
better if they have been instructed to pay attention to the distinctions between faces
(Mäntylä, 1997).

Craik and Lockhart (1972) viewed memory as a continuum of processes, from the “transient
products of sensory analyses to the highly durable products of semantic . . . operations”.
Baddeley (1978) presented a thorough critique of the levels-of-processing approach. First,
he argued that without a more precise and independent definition of “depth of processing,”
the usefulness of the theory was very limited. Second, he reviewed studies that showed,
under certain conditions, greater recall of information processed acoustically than
semantically. Finally, he described ways in which the modal view of memory could explain
the typical levels-of-processing findings.
Nonetheless, the levels-of-processing approach did help to reorient the thinking of memory
researchers, drawing their attention to the importance of the way material is encoded. The
approach has helped cognitive psychologists think about the ways in which people approach
learning tasks. It has reinforced the idea that the more “connections” an item has to other
pieces of information (such as retrieval cues), the easier it will be to remember, a point that
fits nicely with the idea of encoding specificity.

Zinchenko- Levels of Recall

Zinchenko (1962, 1981), a Russian psychologist, held that the deeper the level of processing
encouraged by the question, the higher the level of recall achieved.

UNIT 2: WORKING MEMORY MODELS: BADDELEY & HITCH (DECLARATIVE) & ANDERSON’S ACT* MODEL (PROCEDURAL).

Baddeley & Hitch- Working Memory Model (Declarative)

Working memory can be conceptualized as a type of workbench in which new and old
information are constantly being transformed, combined, and updated. Working memory
challenges the view that STM is simply another “box” in the head—a simple processing
station along the way to either being lost or sent on to LTM. The concept of working
memory also challenges the idea that the capacity of STM is limited to about seven items.
Baddeley argues that the span of memory is determined by the speed with which we
rehearse information. In the case of verbal material, he proposed that we have a
phonological loop that contains the phonological store and articulatory process in which we
can maintain as much information as we can rehearse in a fixed duration.

Working memory holds only the most recently activated, or conscious, portion of long-term
memory, and it moves these activated elements into and out of brief, temporary memory
storage (Dosher, 2003). Alan Baddeley has suggested an integrative model of memory
(Baddeley, 1990a, 1990b, 2007, 2009). It synthesizes the working-memory model with the
LOP framework. Essentially, he views the LOP framework as an extension of, rather than as
a replacement for, the working-memory model. Alan Baddeley and Graham Hitch (1974)
performed a series of experiments to test the modal model’s short-term store (STS). The general design was to have
participants temporarily store a number of digits (thus absorbing some of the STS storage
capacity) while simultaneously performing another task, such as reasoning or language
comprehension. These tasks were also thought to require resources from STS—specifically,
the control processes mentioned earlier. The hypothesis was that if the STS capacity is taken
up by stored digits, fewer resources are available for other tasks, so performance on other
tasks suffers.

From the results they concluded that a common system does seem to contribute to
cognitive processes such as temporarily storing information, reasoning, and comprehending
language. Filling up STM with six digits does hurt performance on a variety of cognitive
tasks, suggesting that this system is used in these tasks. However, the memory loads used,
thought to be near the limit of STM capacity, do not totally disrupt performance. Because
researchers think STM has a capacity of about seven items, plus or minus two, the six-digit
memory load should have essentially stopped any other cognitive activity. Baddeley and
Hitch (1974) therefore argued for the existence of what they called working memory (WM).
They see WM as consisting of a limited-capacity “workspace” that can be divided between
storage and control processing.

Baddeley suggested that working memory comprises five elements:

• The visuospatial sketchpad,

The visuospatial sketchpad briefly holds some visual images. It is similar to the
phonological loop but is responsible for visual and spatial tasks, which might include
remembering sizes and shapes or the speed and direction of moving objects. The
sketchpad is also involved in the planning of spatial movements such as exiting a
burning building. This sketchpad allows you to look at a complex scene and gather
visual information about objects and landmarks. It also allows you to navigate from
one location to another (Logie & Della Sala, 2005). Incidentally, the visuospatial
sketchpad has been known by a variety of different names, such as visuo-spatial
scratchpad, visuo-spatial working memory, and short-term visual memory (Cornoldi
& Vecchi, 2003; Hollingworth, 2004). The visuospatial sketchpad allows you to store
a coherent picture of both the visual appearance of the objects and their relative
positions in a scene (Cornoldi & Vecchi, 2003; Hollingworth, 2004, 2006; Logie &
Della Sala, 2005). The visuospatial sketchpad also stores visual information that you
encode from verbal stimuli (Baddeley, 2006; Pickering, 2006a). For example, when a
friend tells a story, you may find yourself visualizing the scene. The capacity of the
visuospatial sketchpad is limited.

• The phonological loop,

The phonological loop briefly holds inner speech for verbal comprehension and for
acoustic rehearsal. We use the phonological loop for a number of everyday tasks,
including sounding out new and difficult words and solving word problems. There
are two critical components of this loop. One is phonological storage, which holds
information in memory. The other is subvocal rehearsal, which is used to put the
information into memory in the first place. When subvocal rehearsal is inhibited, the
new information is not stored. This phenomenon is called articulatory suppression.
Articulatory suppression is more pronounced when the information is presented
visually versus aurally (i.e., by hearing). The amount of information that can be
manipulated within the phonological loop is limited. Thus, we can remember fewer
long words compared with short words (Baddeley, 2000b). Without this loop,
acoustic information decays after about 2 seconds. The amount of information the
loop can hold is therefore limited, and one determinant is the time it takes to
vocalize each word. Researchers also report that the relationship between
pronunciation time and recall accuracy holds true whether you actually pronounce
the words aloud or use subvocalization, pronouncing the words silently. (A brief
sketch following this list of components illustrates this time-based capacity.)

• The central executive,

The third element is a central executive, which both coordinates attentional
activities and governs responses. The central executive is critical to working memory
because it is the gating mechanism that decides what information to process further
and how to process this information. It decides what resources to allocate to
memory and related tasks, and how to allocate them. It is also involved in higher-
order reasoning and comprehension and is central to human intelligence. The
phonological loop and visuospatial sketchpad are regulated by the central executive,
which coordinates attentional activities and governs responses. The central
executive acts much like a supervisor who decides which issues deserve attention,
which will be ignored, and what to do if systems go awry.

According to the working-memory model, the central executive integrates
information from the phonological loop, the visuospatial sketchpad, the episodic
buffer, and from long-term memory. The central executive also plays a major role in
focusing attention, planning strategies, transforming information, and coordinating
behavior (Baddeley, 2001; Reuter-Lorenz & Jonides, 2007). The central executive is
therefore extremely important and complex. However, it is also the least understood
component of working memory (Baddeley, 2006; Bull & Espy, 2006). In addition, the
central executive is responsible for suppressing irrelevant information (Baddeley,
2006; Engle & Conway, 1998; Hasher et al., 2007). In your everyday activities, your
central executive helps you decide what to do next. It also helps you decide what not
to do, so that you do not become sidetracked from your primary goal.

Characteristics

1. The phonological loop and the visuospatial sketchpad both have specialized
storage systems but central executive does not store information.
2. The central executive plays a critical role in the overall functions of working
memory.
3. It decides which issues deserve attention and which should be ignored,
selects strategies, figures out how to tackle a problem, and plays an
important role when we try to solve mathematical problems.
4. It cannot make numerous decisions at the same time, and it cannot work
effectively on two simultaneous projects.
• Subsidiary “slave systems,” and

The fourth element is a number of other “subsidiary slave systems” that perform
other cognitive or perceptual tasks (Baddeley, 1989).

• The episodic buffer.


Baddeley (2000) updated his model to include the episodic buffer. The episodic
buffer is a limited-capacity system that binds information from the visuospatial
sketchpad and the phonological loop, as well as from long-term memory, into a
unitary episodic representation, under the control of the central executive. This
component integrates information from different parts of working memory—that is,
visual-spatial and phonological—so that they make sense to us. This incorporation
allows us to solve problems and re-evaluate previous experiences with more recent
knowledge.

In other words, the episodic buffer serves as a temporary storehouse where we can
gather and combine information from the phonological loop, the visuospatial
sketchpad, and long-term memory.

As Baddeley (2006) explains, his original theory had proposed that the central
executive plans and coordinates various cognitive activities. However, the theory
had also stated that the central executive did not actually store any information.
Baddeley therefore proposed the episodic buffer as the component of working
memory where auditory, visual, and spatial information can be combined with the
information from long-term memory. This arrangement helps to solve the
theoretical problem of how working memory integrates information from different
modalities (Morrison, 2005).

This episodic buffer actively manipulates information so that you can interpret an
earlier experience, solve new problems, and plan future activities. For instance,
suppose that you are thinking about an unfortunate experience that occurred
yesterday, when you unintentionally said something rude to a friend. You might
review this event and try to figure out whether your friend seemed offended;
naturally, you’ll need to access some information from your long-term memory
about your friend’s customary behavior. You’ll also need to decide whether you do
have a problem, and, if so, how you can plan to resolve the problem.
Because the episodic buffer is new, we do not have details about how it works and
how it differs from the central executive. However, Baddeley (2000a, 2006) proposes
that it has a limited capacity—just as the capacities of the phonological loop and the
visuospatial sketchpad are limited.

Furthermore, this episodic buffer is just a temporary memory system, unlike the
relatively permanent long-term memory system. Some of the material in the
episodic buffer is verbal (e.g., the specific words you used) and some is visuospatial
(e.g., your friend’s facial expression and how far apart you were standing). The
episodic buffer therefore allows you to temporarily store and integrate information
from both the phonological loop and the visuospatial sketchpad (Gathercole et al.,
2006; Styles, 2006; Towse & Hitch, 2007). This episodic buffer allows us to create a
richer, more complex representation of an event. This complex representation can
then be stored in our long-term memory.
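
As promised above, here is a minimal Python sketch of the phonological loop’s time-based capacity, assuming a rehearsal window of about 2 seconds; the pronunciation times are invented, illustrative values:

```python
# Minimal sketch of the time-based capacity of the phonological loop:
# span ~ the number of items that can be rehearsed before the first
# trace decays (roughly 2 seconds). Pronunciation times are invented.

REHEARSAL_WINDOW = 2.0  # seconds before an unrehearsed trace fades

def estimated_span(seconds_per_word: float) -> int:
    """How many words fit into one rehearsal cycle of the loop."""
    return int(REHEARSAL_WINDOW / seconds_per_word)

print(estimated_span(0.3))  # short words ("cat"): span ~ 6
print(estimated_span(0.7))  # long words ("refrigerator"): span ~ 2
# Fewer long words than short words are recalled: the word-length effect.
```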

Shortly after the working memory model was introduced, researchers concentrated on
finding out more about the phonological loop, the visuospatial sketchpad, and the nature of
the central executive using conventional psychological measures. Lately, however, cognitive
neuroscience measures have been applied to the model with considerable success. Cabeza
and Nyberg (1997) have shown that the phonological loop is related to bilateral activation of
the frontal and parietal lobes as measured by PET scans. And, in a study done by Haxby,
Ungerleider, Horwitz, Rapoport, and Grady (1995), the visuospatial sketchpad activates
different areas of the cortex. Here it was found that shorter intervals activate the occipital
and right frontal lobes while longer intervals implicate areas of the parietal and left frontal
lobes. Increasingly, observations made possible by brain-imaging technology are being
applied to models of memory, and more and more parts of the puzzle of memory are being
solved.

The revised model acknowledges that information from the systems is integrated.

Anderson’s ACT* Model (Procedural)


Collins and Loftus’s (1975) model has been superseded by more complex theories that
attempt to explain broader aspects of general knowledge (Rogers & McClelland, 2004). One
of the two major successors is Anderson’s family of ACT theories.

John Anderson et al., of Carnegie Mellon University have constructed a series of network
models called ACT-R (Anderson, 1983, 2000; Anderson & Schooler, 2000; Anderson &
Schunn, 2000; Anderson et al., 2004). ACT-R is an acronym for “Adaptive Control of
Thought-Rational”. It attempts to account for all of cognition (Anderson et al., 2005). In his
ACT model, John Anderson synthesized some of the features of serial information-
processing models and some of the features of semantic-network models. In ACT,
procedural knowledge is represented in the form of production systems. Declarative
knowledge is represented in the form of propositional networks.

This theory explains all of cognition including memory, learning, spatial cognition, language,
reasoning, and decision making. The model focuses primarily on declarative knowledge. The
network model devised by Collins and Loftus (1975) focuses on networks for individual
words. Anderson, in contrast, designed a model based on larger units of meaning. According
to Anderson (1990), the meaning of a sentence can be represented by a propositional
network, or pattern of interconnected propositions. Anderson (1985) defined a proposition
as the smallest unit of knowledge that can be judged to be either true or false. According to
the model, each of the following three
statements is a proposition:

1. Susan gave a cat to Maria.

2. The cat was white.

3. Maria is the president of the club.

These three propositions can appear by themselves, but they can also be combined into a
sentence, such as the following:

Susan gave a white cat to Maria, who is the president of the club.
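
To make the propositional format concrete, here is a minimal Python sketch that encodes the three propositions as abstract relation-argument tuples; this tuple notation is an illustrative assumption, not Anderson’s own formalism:

```python
# Propositions encoded as abstract (relation, argument, ...) tuples.
# The tuple notation is illustrative, not Anderson's own formalism.

propositions = [
    ("give", "Susan", "cat", "Maria"),   # Susan gave a cat to Maria.
    ("white", "cat"),                    # The cat was white.
    ("president-of", "Maria", "club"),   # Maria is the president of the club.
]

# Arguments shared across propositions ("cat", "Maria") are the nodes
# that link the three propositions into one propositional network.
seen, shared = set(), set()
for proposition in propositions:
    for concept in proposition[1:]:
        if concept in seen:
            shared.add(concept)
        seen.add(concept)

print(sorted(shared))  # ['Maria', 'cat']
```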

Propositions are abstract; they do not represent a specific set of words. Anderson suggests
that each of the concepts in a proposition can be represented by its own individual network.
Anderson’s model of semantic memory makes some additional proposals.

Similar to Collins and Loftus’s (1975) model, the links between nodes become stronger as
they are used more often.

Practice is vitally important in developing more extensive semantic memory (Anderson &
Schooler, 2000).

The model assumes that, at any given moment, as many as ten nodes are represented in
your working memory.

In addition, the model proposes that activation can spread.

Anderson argues that the limited capacity of working memory can restrict the spreading.
Also, if many links are activated simultaneously, each link receives relatively little activation.
As a consequence, this knowledge will be retrieved relatively slowly (Anderson, 2000).
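
The spreading-activation idea can be sketched as follows; the dictionary network and the even division of activation among links are illustrative assumptions, not Anderson’s exact quantitative model:

```python
# Toy spreading-activation sketch: a node's activation is divided evenly
# among its links, so the more links a node has, the less activation each
# neighbor receives, and that knowledge is retrieved relatively slowly.

network = {
    "cat": ["Maria", "white", "animal"],
    "Maria": ["cat", "club"],
}

def spread(source: str, activation: float = 1.0) -> dict:
    neighbors = network.get(source, [])
    if not neighbors:
        return {}
    share = activation / len(neighbors)  # activation split among links
    return {node: share for node in neighbors}

print(spread("cat"))    # three links: each neighbor gets ~0.33
print(spread("Maria"))  # two links: each neighbor gets 0.50
```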

Anderson and his colleagues are currently conducting research, using functional magnetic
resonance imaging to examine how changes in learning are reflected in selected regions of
the cortex and the subcortex (Anderson et al., 2005; Anderson et al., 2004).
Procedural Knowledge within ACT-R

Such knowledge is represented in production systems rather than in semantic networks.
Knowledge representation of procedural skills occurs in three stages: cognitive, associative,
and autonomous (Anderson, 1980).

Our progress through these stages is called proceduralization (Anderson et al., 2004;
Oellinger et al., 2008). Proceduralization is the overall process by which we transform slow,
explicit information about procedures (“knowing that”) into speedy, implicit,
implementations of procedures (“knowing how”). One means by which we make this
transformation is through composition. During this stage, we construct a
single production rule that effectively embraces two or more production rules. It thus
streamlines the number of rules required for executing the procedure. For example,
consider what happens when we learn to drive a standard-shift car. We may compose a
single procedure for what were two separate procedures. One was for pressing down on the
clutch. The other was for applying the brakes when we reach a stop sign.

These multiple processes are combined together into the single procedure of driving.
Another aspect of proceduralization is “production tuning.” It involves the two
complementary processes of generalization and discrimination. We learn to generalize
existing rules to apply them to new conditions. For example, we can generalize our use of
the clutch, the brakes, and the accelerator to a variety of standard-shift cars.

Finally, we learn to discriminate new criteria for meeting the conditions we face. For
example, if we drive a car with a different number of gears or with different positions for
the reverse gear, we must discriminate the relevant information about the new gear
positions from the irrelevant information about the old gear positions.
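
Composition can be illustrated with a small Python sketch that represents production rules as condition-action pairs; this is a toy representation, not ACT-R’s actual production syntax:

```python
# Toy sketch of composition: two production rules that share a condition
# are collapsed into a single streamlined rule. This is an illustrative
# representation, not ACT-R's actual production syntax.

rule_clutch = {"condition": "approaching stop sign", "action": ["press clutch"]}
rule_brake  = {"condition": "approaching stop sign", "action": ["apply brakes"]}

def compose(rule_a: dict, rule_b: dict) -> dict:
    """Merge two rules with the same condition into one production rule."""
    assert rule_a["condition"] == rule_b["condition"]
    return {"condition": rule_a["condition"],
            "action": rule_a["action"] + rule_b["action"]}

stop_rule = compose(rule_clutch, rule_brake)
print(stop_rule)
# {'condition': 'approaching stop sign', 'action': ['press clutch', 'apply brakes']}
```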

An alternative approach to understanding knowledge representation in humans has been to
study the human brain itself. Much of the research in psychobiology has offered evidence
that many operations of the human brain do not seem to process information step-by-step,
bit-by-bit. Rather, the human brain seems to engage in multiple processes simultaneously. It
acts on myriad bits of knowledge all at once. Such models do not necessarily contradict
step-by-step models. First, people seem likely to use both serial and parallel processing.
Second, different kinds of processes may be occurring at different levels. Thus, our brains
may be processing multiple pieces of information simultaneously. They combine into each
of the steps of which we are aware when we process information step by step.

UNIT 3: STORAGE: LONG-TERM MEMORY: FEATURES AND DISTINCTIONS OF: EPISODIC AND
SEMANTIC MEMORY, DECLARATIVE AND PROCEDURAL MEMORY, IMPLICIT AND EXPLICIT
MEMORY, AUTOBIOGRAPHICAL MEMORY, PROSPECTIVE MEMORY, FLASH BULB MEMORY.

Long-Term Memory

• Long-term memory has a large capacity; it contains our memory for experiences and
information that we have accumulated over a lifetime.
• PET studies show that the frontal area of the brain is involved in deep processing of
information, such as determining whether a word describes a living or non-living
thing.
• Some brain regions are essential in the formation of memories. These regions
include the hippocampus and the adjacent cortex and thalamus, as indicated
through the study of clinical patients who suffer damage in these areas. However,
the hippocampus itself does not provide permanent long-term storage of memories.
• Many permanent long-term memories are stored and processed in the cerebral
cortex. It is well established that sensory information is passed along to specific brain
regions. Information from the eyes and ears, for example, is passed to the visual
cortex and auditory cortex, respectively. It is likely that long-term memories for
these types of sensory experiences are also stored in or near these areas.

[Refer BSc Semester 2 notes]

Note:

The term permastore refers to the very long-term storage of information, such as
knowledge of a foreign language (Bahrick, 1984a, 1984b; Bahrick et al., 1993) and of
mathematics (Bahrick & Hall, 1991).

Features and Distinctions of: Episodic and Semantic Memory


Endel Tulving (1972) proposed two types of explicit memory, viz., semantic and episodic
memory. Semantic memory holds information that has entered our general knowledge
base. Information recalled here is generic in nature; it stores general information about
language and the world. Semantic memory is organized more on the basis of meanings and
meaning relationships among different pieces of information. Examples include arithmetic
facts, historical dates, and the past-tense forms of various verbs.

It is a mental thesaurus, organized knowledge a person possesses about words and other
verbal symbols, their meaning and referents, about relations among them, and about rules,
formulas, and algorithms for the manipulation of these symbols, concepts, and relations.
Semantic memory does not register perceptible properties of inputs, but rather cognitive
referents of input signals. (Tulving, 1993; p. 217)

Semantic memory influences most of our cognitive activities. It includes lexical or language
knowledge (e.g., “The word justice is related to the word equality”). In addition, semantic
memory includes conceptual knowledge (e.g., “A square has four sides”). Categories and
concepts are essential components of semantic memory.

A category is a set of objects that belong together. For example, the category called “fruit”
represents a certain category of food items; your cognitive system treats these objects as
being equivalent (Markman & Ross, 2003). Psychologists use the term concept to refer to
our mental representations of a category (Wisniewski, 2002). For instance, you have a
concept of “fruit,” which refers to your mental representation of the objects in that
category.

Semantic memory allows us to code the objects we encounter. Even though the objects
are not identical, you can group together a wide variety of similar objects by using a
single, one-word concept (Milton & Wills, 2004; Wisniewski, 2002; Yamauchi, 2005). This
coding process greatly reduces the storage space, because many objects can all be stored
with the same label (Sternberg & Ben-Zeev, 2001).

Concepts also allow us to make numerous inferences when we encounter new examples
from a category. Semantic memory allows us to combine similar objects into a single concept.
There are four approaches to the process of semantic memory: the feature comparison
model, the prototype approach, the exemplar approach, and network models. Most
theorists in the area of semantic memory believe that each model may be at least partly
correct. The present discussion deals with the hierarchical or network model of semantic
memory.

Episodic memory focuses on your memories for events that happened to you; it allows you
to travel backward in subjective time to reminisce about earlier episodes in your life.
Episodic memory includes your memory for an event that occurred ten years ago, as well as
a conversation you had 10 minutes ago. In other words, episodic memory stores personally
experienced events or episodes (information about our personal experiences). According to
Tulving, we use episodic memory when we learn lists of words or when we need to recall
something that occurred to us at a particular time or in a particular context. In either case,
we have personally experienced the learning as associated with a given time. Episodic
memory is merely a specialized form of semantic memory (Tulving, 1984, 1986).

Episodic memory has also been described as containing memories that are temporally
dated; the information stored has some sort of marker for when it was originally
encountered. Tulving (1972, 1983, 1989) described episodic and semantic memory as
memory systems that operate on different principles and hold onto different kinds of
information.

Distinction between semantic and episodic memory.

Semantic Memory

• Stores general knowledge.
• According to the HERA (hemispheric encoding/retrieval asymmetry) model, there is greater activation in the left pre-frontal hemisphere (posterior cerebral cortex) for tasks requiring retrieval from semantic memory (Tulving, 1989).
• Stable and permanent.
• Organization of information is based on meaning (semantics).

Episodic Memory

• Stores personally experienced events or objects.
• According to the HERA model, there is greater activation in the right pre-frontal hemisphere (anterior cerebral cortex) for episodic retrieval tasks.
• Dynamic and susceptible to forgetting.
• Organization of information is temporal.

Procedural memory, the lowest form of memory, retains connections between stimuli and
responses and is comparable to what Oakley (1981) referred to as associative memory.
Semantic memory has the additional capability of representing internal events that are not
present, while episodic memory allows the additional capability of acquiring and retaining
knowledge of personally experienced events.

A neuroscientific model called HERA (hemispheric encoding/retrieval asymmetry) attempts
to account for differences in hemispheric activation for semantic versus episodic memories.
According to this model, there is greater activation in the left than in the right prefrontal
hemisphere for tasks requiring retrieval from semantic memory (Nyberg, Cabeza, & Tulving,
1996; Tulving et al., 1994). In contrast, there is more activation in the right than in the left
prefrontal hemisphere for episodic retrieval tasks. This model, then, proposes that semantic
and episodic memories must be distinct because they draw on separate areas of the brain.
For example, if one is asked to generate verbs that are associated with nouns (e.g., “drive”
with “car”), this task requires semantic memory. It results in greater left-hemispheric activation
(Nyberg, Cabeza, & Tulving, 1996). In contrast, if people are asked to freely recall a list of
words—an episodic-memory task—they show more right hemispheric activation. Some
recent fMRI and ERP studies have not found the predicted frontal asymmetries during
encoding and retrieval (Berryhill et al., 2007; Evans & Federmeier, 2009).

Features and Distinctions of: Declarative and Procedural Memory

Declarative memory is also known as explicit memory. Declarative memory contains
knowledge, facts, information, ideas—basically, anything that can be recalled and described
in words, pictures, or symbols. Anderson (1983) believed that declarative memory stores
in words, pictures, or symbols. Anderson (1983) believed that declarative memory stores
information in networks that contain nodes. There are different types of nodes, including
those corresponding to spatial images or to abstract propositions.
In the ACT models, working memory is actually that part of declarative memory that is very
highly activated at any particular moment. The production rules also become activated
when the nodes in the declarative memory that correspond to the conditions of the
relevant production rules are activated. When production rules are executed, they can
create new nodes within declarative memory. Thus, ACT models have been described as
very “activation based” models of human cognition (Luger, 1994).

Disruption in the hippocampus appears to result in deficits in declarative memory (i.e.,
memory for pieces of information), but it does not result in deficits in procedural memory
(i.e., memory for courses of action) (Rockland, 2000).

Declarative memory also may be considered a relatively recent phenomenon. At the same
time, other memory structures may be responsible for nondeclarative forms of memory.

Entrance into long-term declarative memory may occur through a variety of processes. One
method of accomplishing this goal is by deliberately attending to information to
comprehend it. Another is by making connections or associations between the new
information and what we already know and understand. We make connections by
integrating the new data into our existing schemas of stored information. This process of
integrating new information into stored information is called consolidation. In humans, the
process of consolidating declarative information into memory can continue for many years
after the initial experience (Squire, 1986). When you learn about someone or something, for
example, you often integrate new information into your knowledge a long time after you
have acquired that knowledge. For example, you may have met a friend many years ago and
started organizing that knowledge at that time. But you still acquire new information about
that friend—sometimes surprising information—and continue to integrate this new
information into your knowledge base.

In Anderson’s view, episodic and semantic information is included in declarative memory.
Declarative representation of knowledge comes into the system in chunks, or cognitive
units, comprising such things as propositions (such as, “Beth loves Boris”), strings (such as,
“one, two, three”), or even spatial images (“A circle is above the square”). From these basic
elements new information is stored in declarative memory by means of working memory.
The retrieval of information from declarative memory into working memory resembles the
calling up of information from the permanent memory of a computer—data stored on a
hard disk in a computer are temporarily held for processing in a working memory.

In contrast, procedural memory holds information concerning action and sequences
of actions. It is a type of implicit memory or non-declarative memory. For example, when
you ride a bicycle, swim, or swing a golf club, you are thought to be drawing on your
procedural memory.

In the ACT model Anderson also posited the existence of procedural memory in individuals.
Procedural memory, the lowest form of memory, retains connections between stimuli and
responses and is comparable to what Oakley (1981) referred to as associative memory. The
Tower of Hanoi puzzle, for example, can be solved using procedural memory.

Research in the area of memory consolidation has shown that people who learned tasks
based on declarative memory (paired associates) or procedural memory (mirror tracing)
showed increased memory of the tasks if they slept during the retention interval (as
opposed to being awake during the retention interval).

The concept of production memory in ACT lies very close to procedural memory. Procedural
memory, or memory for processes, can be tested in implicit-memory tasks as well. Many of
the activities that we do every day fall under the purview of procedural memory; these can
range from brushing your teeth to writing.

In the laboratory, procedural memory is sometimes examined with the rotary pursuit task
(Gonzalez, 2008). The rotary pursuit task requires participants to maintain
contact between an L-shaped stylus and a small rotating disk (Costello, 1967). The disk is
generally the size of a nickel, less than an inch in diameter. This disk is placed on a quickly
rotating platform. The participant must track the small disk with the wand as it quickly spins
around on a platform. After learning with a specific disk and speed of rotation, participants
are asked to complete the task again, either with the same disk and the same speed or with
a new disk or speed. Verdolini-Marston and Balota (1994) noted that when a new disk or
speed is used, participants do relatively poorly. But with the same disk and speed,
participants do as well as they had after learning the task, even if they do not remember
previously completing the task.

Another task used to examine procedural memory is mirror tracing. In the mirror-tracing
task, a plate with the outline of a shape drawn on it is put behind a barrier where it cannot
be seen. Beyond the barrier in the participant’s line of sight is a mirror. When the
participant reaches around the barrier, his or her hand and the plate with the shape are
within view. Participants then take a stylus and trace the outline of the shape drawn on the
plate. When first learning this task, participants have difficulty staying on the shape.
Typically, there are many points at which the stylus leaves the outline. Moreover, it takes a
relatively long time to trace the entire shape. With practice, however, participants become
quite efficient and accurate with this task. Participants’ retention of this skill gives us a way
to study procedural memory (Rodrigue, Kennedy, & Raz, 2005). The mirror-tracing task is
also used to study the impact of sleep on procedural memory.

Connectionist models effectively explain priming effects, skill learning (procedural memory),
and several other phenomena of memory. Procedural memory is a type of non-declarative
memory.

Declarative Memory

• A memory system thought to contain knowledge, facts, information, ideas, or anything that can be recalled and described in words, pictures, or symbols.
• Explicitly represented and consciously accessible (Su, Merrill, and Peterson, 2001).
• Stored in the cerebral cortex.
• Non-REM sleep aids declarative memory.
• Also known as explicit memory.
• Consists of information and knowledge of the world, such as the name of a favorite aunt, the location of the nearest pizza parlor, and the meaning of words, plus a vast amount of other information.

Procedural Memory

• A memory system thought to contain information concerning action and sequences of actions—for example, one’s knowledge of how to ride a bicycle or swing a golf club.
• Implicitly represented and not consciously accessible (Su, Merrill, and Peterson, 2001).
• Stored in the cerebellum.
• REM sleep aids procedural memory.
• Part of implicit memory.
• Deals with motor skills, such as handwriting, typing skill, and (probably) our ability to ride a bicycle.

Features and Distinctions of: Implicit and Explicit Memory

Explicit memories are things that are consciously recollected. For example, in recalling your
last vacation, you explicitly refer to a specific time (say, last summer) and a specific event or
series of events. In other words, explicit memory relies largely on the retrieval of conscious
experiences and is cued using recognition and recall tasks. On an explicit memory task, the
researcher directly instructs participants to remember information; the participants are
conscious that their memory is being tested, and the test requires them to intentionally
retrieve some information they previously learned (Roediger & Amir, 2005). Semantic and
episodic memory are two types of explicit memory.

The most common explicit memory tests are recall and recognition. Research demonstrates
that people with anterograde amnesia often recall almost nothing on tests of explicit
memory such as recall or recognition. Explicit memory is typically impaired in amnesia
(Amnesia is severe loss of explicit memory). In addition, the hippocampus and some related
nearby cerebral structures appear to be important for explicit memory of experiences and
other declarative information. The hippocampus also seems to play a key role in the
encoding of declarative information (Manns & Eichenbaum, 2006; Thompson, 2000).

Implicit memory, by contrast, is memory that is not deliberate or conscious but shows
evidence of prior learning and storage. In other words, implicit memory is expressed in the
form of facilitating performance and does not require conscious recollection. Procedural
and emotional memory are two types of implicit memory. Schacter (1996) poetically
described implicit memory as “a subterranean world of nonconscious memory and
perception, normally concealed from the conscious mind”. Laboratory work on implicit
memory has been mainly concerned with a phenomenon known as repetition priming.
Repetition priming is priming of a somewhat different sort: facilitation of the cognitive
processing of information after a recent exposure to that same information (Schacter, 1987,
p. 506). For example, participants might be given a very brief exposure (of 30 milliseconds or
less) to a word (such as button) and soon afterward be given a new word completion task
(for example, “Fill in the blanks to create the English word that comes to mind: _U _T O_”).

Another method to examine implicit memory is by measuring tasks involving procedural
knowledge. Procedural memory, or memory for processes, can be tested in implicit-memory
tasks as well. Examples of procedural memory include the procedures involved in riding a
bike or driving a car. In the laboratory, procedural memory is sometimes examined with the
rotary pursuit task (Gonzalez, 2008). Another task used to examine procedural memory is
mirror tracing.

The Process Dissociation Framework

Jacoby and his colleagues (Hay & Jacoby, 1996; Jacoby, 1991, 1998; Toth, Lindsay, & Jacoby,
1992; Toth, Reingold, & Jacoby, 1994) took issue with the idea that implicit memory and
explicit memory represent two distinct memory systems and argued instead for what Jacoby
called the process dissociation framework. Jacoby (1991) preferred to think about memory
tasks as calling on two different processes: intentional and automatic ones.

Performance on direct [that is, explicit] tests of memory typically requires that people
intentionally recollect a past episode, whereas facilitation on indirect [implicit] tests of
memory is not necessarily accompanied by either intention to remember or awareness of
doing so. This difference between the two types of test can be described in terms of the
contrast between consciously controlled and automatic processing.

The model assumes that implicit and explicit memory both have a role in virtually every
response. Thus, only one task is needed to measure both these processes.

Explicit Memory

• Memory recovery or recognition based on conscious search processes, as one might use in answering a direct question.
• Conscious activation.
• Explicit memory decreases with age; there are differences in explicit memory over the life span. Infants and older adults often tend to have relatively poor explicit memory.
• Also known as declarative memory.
• Explicit memory tasks require conceptual processing (in other words, drawing on information in memory and the knowledge base).
• Two types: episodic and semantic memory.

Implicit Memory

• A type of memory retrieval in which recall is enhanced by the presentation of a cue or prime, despite having no conscious awareness of the connection between the prime and the to-be-recalled item.
• Non-conscious activation.
• Implicit memory does not decrease with age and does not show the same changes over the life span; older adults show implicit memory comparable to that of young adults.
• Also known as non-declarative memory (relies on procedural memory).
• Implicit memory tasks require perceptual processing (that is, interpreting sensory information in a meaningful way).
• Two types: procedural and emotional memory.
Autobiographical Memory, Prospective Memory, Flash Bulb Memory

Autobiographical and flashbulb memory [Refer BSc Semester 2 notes]

Prospective memory is memory for things we need to do or remember in the future. For
example, we may need to remember to call someone, to buy cereal at the supermarket, or
to finish a homework assignment due the next day. We use a number of strategies to
improve prospective memory. Examples are keeping a to-do list, asking someone to remind
us to do something, or tying a string around our finger to remind us that we need to do
something. Research suggests that having to do something regularly on a certain day does
not necessarily improve prospective memory for doing that thing. However, being
monetarily reinforced for doing the thing does tend to improve prospective memory
(Meacham, 1982; Meacham & Singer, 1977).

A prospective-memory task has two components. First, you must establish that you intend
to accomplish a particular task at some future time. Second, at that future time, you must
fulfill your intention (Einstein & McDaniel, 2004; Marsh et al., 1998; McDaniel & Einstein,
2000, 2007). According to surveys, people say that they are more likely to forget a
prospective memory task than any other memory task (Einstein & McDaniel, 2004).
Occasionally, the primary challenge is to remember the content of the action (Schaefer &
Laing, 2000). However, most of the time, the primary challenge is simply to remember to
perform an action in the future (McDaniel & Einstein, 2007).

Prospective memory, like retrospective memory, is subject to decline as we age. Over the
years, we retain more of our prospective memory than of our retrospective memory. This
retention is likely the result of the use of the external cues and strategies that can be used
to bolster prospective memory. In the laboratory, older adults show a decline in prospective
memory; however, outside the laboratory they show better performance than young adults.
This difference may be due to greater reliance on strategies to aid in remembering as we
age (Henry et al., 2004).

Most of the research on prospective memory is reasonably high in ecological validity. One
intriguing component of prospective memory is absentmindedness. Most people do not
publicly reveal their absentminded mistakes. You may therefore think that you are the only
person who forgets to pick up a quart of milk on your way home from school, who dials
Chris’s phone number when you want to speak to Alex, or who fails to include an important
attachment when sending an e-mail.

One problem is that the typical prospective-memory task represents a divided attention
situation. You must focus on your ongoing activity, as well as on the task you need to
remember in the future (Marsh et al., 2000; McDaniel & Einstein, 2000). Absentminded
behavior is especially likely when the intended action causes you to disrupt a customary
schema. That is, you have a customary schema or habit that you usually perform, which is
Action A (for example, driving from your college to your home). You also have a prospective-
memory task that you must perform on this specific occasion, which is Action B (for
example, stopping at the grocery store). In cases like this, your longstanding habit
dominates the more fragile prospective memory, and you fall victim to absentminded
behavior (Hay & Jacoby, 1996).

Prospective-memory errors are more likely in highly familiar surroundings when you are
performing tasks automatically (Schacter, 2001). Errors are also more likely if you are
preoccupied or distracted, or if you are feeling time pressure. In most cases,
absentmindedness is simply irritating. However, sometimes these slips can produce airplane
collisions, industrial accidents, and other disasters that influence the lives of hundreds of
individuals (Finstad et al., 2006).

Methods to Improve Prospective Memory

1. External memory aids are especially helpful on prospective-memory tasks (McDaniel
& Einstein, 2007). An external memory aid is defined as any device, external to
yourself, that facilitates your memory in some way (Herrmann et al., 2002). Some
examples of external memory aids include a shopping list, a rubber band around
your wrist, asking someone else to remind you to do something, and the ring of an
alarm clock, to remind you to make an important phone call.
2. Informal external mnemonics also aid in prospective memory. For example: when
we want to remember to bring a book to class, placing it in a location where we will
have to confront the book on the way to class. Placing letters to be mailed in a
conspicuous position on the dashboard of the car. Using coloured sticky notes in the
room.
3. Forming a vivid, interactive mental image of the action or thing to be remembered.
For example, a vivid, interactive mental image of a quart of milk might help you
avoid driving past the grocery store in an absentminded fashion (Einstein &
McDaniel, 2004).

Prospective Memory

• Memory for things we need to do or remember in the future.
• Memory lapses are most common here.
• Planning plays a major role.

Retrospective Memory

• Memory system that recalls information learned in the past.
• Memory lapses are not as common as in prospective memory.
• Remembering plays a major role.

Memory is more accurate for both kinds of memory tasks if you use both distinctive
encoding and effective retrieval cues. Furthermore, both kinds of memory are less accurate
when you have a long delay, filled with irrelevant activities, prior to retrieval (Einstein &
McDaniel, 2004; Roediger, 1996). Finally, prospective memory relies on regions of the
frontal lobe that also play a role in retrospective memory (Einstein & McDaniel, 2004; West
et al., 2000).

Note: In a cross-sectional study, Lars-Göran Nilsson (2003) found that short-term memory,
semantic memory, and procedural memory performance was not related to normal aging;
however, a decrease in episodic memory was reported.

UNIT 4: RETRIEVAL: RECALL, RECOGNITION, RECONSTRUCTION, CONFABULATION, ILLUSORY MEMORY, MEMORY AS AN ACTIVE PROCESS, RELIABILITY OF EYE WITNESS TESTIMONY.

Recall, Recognition and Reconstruction

[Refer BSc Semester 2 notes and General Psychology books]

Confabulation

People with damage to their frontal lobes often engage in a process called confabulation,
which involves making outlandish false statements. One characteristic of confabulation is
that the person believes that even the most impossible-sounding statements are true. It has
been suggested that this may tell us something about the role of the frontal lobes in normal
memory. In other words, the failure to adequately check and validly decide whether
something is a genuine memory or an invention shows up particularly clearly in
confabulation, sometimes shown by patients with frontal lobe damage, whereby they come
up with a fantastic and totally false recollection.

In one case for example, a patient “recollected” writing a letter to his aunt announcing the
death of his brother, who in fact was still alive and visited him regularly. When confronted
with this apparent paradox the patient decided that he had in fact had two brothers with
the same name, one of whom had died (Baddeley & Wilson, 1986). Such confabulations are
often held with great conviction, despite their implausibility. The same patient on one
occasion woke up and turned to his wife asking her “Why do you keep telling people we are
married?”. “We are married,” she replied. “We have three children!”. “That does not
necessarily mean that we are married,” retorted her husband. She then proceeded to show
him the wedding photographs to which he responded, “That chap does look like me, but it
isn’t me, because I’m not married.”

Illusory Memory

Memory as an Active Process

Reliability of Eye Witness Testimony

The legal testimony of an eyewitness tends to be very convincing to a jury, but is often
highly unreliable, despite the confidence of the witness. This is particularly problematic
when detailed recall is required, as we often do not retain detail, even of objects or events
that we encounter many times. Recall is also readily distorted by leading questions, or by
false information introduced during the process of cross examination. In recent years,
largely due to the initial work of Elizabeth Loftus, there has been a much wider recognition
of these problems, and improved interview techniques are continually being developed.
Face recognition is a particularly crucial aspect of legal psychology; even when a subject has
seen and remembered a face clearly, it is difficult to convey the information. A number of
techniques for constructing representations of faces have been developed, but their value is
still limited, and the potential for false identification remains high. Line-ups form an
important part of the criminal procedure. They themselves are readily open to manipulation
and error, but again some of the more blatant mistakes can be avoided by carefully
following appropriate procedures.

There are serious potential problems of wrongful conviction when using eyewitness
testimony as the sole, or even the primary, basis for convicting accused people of crimes
(Loftus & Ketcham, 1991; Loftus, Miller, & Burns, 1987; Wells & Loftus, 1984). Moreover,
eyewitness testimony is often a powerful determinant of whether a jury will convict an
accused person. The effect is particularly pronounced if eyewitnesses appear highly
confident of their testimony. This is true even if the eyewitnesses can provide few
perceptual details or offer apparently conflicting responses. People sometimes even think
they remember things simply because they have imagined or thought about them (Garry &
Loftus, 1994). It has been estimated that as many as 10,000 people per year may be
convicted wrongfully on the basis of mistaken eyewitness testimony (Cutler & Penrod, 1995;
Loftus & Ketcham, 1991). In general, people are remarkably susceptible to mistakes in
eyewitness testimony. They are generally prone to imagine that they have seen things they
have not seen (Loftus, 1998).

Some of the strongest evidence for the constructive nature of memory has been obtained
by those who have studied the validity of eyewitness testimony. In a now-classic study,
participants saw a series of 30 slides in which a red Datsun drove down a street, stopped at
a stop sign, turned right, and then appeared to knock down a pedestrian crossing at a
crosswalk (Loftus, Miller, & Burns, 1978). Afterwards, participants were asked a series of 20
questions, one of which referred either to correct information (the stop sign) or incorrect
information (a yield sign instead of the stop sign). In other words, the information in the
question given this second group was inconsistent with what the participants had seen.
Later, after engaging in an unrelated activity, all participants were shown two slides and
asked which they had seen. One had a stop sign, the other had a yield sign. Accuracy on this
task was 34% better for participants who had received the consistent question (stop sign
question) than for participants who had received the inconsistent question (yield sign
question).

Loftus’ eyewitness testimony experiment and other experiments (e.g., Loftus, 1975, 1977)
have shown people’s great susceptibility to distortion in eyewitness accounts.

[Refer Matlin, pg no: 151- 159]

UNIT 5: FORGETTING: DETAILED DISCUSSION OF: INTERFERENCE, DECAY, ORGANIC/BIOLOGICAL CAUSES, ENCODING FAILURE, FAILURE OF RECONSTRUCTION, MOTIVATED FORGETTING

[Refer BSc Semester 2 notes & Themes and Variations, pg no: 239- 244]
MODULE 5: COGNITION

UNIT 1: ELEMENTS OF THOUGHT: CONCEPTS, PROPOSITIONS, MENTAL IMAGERY. BRIEF DISCUSSION OF VARIOUS THEORIES OF CONCEPT FORMATION AND MENTAL IMAGERY (ANALOG AND PROPOSITIONAL CODING)

A concept is an idea about something that provides a means of understanding the world.
Medin (1989) defined a concept as “an idea that includes all that is characteristically
associated with it”. In other words, a concept is a mental representation of some object,
event, or pattern that has stored in it much of the knowledge typically thought relevant to
that object, event, or pattern. It is the fundamental unit of symbolic knowledge (knowledge
of the correspondence between symbols and their meanings; for example, that the symbol
“3” means three). Often, a concept may be captured in a single word, such as apple. Each
concept in turn relates to other concepts, such as apple, which relates to redness,
roundness, or fruit. Concepts are dynamic.

Concepts help us establish order in our knowledge base (Medin & Smith, 1984). Concepts
also allow us to categorize, giving us mental “buckets” in which to sort the things we
encounter, letting us treat new, never-before-encountered things in the same way we treat
familiar things that we perceive to be in the same set (Neisser, 1987).

Concepts appear to have a basic level (sometimes termed a natural level) of specificity, a
level within a hierarchy that is preferred to other levels (Medin, Proffitt, & Schwartz, 2000;
Rosch, 1978). Suppose I show you a red, roundish edible object that has a stem and that
came from a tree. You might characterize it as a fruit, an apple, a delicious apple, a Red
Delicious apple, and so on. Most people, however, would characterize the object as an
apple. The basic, preferred level is apple. In general, the basic level is neither the most
abstract nor the most specific. This basic level can be manipulated by context or expertise
(Tanaka & Taylor, 1991).
Concepts are also used in other areas like computer science. Developers try to develop
algorithms that define “spam” so that email programs can filter out unwanted messages
and your mailbox is not flooded with them.
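
In that spirit, here is a toy Python sketch of “spam” treated as a concept defined by features; the keywords and threshold are invented for illustration, and real filters use far richer models:

```python
# Toy sketch of "spam" as a concept defined by features. The cue words
# and threshold are invented; real filters use far richer models.

SPAM_CUES = {"free", "winner", "prize", "click here", "act now"}

def looks_like_spam(message: str, threshold: int = 2) -> bool:
    text = message.lower()
    hits = sum(cue in text for cue in SPAM_CUES)  # count matching cues
    return hits >= threshold

print(looks_like_spam("You are a WINNER! Click here for a FREE prize"))  # True
print(looks_like_spam("Lecture notes for semester 1 attached"))          # False
```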

Classical View of Concepts

The classical view of concepts was the dominant view in psychology up until the 1970s and
dates back to Aristotle (Smith & Medin, 1981). This proposal is organized around the belief
that all examples or instances of a concept share fundamental characteristics, or features
(Medin, 1989). In particular, the classical view of concepts holds that the features
represented are individually necessary and collectively sufficient (Medin, 1989). To say a
feature is individually necessary is to say that each example must have the feature if it is to
be regarded as a member of the concept. For example, “has three sides” is a necessary
feature of the concept triangle; things that do not have three sides are automatically
disqualified from being triangles. To say that a set of features is collectively sufficient is to
say that anything with each feature in the set is automatically an instance of the concept.
For example, the set of features “has three sides” and “closed, geometric figure” is sufficient
to specify a triangle; anything that has both is a triangle.

The classical view of concepts has several implications. First, it assumes that concepts
mentally represent lists of features. That is, concepts are not representations of specific
examples but rather abstractions containing information about properties and
characteristics that all examples must have. Second, it assumes that membership in a
category is clear-cut: Either something has all the necessary and sufficient features (in which
case it is a member of the category), or it lacks one or more of the features (in which case it
is not a member). Third, it implies that all members within a category are created equal:
There is no such thing as a “better” or “worse” triangle.

Work by Eleanor Rosch and colleagues (Rosch, 1973; Rosch & Mervis, 1975) confronted and
severely weakened the attraction of the classical view. Rosch found that people judged
different members of a category as varying in “goodness.” For instance, most people in
North America consider a robin and a sparrow very good examples of a bird but find other
examples, such as chickens, penguins, and ostriches, not as good. The classical view holds
that membership in a category is all-or-none: Either an instance (such as robin or ostrich)
belongs to a category, or it doesn’t. This view has no way to explain people’s intuitions that
some birds are “birdier” than others.

A category is a concept that functions to organize or point out aspects of equivalence among other concepts, based on common features or similarity to a prototype.
word apple can act as a category, as in a collection of different kinds of apples. But it also
can act as a concept within the category fruit. Concepts and categories can be divided in
various ways. One commonly used distinction is between natural categories and artifact
categories (Kalenine et al., 2009; Medin, Lynch, & Solomon, 2000). Natural categories are
groupings that occur naturally in the world, like birds or trees. Artifact categories are
groupings that are designed or invented by humans to serve particular purposes or
functions. Examples of artifact categories are automobiles and kitchen appliances. Objects seem to be assigned to natural and artifact categories at about the same speed (VanRullen & Thorpe, 2001). Both kinds of category are relatively stable, and people tend to agree on the criteria for membership in them.

Some categories are created just for the moment or for a specific purpose, for example,
“things you can write on.” These categories are called ad hoc categories (Barsalou, 1983;
Little, Lewandowsky, & Heit, 2006). They are described not in words but rather in phrases.
Their content varies, depending on the context. Categorization also allows us to make
predictions and act accordingly. If I see a four-legged creature with a tail coming toward me,
my classification of it as either a dog or a wolf has implications for whether I’ll want to call
to it, run away, pet it, or call for help.

A proposition is an assertion that may be either true or false. Anderson and his co-authors define a proposition as the smallest unit of knowledge that can be judged either true or false. For instance, the phrase white cat does not qualify as a proposition because we cannot determine whether it is true or false. Propositions allow knowledge to be stored as abstract concepts.
Propositions may be used to describe any kind of relationship. Examples of relationships
include actions of one thing on another, attributes of a thing, positions of a thing, class
membership of a thing, and so on. The key idea is that the propositional form of mental
representation is neither in words nor in images. Rather, it is in an abstract form
representing the underlying meanings of knowledge. Thus, a proposition for a sentence
would not retain the acoustic or visual properties of the words. Similarly, a proposition for a
picture would not retain the exact perceptual form of the picture (Clark & Chase, 1972).

According to the propositional view (Clark & Chase, 1972), both images [e.g., of the cat and
the table in Figure 7.3(a)] and verbal statements are mentally represented in terms of their
deep meanings, and not as specific images or words. That is, they are represented as
propositions. According to propositional theory, pictorial and verbal information are
encoded and stored as propositions. Then, when we wish to retrieve the information from
storage, the propositional representation is retrieved. From it, our minds re-create the
verbal or the imaginal code relatively accurately.

Some evidence suggests that these representations need not be exclusive. People seem to
be able to employ both types of representations to increase their performance on cognitive
tests (Talasli, 1990).

Propositional theory suggests that we do not store mental representations in the form of
images or mere words. We may experience our mental representations as images, but these
images are epiphenomena—secondary and derivative phenomena that occur as a result of
other more basic cognitive processes. According to propositional theory, our mental
representations (sometimes called “mentalese”) more closely resemble the abstract form of
a proposition.

Mental Imagery is the mental representation of things that are not currently seen or sensed
by the sense organs (Moulton & Kosslyn, 2009; Thomas, 2003). In our minds we often have
images for objects, events, and settings. Mental imagery even can represent things that you
have never experienced. For example, imagine what it would be like to travel down the
Amazon River. Mental images even may represent things that do not exist at all outside the
mind of the person creating the image.

Imagery may involve mental representations in any of the sensory modalities, such as
hearing, smell, or taste. Nonetheless, most research on mental imagery in cognitive
psychology has focused on visual imagery, such as representations of objects or settings
that are not presently visible to the eyes. When students kept a diary of their mental
images, the students reported many more visual images than auditory, smell, touch, or taste
images (Kosslyn et al., 1990). Most of us are more aware of visual imagery than of other
forms of imagery. We use visual images to solve problems and to answer questions
involving objects (Kosslyn & Rabin, 1999; Kosslyn, Thompson & Ganis, 2006).

Many psychologists outside of cognitive psychology are interested in applications of mental imagery to other fields in psychology. Such applications include using guided-imagery techniques for controlling pain, for strengthening immune responses, and otherwise promoting health. Research also indicates that the use of mental images can help to improve memory. In the case of persons with Down syndrome, the use of mental images in conjunction with hearing a story improved memory for the material as compared with just hearing the story (de la Iglesia, Buceta, & Campos, 2005; Kihara & Yoshikawa, 2001). Mental imagery also is used in other fields such as occupational therapy, in which patients with brain damage train themselves to complete complex tasks.

Dual Code Theory

According to dual-code theory, we use both pictorial and verbal codes for representing
information (Paivio, 1969, 1971) in our minds. These two codes organize information into
knowledge that can be acted on, stored somehow, and later retrieved for subsequent use.
According to Paivio, mental images are analog codes. Analog codes resemble the objects
they are representing. For example, trees and rivers might be represented by analog codes.
Just as the movements of the hands on an analog clock are analogous to the passage of
time, the mental images we form in our minds are analogous to the physical stimuli we
observe.
In contrast, our mental representations for words chiefly are represented in a symbolic
code. A symbolic code is a form of knowledge representation that has been chosen
arbitrarily to stand for something that does not perceptually resemble what is being
represented. Just as a digital watch uses arbitrary symbols (typically, numerals) to represent
the passage of time, our minds use arbitrary symbols (words and combinations of words) to
represent many ideas. For example, the sand in an hourglass can likewise represent the flow of time.

Paivio, consistent with his dual-code theory, noted that verbal information seems to be
processed differently than pictorial information. For example, in one study, participants
were shown both a rapid sequence of pictures and a sequence of words (Paivio, 1969). They
then were asked to recall the words or the pictures in one of two ways. One way was at
random, so that they recalled as many items as possible, regardless of the order in which
the items were presented. The other way was in the correct sequence.

Participants more easily recalled the pictures when they were allowed to do so in any order.
But they more readily recalled the sequence in which the words were presented than the
sequence for the pictures, which suggests the possibility of two different systems for recall
of words versus pictures.

UNIT 2: MODELS OF KNOWLEDGE ORGANIZATION (IN SEMANTIC MEMORY): PROTOTYPE, FEATURE COMPARISON, HIERARCHICAL MODEL, CONNECTIONIST MODELS (PARALLEL DISTRIBUTED PROCESSING) OF MCCLELLAND, RUMELHART, & HINTON, NETWORK MODELS – QUILLIAN, SPREADING ACTIVATION – COLLINS & LOFTUS, SCHEMAS.

Prototype Model

Prototype theory groups things together not by their defining features but rather by their
similarity to an averaged model of the category. According to a theory proposed by Eleanor
Rosch, we organize each category on the basis of a prototype. A prototype is the item that is most representative of a category. According to the prototype approach, an individual decides whether an item belongs to a category by comparing it with the prototype; if the item is similar to the prototype, it is included in the category. A prototype is an abstract, idealized example.
Rosch (1973) also emphasizes that members of a category differ in their prototypicality, or
degree to which they are prototypical. A robin and a sparrow are very prototypical birds,
whereas ostriches and penguins are non-prototypes.
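
As a rough sketch (not from the text), prototype-based categorization can be modelled by averaging feature values over known category members and judging a new item by its closeness to that average. The feature values below are invented toy numbers:

```python
def prototype(examples):
    """Average each feature over the category's known examples."""
    keys = examples[0].keys()
    return {k: sum(e[k] for e in examples) / len(examples) for k in keys}

def similarity(item, proto):
    """Higher (less negative) when the item lies near the prototype."""
    return -sum(abs(item[k] - proto[k]) for k in proto)

birds = [{"size": 0.2, "flies": 1.0}, {"size": 0.3, "flies": 1.0}]
bird_prototype = prototype(birds)           # {'size': 0.25, 'flies': 1.0}

robin   = {"size": 0.25, "flies": 1.0}      # very prototypical
penguin = {"size": 0.60, "flies": 0.0}      # non-prototypical
print(similarity(robin, bird_prototype))    # 0.0: matches the prototype
print(similarity(penguin, bird_prototype))  # -1.35: a much "worse" bird
```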

The prototype approach represents a different perspective from the feature comparison
model that we just examined. According to the feature comparison model, an item belongs
to a category as long as it possesses the necessary and sufficient features (Markman, 1999).

Characteristics of prototype

• Prototypes are supplied as examples of a category.
• Prototypes are judged more quickly after semantic priming.
• Prototypes share attributes in a family resemblance category.

According to Eleanor Rosch’s prototype theory, there are various levels of categorization in
semantic memory. An object can be categorized at several different levels. Some category
levels are called superordinate-level categories, which means they are higher-level or more
general categories. “Furniture,” “animal,” and “tool” are all examples of super ordinate level
categories. Basic-level categories are moderately specific. “Chair,” “dog,” and “screwdriver”
are examples of basic-level categories. Finally, subordinate-level categories refer to lower-
level or more specific categories. “Desk chair,” “collie,” and “Phillips screwdriver” are
examples of subordinate categories.

One advantage of the prototype approach is that it can account for our ability to form
concepts for groups that are loosely structured. For example, we can create a concept for
stimuli that merely share a family resemblance, when the members of a category have no
single characteristic in common. Another advantage of the prototype approach is that it can
be applied to social relationships, as well as inanimate objects and non-social categories
(Fehr, 2005).

However, an ideal model of semantic memory must also acknowledge that concepts can be
unstable and variable. Another problem with the prototype approach is that we often do
store specific information about individual examples of a category. An ideal model of
semantic memory would therefore need to include a mechanism for storing this specific
information, as well as abstract prototypes (Barsalou, 1990, 1992). An additional problem is
that the prototype approach may operate well when we consider the general population,
but not when we examine experts in a particular discipline.

Feature Comparison Model

One logical way to organize semantic memory would be in terms of lists of features.
According to an early theory, called the feature comparison model, concepts are stored in
memory according to a list of necessary features or characteristics. People use a decision
process to make judgments about these concepts (Smith et al., 1974). It accounts for the
typicality effect.

Smith, Shoben, and Rips (1974) proposed one alternative to the hierarchical semantic
network model, called a feature comparison model of semantic memory. The assumption
behind this model is that the meaning of any word or concept consists of a set of elements
called features.

Smith and his co-authors (1974) propose that the features used in this model are either
defining features or characteristic features. Defining features are those attributes that are
necessary to the meaning of the item. For example, the defining features of a robin include
that it is living and has feathers and a red breast. Characteristic features are those attributes
that are merely descriptive but are not essential. For example, the characteristic features of
a robin include that it flies, perches in trees, is not domesticated, and is small in size. In
other words, features come in two types: defining, meaning that the feature must be
present in every example of the concept, and characteristic, meaning the feature is usually,
but not necessarily, present.

In the Smith et al. (1974) model, the verification of sentences such as “A robin is a bird” is
carried out in two stages. In the first stage, the feature lists (containing both the defining
and the characteristic features) for the two terms are accessed, and a quick scan and
comparison are performed. If the two lists show a great deal of overlap, the response “true”
is made very quickly. If the overlap is very small, then the response “false” is made, also very
quickly. If the degree of overlap in the two feature lists is neither extremely high nor
extremely low, then a second stage of processing occurs. In this stage, a comparison is made
between the sets of defining features only. If the lists match, the person responds “true”; if
the lists do not match, the person responds “false.”
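
A minimal sketch of this two-stage decision process (with invented feature lists and overlap cut-offs) might look as follows:

```python
def overlap(a: set, b: set) -> float:
    """Proportion of shared features across both lists."""
    return len(a & b) / len(a | b)

def verify(instance: dict, category: dict, high=0.6, low=0.2):
    """Stage 1 compares all features; stage 2, reached only when
    overlap is intermediate, compares defining features alone."""
    all_i = instance["defining"] | instance["characteristic"]
    all_c = category["defining"] | category["characteristic"]
    first_pass = overlap(all_i, all_c)
    if first_pass >= high:
        return True, "fast"    # stage 1: quick 'true'
    if first_pass <= low:
        return False, "fast"   # stage 1: quick 'false'
    return category["defining"] <= instance["defining"], "slow"

robin = {"defining": {"living", "feathered"},
         "characteristic": {"flies", "small", "red breast"}}
bird  = {"defining": {"living", "feathered"},
         "characteristic": {"flies", "small"}}
print(verify(robin, bird))     # (True, 'fast'): a typical member verifies quickly
```

The same mechanism yields the typicality effect: an atypical member such as a turkey shares fewer characteristic features with "bird," so its overlap falls in the intermediate range and the slower second stage is needed.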

The sentence verification technique is one of the major tools used to explore the feature
comparison model. One common finding in research using the sentence verification
technique is the typicality effect. In the typicality effect, people reach decisions faster when
an item is a typical member of a category, rather than an unusual member. Sentences such
as “A robin is a bird” are verified more quickly than sentences such as “A turkey is a bird”
because robins, being more typical examples of birds, are thought to share more
characteristic features with “bird” than do turkeys. The feature comparison model also
explains fast rejections of false sentences, such as “A table is a fruit.” In this case, the list of
features for “table” and the list for “fruit” presumably share very few entries.

The feature comparison model also provides an explanation for a finding known as the
category size effect (Landauer & Meyer, 1972). This term refers to the fact that if one term is
a subcategory of another term, people will generally be faster to verify the sentence with
the smaller category. That is, people are faster to verify the sentence “A collie is a dog” than
to verify “A collie is an animal,” because the set of dogs is part of the set of animals. The
feature comparison model explains this effect as follows. It assumes that as categories grow
larger (for example, from robin, to bird, to animal, to living thing), they also become more
abstract. With increased abstractness, there are fewer defining features. Thus in the first
stage of processing there is less overlap between the feature list of a term and the feature
list of an abstract category.
The model can also explain how “hedges” such as “A bat is sort of like a bird” are processed.
Most of us know that even though bats fly and eat insects, they are really mammals. The
feature comparison model explains that the processing of hedges consists of a comparison
of the characteristic features but not the defining features. Because bats share some
characteristic features with birds (namely, flying and eating insects), we agree they are “sort
of like” birds.
Research on another aspect of the feature comparison model clearly contradicts this
approach. Specifically, a major problem with the feature comparison model is that very few
of the concepts we use in everyday life can be captured by a specific list of necessary,
defining features.

Another problem with the feature comparison model is its assumption that the individual
features are independent of one another. However, many features are correlated for the
concepts we encounter in nature. Finally, the feature comparison model does not explain
how the members of categories are related to one another (Barsalou, 1992).

The loss of semantic memory leads to a condition called semantic dementia; semantic memory, then, is the memory system that retains information about facts.

Hierarchical Model

Storing mental representations and templates for every stimulus may overload our database
of knowledge. One way to conserve memory space would be to try to avoid storing
redundant information wherever possible. Rather than storing the information with the
mental representation it is better to store it once, at the higher-level representation. This
illustrates the principle of cognitive economy: Properties and facts are stored at the highest
level possible. To recover such information, an individual uses inference. Information stored at one
level of the hierarchy is not repeated at other levels. A fact is stored at the highest level to
which it applies. For example, the fact that birds breathe is stored in the ANIMAL category,
not the BIRD category.

The idea that information is stored in categories was studied in detail by Collins and Quillian
(1969). They proposed a hierarchical model of semantic memory in which concepts or words
were nodes. They tested the idea that semantic memory is analogous to a network of
connected ideas. Each node is connected to related nodes by means of pointers, or links
that go from one node to another. These ideas of linked lists and pointers were derived
the field of computer science. Thus, the node that corresponds to a given word or concept,
together with the pointers to other nodes to which the first node is connected, constitutes
the semantic memory for that word or concept. The collection of nodes associated with all
the words and concepts one knows about is called a semantic network.
Collins and Quillian (1969) also tested the principle of cognitive economy. They reasoned
that if semantic memory is analogous to a network of nodes and pointers and if semantic
memory honors the cognitive economy principle, then the closer a fact or property is stored
to a particular node, the less time it should take to verify the fact and property. The
assumption is that the longer it takes you to respond to a stimulus, the more mental steps
you had to go through to make that response. The model also describes retrieval from long-term memory (LTM). Collins and Quillian (1969) studied this experimentally through a “speeded verification task”.

Speeded verification task

Collins and Quillian (1969) presented people with a number of similar sentences to find
whether it took people less time to respond to sentences whose representations should
span two levels than they did to sentences whose representations should span three.
Suppose the statement is, “A canary can sing.” When you hear, “A canary”, this activates
the canary category in memory. You then scan the properties of the canary category for
relevant information. If you find it, you stop the search process and respond. In this case,
you would respond “true”. Suppose the statement is “A canary has wings.” You start by
performing the same steps as before, but you don’t find relevant information. So, you
follow the line up to the next category, BIRD. You then scan the contents of the category for
relevant information. You find “has wings” in this category so you stop the search and
respond “true”. This is 2 steps more than you had with the previous statement. Mental
steps take time to perform. Your reaction time should be longer than it was to “A canary
can sing”. Suppose the statement is, “A canary eats.” You go through all the steps you did
with the previous statement plus 2 more: move up one level of the hierarchy to ANIMAL,
then scan the properties. The retrieval process is similar to a form of logical deduction called
a syllogism. In a syllogism you are given two premises and then a conclusion. That is, with
the statement “A canary eats,” it’s as if you think, “All animals eat. A canary is an animal. Therefore, a canary eats.”
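
A minimal sketch of this retrieval process (with a toy three-level hierarchy invented for illustration) shows how the number of ISA links traversed stands in for reaction time:

```python
# Cognitive economy: each property is stored once, at the highest
# node to which it applies.
NETWORK = {
    "animal": {"isa": None,     "props": {"eats", "breathes"}},
    "bird":   {"isa": "animal", "props": {"has wings", "can fly"}},
    "canary": {"isa": "bird",   "props": {"can sing", "is yellow"}},
}

def verify(node: str, prop: str):
    """Walk up the ISA links until the property is found."""
    levels = 0
    while node is not None:
        if prop in NETWORK[node]["props"]:
            return True, levels
        node = NETWORK[node]["isa"]
        levels += 1
    return False, levels

print(verify("canary", "can sing"))   # (True, 0): stored on the node itself
print(verify("canary", "has wings"))  # (True, 1): one level up, at BIRD
print(verify("canary", "eats"))       # (True, 2): two levels up, at ANIMAL
```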
The model was called a hierarchical semantic network model of semantic memory,
because researchers thought the nodes were organized in hierarchies. Most nodes in the
network have superordinate and subordinate nodes. A superordinate node corresponds to
the name of the category of which the thing corresponding to the subordinate node is a
member. So, for example, a node for “cat” would have the superordinate node of “animal”
and perhaps several subordinate nodes, such as “Persian,” “tabby,” and “calico.”

Meyer and Schvaneveldt (1971) tried to elaborate the semantic network model. They held
that if related words are stored close by one another and are connected to one another in a
semantic network, then whenever one node is activated or energized, energy spreads to the
related nodes. They demonstrated it through a series of experiments on lexical decision
tasks. Here, participants saw a series of letter strings and were asked to decide, as quickly as possible, whether the letter strings formed real words. Thus, they responded yes to strings such as bread and no to strings such as rencle.

In their study, participants saw two words at a time, one above the other, and had to decide
if both strings were words or not. They discovered that if one of the strings was a real word
(such as bread), participants were faster to respond if the other string was a semantically
associated word (such as butter) than if it was an unrelated word (such as chair) or a
nonword (such as rencle). One interpretation of this finding invokes the concept of
spreading activation, the idea that excitation spreads along the connections of nodes in a
semantic network. A priming effect thus occurs in participants’ recognition of related words. It is
also a very important idea in understanding connectionist networks.

Similarly, research on the word superiority effect offers an explanation along the same lines as Meyer and Schvaneveldt (1971). People are generally faster to recognize a particular letter (such as D or K) in the context of a word (such as WOR_) than they are to recognize it with no context or in the context of a nonword (such as OWR_). The word context helps letter recognition because a node corresponding to the word is activated in the former case; this automatic activation facilitates recognition of all parts of the word, including the target letter. Meyer and Schvaneveldt (1971) extended this idea, proposing that individual nodes can be activated not just directly, by external stimuli, but also indirectly, through spreading activation from related nodes.

Limitations of Collins and Quillian’s model (1969)

• Property of association

Carol Conrad (1972) held that it violates the assumption of cognitive economy.
Properties are associated with each category in the hierarchy, not just the highest
category. Participants in her sentence verification experiments were no slower to
respond to sentences such as “A shark can move” than to “A fish can move” or “An
animal can move.” However, the principle of cognitive economy would predict that
the property “can move” would be stored closest to the node for “animal” and thus
that the three sentences would require decreasing amounts of time to verify. Conrad
argued that the property “can move” is one frequently associated with “animal,” “shark,” and “fish,” and that frequency of association, rather than cognitive economy, predicts reaction time. On this view, frequently associated properties are connected to each node in a category, supporting faster retrieval.

• Hierarchical structure
Rips, Shoben, and Smith (1973) showed that participants were faster to verify “A pig
is an animal” than to verify “A pig is a mammal,” thus demonstrating a violation of
predicted hierarchical structure. Familiar terms are verified faster than unfamiliar
terms regardless of their position in the hierarchy.

• Typicality effect

Rips et al. (1973) found that responses to sentences such as “A robin is a bird” were
faster than responses to “A turkey is a bird,” even though these sentences should
have taken an equivalent amount of time to verify. In general, typical instances of a
concept are responded to more quickly than atypical instances. The hierarchical
network model did not predict typicality effects; instead, it predicted that all
instances of a concept should be processed similarly. So, all instances of a concept
are not equally good examples of it.

Network Model

These limitations led Collins and Loftus (1975) to present an elaboration of the Collins and Quillian (1969) hierarchical network model, known as spreading activation theory. They
conceived semantic memory as a network with nodes in the network corresponding to
concepts. They also considered related concepts as connected by paths in the network. They
further asserted that when one node is activated, the excitation of that node spreads down
the paths or links to related nodes. They believed that as activation spreads outward, it
decreases in strength, activating very related concepts a great deal but activating distantly
related nodes only a little bit.

In Collins and Loftus (1975) representation of semantic network, each link or connection
between two concepts is thought to have a certain weight or set of weights associated with
it. The weights indicate how important one concept is to the meaning of a concept to which
it is connected. So, very similar concepts- such as “car” and “truck”- have many connecting
links and are placed close to each other. Less similar concepts, such as “house” and “sunset”
(both may, or may not, be red), have no direct connections and are therefore spaced far
apart. Weights may vary for different directions along the connections. Thus, it may be very
important to the meaning of truck that it is a type of vehicle but not very important to the
meaning of vehicle that truck is an example.

Collins and Loftus (1975) dispensed with the assumptions of cognitive economy and hierarchical organization in order to avoid their model being subject to the limitations of the Collins and Quillian (1969) model. This proposal is regarded more as a descriptive framework than as a specific model.
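
A minimal sketch of spreading activation over such a weighted network (nodes, weights, and decay rate all invented for illustration):

```python
# Activation spreads from a source node along weighted links,
# losing strength with each step, so close associates are primed
# strongly and distant ones only a little.
LINKS = {
    "bread":  [("butter", 0.9), ("food", 0.7)],
    "butter": [("food", 0.6)],
    "food":   [("chair", 0.1)],   # a weak, distant connection
    "chair":  [],
}

def spread(source: str, energy=1.0, decay=0.5, floor=0.05):
    activation = {source: energy}
    frontier = [source]
    while frontier:
        node = frontier.pop(0)
        for neighbor, weight in LINKS[node]:
            incoming = activation[node] * weight * decay
            if incoming > activation.get(neighbor, 0.0) and incoming > floor:
                activation[neighbor] = incoming
                frontier.append(neighbor)
    return activation

act = spread("bread")
print(act)  # 'butter' (0.45) and 'food' (0.35) are primed; 'chair' gets too little
```

Run on this toy network, activating bread primes butter strongly and food moderately, while chair falls below the floor, mirroring the finding that bread speeds recognition of butter but not of chair.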

Connectionist Models (Parallel Distributed Processing) Of McClelland, Rumelhart, & Hinton

The parallel distributed processing (PDP) approach argues that cognitive processes can be
represented by a model in which activation flows through networks that link together a
large number of simple, neuron-like units (Markman, 1999; McClelland & Rogers, 2003;
Rogers & McClelland, 2004). The researchers who designed this approach took into account
the physiological and structural properties of human neurons (McClelland & Rogers, 2003).

Characteristics of PDP approach:

• Cognitive processes are based on parallel operations, rather than serial operations.
• A network contains basic neuron-like units or nodes, which are connected together
so that a specific node has many links to other nodes.
• A concept is represented by the pattern of activity distributed throughout a set of
nodes.
• The connections between these neuron-like units are weighted, and the connection
weights determine how much activation one unit can pass on to another unit.
• When a unit reaches a critical level of activation, it may affect another unit, either by
exciting it (if the connection weight is positive) or by inhibiting it (if the connection
weight is negative).
• Every new piece of information you learn will change the strength of connections
among relevant units by adjusting the connection weights.
• Sometimes we have only partial memory for some information, rather than
complete, perfect memory. The brain’s ability to provide partial memory is called
graceful degradation. Example: the tip-of-the-tongue phenomenon.
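
A minimal sketch of a single such neuron-like unit (all numbers invented): weighted inputs are summed, positive weights excite, negative weights inhibit, and activation is passed on only past a critical threshold.

```python
def unit_activation(inputs, weights, threshold=0.5):
    """Sum weighted inputs; pass activation on only above threshold."""
    net = sum(i * w for i, w in zip(inputs, weights))
    return net if net > threshold else 0.0

# One excitatory (+0.8) and one inhibitory (-0.3) incoming connection:
print(unit_activation([1.0, 1.0], [0.8, -0.3]))  # 0.0: inhibition keeps it below threshold
print(unit_activation([1.0, 0.0], [0.8, -0.3]))  # 0.8: activation is passed on

def update_weight(w, pre, post, rate=0.1):
    """Learning adjusts connection weights; a simple Hebbian-style step."""
    return w + rate * pre * post
```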

According to the PDP approach, each individual’s characteristics are connected in a mutually
stimulating network. If the connections among the characteristics are well established
through extensive practice, then an appropriate clue allows you to locate the characteristics
of a specified individual. Advantages of PDP model:

• Spontaneous generalization

It is the process of using individual cases to draw inferences about general information (Protopapas, 1999; Rogers & McClelland, 2004). Spontaneous generalization accounts for some memory errors and distortions, such as the development of stereotypes. The PDP model argues that we reconstruct a memory, and that the memory sometimes includes inappropriate information (McClelland, 1999).

• Default assignment

This is the process of filling in missing information about a particular person or a particular object by making a best guess based on information from other, similar people or objects (for example, inferring information about a particular engineering student from engineering students in general).
Both spontaneous generalization and default assignment can produce errors. Theorists
argue that the PDP approach works better for tasks in which several processes typically
operate simultaneously, as in pattern recognition, categorization, and memory search. The
PDP approach has also been used to explain cognitive disorders, such as the reading
problems experienced by people with dyslexia (Levine, 2002; O’Reilly & Munakata, 2000). It
can also account for the cognitive difficulties found in people with schizophrenia (Chen &
Berrios, 1998) and semantic-memory deficits (Leek, 2005; McClelland & Rogers, 2003;
Tippett et al., 1995).

However, some critics say that the PDP model is not currently structured enough to handle
the subtleties and complexities of semantic memory (McClelland & Rogers, 2003). The PDP
approach also has trouble explaining why we sometimes quickly forget extremely well-learned information when we learn additional information (Ratcliff, 1990). On
the other hand, the model cannot explain why we sometimes can recall earlier material
when it has been replaced by more current material (Lewandowsky & Li, 1995).

[Refer pdf for PDP model by Rumelhart & Norman, 1988]

Schemas

The term schema usually refers to something larger than an individual concept. Schemata
(the plural of schema) incorporate both general knowledge about the world and information
about particular events. Bartlett (1932) defined a schema as an “active organization of past
reactions, or of past experiences, which must always be supposed to be operating in any
well adapted organic response”. The key term here is organization. A schema is thought to
be a large unit of organized information used for representing concepts, situations, events,
and actions in memory (Rumelhart & Norman, 1988). One main approach to understanding
how concepts are related in the mind is through schemas. They are very similar to semantic
networks, except that schemas are often more task-oriented.

Rumelhart and Ortony (1977) viewed schemata as the fundamental building blocks of
cognition, units of organized knowledge analogous to theories. Generally, they saw
schemata as “packets of information” that contain both variables and a fixed part. Consider
a schema for the concept dog. The fixed part would include the information that a dog is a
mammal, has (typically) four legs, and is domesticated; the variables would be things like
breed (poodle, cocker spaniel, Bernese mountain dog), size (toy, medium, extra-large), color
(white, brown, black, tricolored), temperament (friendly, aloof, vicious), and name (Spot,
Rover, Tandy).
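
A rough sketch of such a schema as a “packet of information” (attribute names and defaults invented for illustration): a fixed part shared by every instance, plus variables filled per instance, with defaults standing in when a value is unknown.

```python
DOG_SCHEMA = {
    "fixed":    {"kind": "mammal", "legs": 4, "domesticated": True},
    "defaults": {"breed": "unknown", "size": "medium",
                 "temperament": "friendly", "name": None},
}

def instantiate(schema: dict, **variables) -> dict:
    """Combine the fixed part, default values, and case-specific values."""
    instance = dict(schema["fixed"])
    instance.update(schema["defaults"])
    instance.update(variables)      # known specifics override the defaults
    return instance

rover = instantiate(DOG_SCHEMA, breed="collie", name="Rover")
print(rover["legs"])                # 4: from the fixed part
print(rover["temperament"])         # 'friendly': filled in by default
```

Filling unspecified variables from the defaults is essentially the default assignment discussed above for PDP models: a best guess based on similar cases.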

Schemas have several characteristics that ensure wide flexibility in their use (Rumelhart &
Ortony, 1977; Thorndyke, 1984):

1. Schemas can include other schemas. For example, a schema for animals includes a
schema for cows, a schema for apes, and so on.

2. Schemas encompass typical, general facts that can vary slightly from one specific instance
to another. For example, although the schema for mammals includes a general fact that
mammals typically have fur, it allows for humans, who are less hairy than most other
mammals. It also allows for porcupines, which seem more prickly than furry, and for marine
mammals like whales that have just a few bristly hairs.

3. Schemas can vary in their degree of abstraction. For example, a schema for justice is
much more abstract than a schema for apple or even a schema for fruit.

Schemas also can include information about relationships (Komatsu, 1992). Some of this
information includes relationships among the following:

• concepts (e.g., the link between trucks and cars);

• attributes within concepts (e.g., the height and the weight of an elephant);

• attributes in related concepts (e.g., the redness of a cherry and the redness of an apple);

• concepts and particular contexts (e.g., fish and the ocean); and

• specific concepts and general background knowledge (e.g., concepts about particular U.S.
presidents and general knowledge about the U.S. government and about U.S. history).

Schemata can also indicate the relationships among the various pieces of information. For
example, to end up with a dog, the “parts” of the dog (tail, legs, tongue, teeth) must be put
together in a certain way. A creature with its four legs coming out of its head, its tail
sticking out of its nose, and its tongue on the underside of its belly would not “count” as an
instance of a dog, even if all the required dog parts were present.

Moreover, schemata can be connected to other schemata in a variety of ways. Schemata also exist for things bigger than individual concepts. Furthermore, schemata fill in default values for certain aspects of the situation, which let us make certain assumptions. Schemata are assumed to exist at all levels of abstraction; thus schemata can exist for small parts of knowledge (what letter does a particular configuration of ink form?) and for very large parts (what is the theory of relativity?). They are thought of as active processes rather than as passive units of knowledge. They are not simply called up from memory and passively processed. Instead, people are thought to be constantly assessing and evaluating the fit between their current situation and a number of relevant schemata and subschemata.

Some researchers think schemata are used in just about every aspect of cognition.
Schemata are deemed to play an important role in perception and pattern matching as we
try to identify the objects we see before us. They are considered important in memory
functioning as we call to mind relevant information to help us interpret current information
and make decisions about what to do next. A problem with schemas is that they can give
rise to stereotypes.

UNIT 3: REASONING: INDUCTIVE & DEDUCTIVE REASONING, COGNITIVE ERRORS.

UNIT 4: CREATIVITY: FEATURES OF CREATIVE THINKING, CONVERGENT & DIVERGENT THINKING, PRODUCTIVE AND REPRODUCTIVE THINKING, INSIGHT.

Creativity is a cognitive activity that results in a new or novel way of viewing a problem or
situation. Creativity is widely heralded as an important part of everyday life and education.
Papalia and Olds (1987) have defined creativity as “the ability to see things in a new and unusual light, to see problems that no one else may even realize exist and then to come up with new, unusual and effective solutions.” Creative thinking tends to occur when people engage in some task or activity that is more or less automatic, such as walking or swimming. Selective attention permits creative thinking. The ability to be creative is
important but is often a neglected topic in the education of young individuals. The
characteristics of creative thinking are:
i. It is universal.
ii. It can be enhanced.
iii. It is self-expression and carries ego-involvement.

[Refer pdf for the diagram of factors affecting creativity: intelligence, knowledge, imagination, and analytical thinking.]

Characteristics of creative individuals are elaborated below:

• One is extremely high motivation to be creative in a particular field of endeavor (e.g., for the sheer enjoyment of the creative process).
• A second factor is both non-conformity in violating any conventions that might inhibit the creative work and dedication in maintaining standards of excellence and self-discipline related to the creative work.
• A third factor in creativity is deep belief in the value of the creative work, as well as willingness to criticize and improve the work.
• A fourth is careful choice of the problems or subjects on which to focus creative attention.
• A fifth characteristic of creativity is thought processes characterized by both insight and divergent thinking.
• A sixth factor is risk taking.
• The final two factors in creativity are extensive knowledge of the relevant domain and profound commitment to the creative endeavor.

In addition, the historical context and the domain and field of endeavor influence the
expression of creativity.

Convergent & Divergent Thinking

J. P. Guilford (1967) distinguished between two types of thinking: convergent thinking and
divergent thinking. Convergent thinking moves in a straightforward manner to a particular
conclusion. Much of pedagogy emphasizes convergent thinking, in which students are asked
to recall factual information, such as What is the capital of Bulgaria? It is the type of thinking
in which a problem is seen as having only one answer, and all lines of thinking will
eventually lead to that single answer, using previous knowledge and logic. Convergent
thinking makes us observe that a pen and pencil can be used to write, have similar shapes,
and so on. Convergent thinking works well for routine problem solving but may be of little
use when a more creative solution is needed.

Divergent thinking requires a person to generate many different answers to a question, the
“correctness” of the answers being somewhat subjective. For example: “How many different things can you use a brick for?” It is the reverse of convergent thinking. Divergent
thinking is a type of creative thinking in which a person starts from one point and comes
with many different ideas or possibilities based on that point. Divergent thinking has been
attributed not only to creativity but also to intelligence (Guilford, 1967). Divergent thinking
is also an effective measure in solving the barriers to problem solving, such as functional
fixedness. Divergent or more creative answers may utilize objects or ideas in more abstract
terms. The divergent thinker is more flexible in his or her thinking.

Productive and Reproductive Thinking

Gestalt psychologists emphasized the importance of the whole as more than a collection of
parts. In regard to problem solving, Gestalt psychologists held that insight problems require
problem solvers to perceive the problem as a whole. Gestalt psychologist Max Wertheimer
(1945/1959) wrote about productive thinking, which involves insights that go beyond the
bounds of existing associations. He distinguished it from reproductive thinking, which is
based on existing associations involving what is already known. According to Wertheimer,
insightful (productive) thinking differs fundamentally from reproductive thinking.

Productive thinking is characterized by shifts in perspective which allow the problem solver
to consider new, sometimes transformational, approaches. In other words, productive
thinking is solving a problem with insight. This is a quick insightful unplanned response to
situations and environmental interaction.
Reproductive thinking, on the other hand, involves the application of familiar, routine procedures. In other words, reproductive thinking is solving a problem with previous
experiences and what is already known.

Insight

Insight is a change in frame of reference or in the way elements of the problem are
interpreted and organized. The process by which insight occurs is not well understood. The
notion of insight may be associated with certain cognitive processes that are known to
originate in the right hemisphere of the brain. Experiences of insight that lead to creative
problem solving often are nonverbal, “aha” moments.

In the early twentieth century, the Gestalt psychologist Wolfgang Kohler studied chimpanzees’ ability to solve
problems. In a famous study involving a chimp named Sultan, Kohler (1925) demonstrated
that chimpanzees can experience insight into solving a problem. In this particular study, the
goal was for Sultan to reach a banana suspended from the ceiling of the cage. The cage was
equipped with some boxes and pieces of stick, none of which alone could be used to access
the food. After several unsuccessful attempts, Sultan was reported to have the “insight” to
combine the sticks into a larger stick, which ultimately led to a successful attempt.

Insight influences various cognitive processes, including creativity, problem solving, and illusions. It is a very important concept in Gestalt psychology. The Gestaltists argued that the parts
of a problem may initially seem unrelated to one another, but a sudden flash of insight
could make the parts instantly fit together into a solution (Davidson, 2003). Behaviorists
rejected the concept of insight because the idea of a sudden cognitive reorganization was
not compatible with their emphasis on observable behavior. Furthermore, some
contemporary psychologists prefer to think that people solve problems in a gradual, orderly
fashion. These psychologists are uneasy about the sudden transformation that is suggested
by the concept of insight (Metcalfe, 1998). Currently, then, some psychologists favor the
concept of insight, and others reject this concept.

According to the psychologists who favor the concept of insight, people who are working on
an insight problem usually hold some inappropriate assumptions when they begin to solve
the problem (Chi, 2006; Ormerod et al., 2006). In other words, top-down processing inappropriately dominates their thinking, and they consider the wrong set of alternatives (Ormerod et al., 2006).

Great artistic, musical, scientific, or other discoveries often seem to share a critical moment,
a mental “Eureka” experience when the proverbial “lightbulb” goes on. Many biographies of
composers, artists, scientists, and other eminent experts begin with “Eureka” stories
(Perkins, 1981). According to Smith (1995a), insights need not be sudden “a-ha”
experiences. They may and often do occur gradually and incrementally over time. When an
insightful solution is needed but not forthcoming, sleep may help produce a solution. In
both mathematical problem solving and solution of a task that requires understanding
underlying rules, sleep has been shown to increase the likelihood that an insight will be
produced (Stickgold & Walker, 2004; Wagner et al., 2004).

Problems are divided into two types based on insight; insight problem and non-insight
problem. An insight problem is a problem that initially seems impossible to solve, but a
correct alternative suddenly enters a person’s mind. On the other hand, non-insight
problems are problems that are solved gradually, using memory, reasoning skills, and a
routine set of strategies.

In some cases, the best way to solve an insight problem is to stop thinking about this
problem and do something else for a while. Many artists, scientists, and other creative
people believe that incubation helps them solve problems creatively. Incubation is defined
as a situation in which you are initially unsuccessful in solving a problem, but you are more
likely to solve the problem after taking a break, rather than continuing to work on the
problem without interruption (Perkins, 2001; Segal, 2004). Incubation sounds like a
plausible strategy for solving insight problems (e.g., Csikszentmihalyi, 1996). Unfortunately,
however, the well-controlled laboratory research shows that incubation is not consistently
helpful (Perkins, 2001; Segal, 2004; Ward, 2001).

Neuroscience studies indicate that the right hippocampus is critical in the formation of an
insightful solution (Luo & Niki, 2003). Another study demonstrated a spike of activity in the
right anterior temporal area immediately before an insight is formed. This area is active
during all types of problem solving, as it involves making connections among distantly
related items (Jung-Beeman et al., 2004).

UNIT 5: PSYCHOLINGUISTICS: (LANGUAGE AND THOUGHT) LINGUISTIC RELATIVITY & VERBAL DEPRIVATION HYPOTHESES. THEORIES OF LANGUAGE ACQUISITION: SKINNER- BEHAVIOURISM, CHOMSKY (LAD), LENNEBERG- GENETIC READINESS.

Psycholinguistics: (Language and Thought)

One of the most interesting areas in the study of language is the relationship between
language and the thinking of the human mind (Harris, 2003). Many people believe that
language shapes thoughts.

A famous hypothesis, outlined by Benjamin Whorf (1956), asserts that the categories and
relations that we use to understand the world come from our particular language, so that
speakers of different languages conceptualise the world in different ways. Language
acquisition, then, would be learning to think, not just learning to talk. This is an intriguing
hypothesis, but virtually all modern cognitive scientists believe it is false (see Pinker, 1994a).
Babies can think before they can talk. Cognitive psychology has shown that people think not
just in words but in images and abstract logical propositions. Language acquisition has a
unique contribution to make to this issue.

Psycholinguistics is the study of the mental aspects of language and speech. It is primarily
concerned with the ways in which language is represented and processed in the brain. A
branch of both linguistics and psychology, psycholinguistics is part of the field of cognitive
science.

Early philosophers in Greece and India debated the nature of language (Chomsky, 2000).
Centuries later, both Wilhelm Wundt and William James also speculated about our
impressive abilities in this area (Carroll, 2004; Levelt, 1998). However, the current discipline
of psycholinguistics can be traced to the 1960s, when psycholinguists began to test whether
psychological research would support the theories of Noam Chomsky (McKoon & Ratcliff,
1998).

Linguistic Relativity Hypothesis

The concept relevant to the question of whether language influences thinking is linguistic
relativity. Linguistic relativity refers to the assertion that speakers of different languages
have differing cognitive systems and that these different cognitive systems influence the
ways in which people think about the world.

The linguistic-relativity hypothesis is sometimes referred to as the Sapir-Whorf hypothesis, named after the two men who were most forceful in propagating it. Edward Sapir
(1941/1964) said that “we see and hear and otherwise experience very largely as we do
because the language habits of our community predispose certain choices of interpretation”
(p. 69). Benjamin Lee Whorf (1956) stated this view even more strongly: “We dissect nature
along lines laid down by our native languages”.

Whorf concluded that a thing represented by a word is conceived differently by people whose languages differ and that the nature of the language itself is the cause of those
different ways of viewing reality. For example, Whorf studied Native American languages
and found that clear translation from one language to another was impossible.

The Sapir-Whorf hypothesis has been one of the most widely discussed ideas in all of the
social and behavioral sciences (Lonner, 1989). However, some of its implications appear to
have reached mythical proportions. For example, many social scientists have warmly accepted and gladly propagated the notion that Eskimos have multitudinous words for the single English word snow. Contrary to popular belief, however, Eskimos do not have numerous words for snow (Martin, 1986).

Our thoughts and our language interact in myriad ways, only some of which we now
understand. Clearly, language facilitates thought; it even affects perception and memory.
For some reason, we have limited means by which to manipulate non-linguistic images
(Hunt & Banaji, 1988). Such limitations make desirable the use of language to facilitate
mental representation and manipulation. Even nonsense pictures (“droodles”) are recalled
and redrawn differently, depending on the verbal label given to the picture (Bower, Karlin, &
Dueck, 1975). Language also affects how we encode, store, and retrieve information in
memory.

Verbal Deprivation Hypothesis

The verbal deprivation hypothesis was proposed in 1973 by the British sociologist Basil Bernstein. According to the APA, the hypothesis states that children who are denied regular experience of
an elaborated code of language—that is, a more formal use of language involving complex
constructions and an unpredictable vocabulary—may develop an educational and even
cognitive deficit. The concept is controversial as it has been associated with the view that
nonstandard or vernacular forms of a language (e.g., Black English) are inherently inferior.
The idea that nonstandard forms inhibit higher level cognitive processes (e.g., abstract
reasoning) is now discredited, but concerns remain that lack of early exposure to the more
formal codes of a language appears to correlate with educational underachievement.

The hypothesis postulates the existence of two codes of speech, a 'restricted code' and an
'elaborated code'. The distinction between the two is made in a number of ways (in terms of lexicon, grammar, logic, and performance), but what it amounts to basically is this: the
elaborated code is the vehicle of rationality. In the elaborated code one can reason, plan
ahead, take account of the views of other people, have access to scientific and literary
concepts. Reason and argument need the resources of the elaborated code.

The restricted code on the other hand embodies authority, group solidarity and coercion. It
is the antithesis of the elaborated code. Bernstein and his associates claim to have shown
both the existence of these two codes, and their location among different sections of the
population. The restricted code is most commonly used by the 'unskilled working class', and
since its powers of expression do not fit its users for success in activities involving the use of
reason, they achieve poorly in the educational system, unless they are able to switch to an
'elaborated code'.

Bernstein does not actually bring evidence that some sections of the working class cannot
put forward an argument, cannot give reasons, cannot take account of another person's
point of view, cannot plan ahead, or cannot discuss topics of other than immediate concern. He and his associates content themselves with irrelevant quantitative data giving comparisons between 'middle class' and 'working class' speakers on such items as pauses between speech, and the frequency of occurrence of pronouns, adjectives, and auxiliary expressions in speech. He does not succeed in showing that at any fundamental level the two codes claimed to exist do in fact exist. Space forbids showing in detail just how weak Bernstein's evidence is and how misconceived his criteria are.

Theories of Language Acquisition: Skinner- Behaviourism, Chomsky (LAD), Lenneberg- Genetic Readiness

Skinner- Behaviourism

The behaviourist psychologists developed their theories while carrying out a series of
experiments on animals. They observed that rats or birds, for example, could be taught to
perform various tasks by encouraging habit-forming. Researchers rewarded desirable
behaviour. This was known as positive reinforcement. Undesirable behaviour was punished
or simply not rewarded — negative reinforcement. The behaviourist B. F. Skinner then
proposed this theory as an explanation for language acquisition in humans. In Verbal
Behaviour (1957), he stated: “The basic processes and relations which give verbal behaviour
its special characteristics are now fairly well understood. Much of the experimental work
responsible for this advance has been carried out on other species, but the results have
proved to be surprisingly free of species restrictions. Recent work has shown that the
methods can be extended to human behaviour without serious modifications.” (cited in
Lowe and Graham, 1998, p.68)

Skinner suggested that a child imitates the language of its parents or carers. Successful
attempts are rewarded because an adult who recognises a word spoken by a child will
praise the child and/or give it what it is asking for. The linguistic input was key — a model
for imitation to be either negatively or positively reinforced. Successful utterances are
therefore reinforced while unsuccessful ones are forgotten. On this view, there is no essential difference between the way a rat learns to negotiate a maze and the way a child learns to speak.

Limitations

While there must be some truth in Skinner’s explanation, there are many objections to it.
Language is based on a set of structures or rules, which could not be worked out simply by
imitating individual utterances. The mistakes made by children reveal that they are not
simply imitating but actively working out and applying rules.

For example, a child who says “drinked” instead of “drank” is not copying an adult but
rather over-applying a rule.

The vast majority of children go through the same stages of language acquisition.

Apart from certain extreme cases, the sequence seems to be largely unaffected by the
treatment the child receives or the type of society in which s/he grows up.

Children are often unable to repeat what an adult says, especially if the adult utterance
contains a structure the child has not yet started to use.

Few children receive much explicit grammatical correction. Parents are more interested in
politeness and truthfulness. According to Brown, Cazden & Bellugi (1969): “It seems to be
truth value rather than well-formed syntax that chiefly governs explicit verbal reinforcement
by parents — which renders mildly paradoxical the fact that the usual product of such a
training schedule is an adult whose speech is highly grammatical but not notably truthful.”
(cited in Lowe and Graham, 1998)

There is evidence for a critical period for language acquisition. Children who have not
acquired language by the age of about seven will never entirely catch up. The most famous
example is that of Genie, discovered in 1970 at the age of 13. She had been severely
neglected, brought up in isolation and deprived of normal human contact. Of course, she
was disturbed and underdeveloped in many ways. During subsequent attempts at
rehabilitation, her caretakers tried to teach her to speak. Despite some success, mainly in
learning vocabulary, she never became a fluent speaker, failing to acquire the grammatical
competence of the average five-year-old.

Noam Chomsky- Innateness Theory (LAD- Language Acquisition Device)

Noam Chomsky published a criticism of the behaviourist theory in 1959. In addition to some
of the arguments listed above, he focused particularly on the impoverished language input
children receive. This theory is connected with the writings of Chomsky, although the theory
has been around for hundreds of years.

Children are born with an innate capacity for learning human language. Humans are
destined to speak. Children discover the grammar of their language based on their own
inborn grammar. Certain aspects of language structure seem to be preordained by the
cognitive structure of the human mind. This accounts for certain very basic universal
features of language structure: every language has nouns/verbs, consonants and vowels. It
is assumed that children are pre-programmed, hard-wired, to acquire such things.

Yet no one has been able to explain how quickly and perfectly all children acquire their
native language. Every language is extremely complex, full of subtle distinctions that
speakers are not even aware of. Nevertheless, children master their native language in 5 or
6 years regardless of their other talents and general intellectual ability. Acquisition must
certainly be more than mere imitation; it also doesn’t seem to depend on levels of general intelligence, since even a child with a severe intellectual disability will acquire a native language without special training. Some innate feature of the mind must be responsible for the universally
rapid and natural acquisition of language by any young child exposed to speech.

Chomsky concluded that children must have an inborn faculty for language acquisition.
According to this theory, the process is biologically determined – the human species has
evolved a brain whose neural circuits contain linguistic information at birth. The child’s
natural predisposition to learn language is triggered by hearing speech and the child’s brain
is able to interpret what s/he hears according to the underlying principles or structures it
already contains.

This natural faculty has become known as the Language Acquisition Device (LAD). Chomsky
did not suggest that an English child is born knowing anything specific about English, of
course. He stated that all human languages share common principles. (For example, they all
have words for things and actions — nouns and verbs.) It is the child’s task to establish how
the specific language s/he hears expresses these underlying principles.

For example, the LAD already contains the concept of verb tense. By listening to such forms
as “worked”, “played” and “patted”, the child will form the hypothesis that the past tense of
verbs is formed by adding the sound /d/, /t/ or /ɪd/ to the base form. This, in turn, will lead
to the “virtuous errors” mentioned above. It hardly needs saying that the process is
unconscious: Chomsky does not envisage the small child lying in its cot consciously working
out grammatical rules!

Chomsky’s ground-breaking theory remains at the centre of the debate about language
acquisition. However, it has been modified, both by Chomsky himself and by others.
Chomsky’s original position was that the LAD contained specific knowledge about language.
Dan Isaac Slobin has proposed that it may be more like a mechanism for working out the
rules of language:

“It seems to me that the child is born not with a set of linguistic categories but with some
sort of process mechanism — a set of procedures and inference rules, if you will - that he
uses to process linguistic data. These mechanisms are such that, applying them to the input
data, the child ends up with something which is a member of the class of human languages.
The linguistic universals, then, are the result of an innate cognitive competence rather than
the content of such a competence” (cited in Russell, 2001).

Evidence to Support Innateness Theory

Work in several areas of language study has provided support for the idea of an innate
language faculty. Three types of evidence are offered here:

1) Slobin has pointed out that human anatomy is peculiarly adapted to the production of
speech. Unlike our nearest relatives, the great apes, we have evolved a vocal tract which
allows the precise articulation of a wide repertoire of vocal sounds.

2) Neuroscience has also identified specific areas of the brain with distinctly linguistic
functions, notably Broca’s area and Wernicke’s area. Stroke victims provide valuable data:
depending on the site of brain damage, they may suffer a range of language dysfunction,
from problems with finding words to an inability to interpret syntax.

3) Experiments aimed at teaching chimpanzees to communicate using plastic symbols or
manual gestures have proved controversial. It seems likely that our ape cousins, while able
to learn individual “words”, have little or no grammatical competence. Pinker (1994) offers a
good account of this research.

The formation of creole varieties of English appears to be the result of the LAD at work. The
linguist Derek Bickerton has studied the formation of Dutch-based creoles in Surinam.
Escaped slaves, living together but originally from different language groups, were forced to
communicate in their very limited Dutch.

The result was the restricted form of language known as a pidgin. The adult speakers were
past the critical age at which they could learn a new language fluently — they had learned
Dutch as a foreign language and under unfavourable conditions. Remarkably, the children of
these slaves turned the pidgin into a full language, known by linguists as a creole. They were
presumably unaware of the process but the outcome was a language variety which follows
its own consistent rules and has a full expressive range. Creoles based on English are also
found, in the Caribbean and elsewhere.

Studies of the sign languages used by the deaf have shown that, far from being crude
gestures replacing spoken words, these are complex, fully grammatical languages in their
own right. A sign language may exist in several dialects.

Children learning to sign as a first language pass through similar stages to hearing children
learning spoken language. Deprived of speech, the urge to communicate is realised through
a manual system which fulfils the same function. There is even a signing creole, again
developed by children, in Nicaragua (Pinker, 1994).

Limitations of Chomsky’s Theory

Chomsky’s work on language was theoretical. He was interested in grammar and much of
his work consists of complex explanations of grammatical rules. He did not study real
children. The theory relies on children being exposed to language but takes no account of
the interaction between children and their caretakers. Nor does it recognise the reasons
why a child might want to speak, the functions of language.
In 1977, Bard and Sachs published a study of a child known as Jim, the hearing son of deaf
parents. Jim’s parents wanted their son to learn speech rather than the sign language they
used between themselves. He watched a lot of television and listened to the radio,
therefore receiving frequent language input. However, his progress was limited until a
speech therapist was enlisted to work with him.

Simply being exposed to language was not enough. Without the associated interaction, it
meant little to him.

Subsequent theories have placed greater emphasis on the ways in which real children
develop language to fulfil their needs and interact with their environment, including other
people.

Lenneberg-Genetic Readiness

The critical period hypothesis is the subject of a long-standing debate in linguistics and
language acquisition over the extent to which the ability to
acquire language is biologically linked to age. The hypothesis claims that there is an ideal
time window to acquire language in a linguistically rich environment, after which further
language acquisition becomes much more difficult and effortful. The critical period
hypothesis was first proposed by Montreal neurologist Wilder Penfield and co-author Lamar
Roberts in their 1959 book Speech and Brain Mechanisms, and was popularized by Eric
Lenneberg in 1967 with Biological Foundations of Language.

The critical period hypothesis states that the first few years of life is the crucial time in
which an individual can acquire a first language if presented with adequate stimuli, and that
first-language acquisition relies on neuroplasticity. If language input does not occur until
after this time, the individual will never achieve a full command of language. There is much
debate over the timing of the critical period with respect to second-language acquisition
(SLA), with estimates ranging between 2 and 13 years of age.

The critical period hypothesis is derived from the concept of a critical period in the biological
sciences, which refers to a set period in which an organism must acquire a skill or ability, or
said organism will not be able to acquire it later in life. Strictly speaking, the experimentally
verified critical period relates to a time span during which damage to the development of
the visual system can occur, for example if animals are deprived of the necessary binocular
input for developing stereopsis.

Preliminary research into the critical period hypothesis investigated brain lateralization as a
possible neurological cause; however, this explanation was largely discredited, since
lateralization does not necessarily increase with age, and no definitive link between
language-learning ability and lateralization was ever determined. More recently, it has been
suggested that if a critical period does exist, it may be due at least partially to the delayed
development of the prefrontal cortex in human children. Researchers have suggested
that delayed development of the prefrontal cortex and an associated delay in the
development of cognitive control may facilitate convention learning, allowing young
children to learn language far more easily than cognitively mature adults and older children.
This pattern of prefrontal development is unique to humans among similar mammalian (and
primate) species, and may explain why humans—and not chimpanzees—are so adept at
learning language.
MODULE 6: APPLYING COGNITIVE PSYCHOLOGY CONCEPTS TO EVERYDAY LIFE (TO READ:
NOT TO BE INCLUDED IN SHORT AND LONG ESSAYS)

UNIT 1: TOP-DOWN INFLUENCE OF MOTIVATION & LEARNING AND ROLE OF CULTURE ON
ATTENTION, PERCEPTION AND MEMORY.

[Refer Galloti, pg no: 377- 385 for Role of Culture on Perception and Memory]

[Refer Galloti, pg no: 55 for Top-Down Influences of Learning]

UNIT 2: VISUO-SPATIAL SUB-CODES, CONTRIBUTIONS OF HUBEL & WIESEL, PERCEPTUAL
ORGANIZATION (GESTALT LAWS)

Visuo-Spatial Sub-Codes

[Refer Aqueen ppt]

Contributions of Hubel & Wiesel

During 1964, David Hubel and Torsten Wiesel studied the short- and long-term
effects of depriving kittens of vision in one eye. In their experiments, Wiesel and
Hubel used kittens as models for human children. Hubel and Wiesel researched
whether the impairment of vision in one eye could be repaired or not and whether
such impairments would impact vision later on in life. The researchers sewed one
eye of a kitten shut for varying periods of time. They found that when vision
impairments occurred to the kittens right after birth, their vision was significantly
affected later on in life, as the cells that were responsible for processing visual
information redistributed to favor the unimpaired eye. Hubel and Wiesel worked
together for over twenty years and received the 1981 Nobel Prize for Physiology or
Medicine for their research on the critical period for mammalian visual system
development. Hubel and Wiesel’s experiments with kittens showed that there is a
critical period during which the visual system develops in mammals, and also that any
impairment of that system during that time will affect the animal’s lifelong vision.
In 1959, Hubel and Wiesel conducted a series of experiments on cats and kittens as
models for humans, and in the 1970s they repeated the experiments on primates.
Their collaboration lasted for over twenty years, during which time Hubel and
Wiesel elucidated details about the development of the visual system.

At the time the article was published in 1964, surgeons operated on individuals with
congenital cataracts, a disorder in which the lens of the eye is clouded from birth, later in
those individuals’ lives rather than in infancy. Those individuals required intensive
treatment after surgery, as there was still impairment to vision in the affected eye. Hubel
and Wiesel questioned why their vision remained impaired.
Hubel and Wiesel hypothesized that there was a time period during which the visual
nerve cells develop and that if the retina did not receive any visual information at
that time, the cells of the visual cortex redistribute their response in favor of the
working eye. By 1964, Hubel and Wiesel performed a set of experiments to test their
hypothesis. Other researchers had studied the behavior and vision of animals after
they were raised in the dark, but Hubel and Wiesel were the first to study animal
behavior after physically suturing one of the eyes, thus further reducing the visual
input to the retina.

For the purpose of the experiment, Hubel and Wiesel used newborn kittens and
sutured one of their eyes shut for the first three months of their lives. The sutured
eye did not get any visual information and received 10,000 to 100,000 times less
light than the normal eye. That meant that there was no visual information for the
retina of the sutured eye to record and thus the visual cortex could not receive any
input from that eye. Hubel and Wiesel used four kittens for the experiment.

After three months, Hubel and Wiesel opened the sutured eyes, and recorded the
changes. They found a noticeable difference in cortical cell response. The
researchers recorded the activity of the visual system in each kitten by inserting a
tungsten electrode into the sedated kitten’s visual cortex of the brain, which let
them monitor the activity of each cortical cell separately. The tungsten rod detected
electrical activity or inactivity in the cortex, which indicated whether or not the
visual cortex retrieved information from the previously sutured eye. By recording
electrical activity in the kittens’ visual cortex, Hubel and Wiesel observed how the
cells of the visual cortex reacted to different stimuli from both eyes and whether or
not there was a difference in the signals from the previously sutured eye and the
normal eye.

Next, Wiesel and Hubel showed the kittens different patterns of light to stimulate
the cortical cells. Normally, about eighty-five percent of cortical cells respond
identically to both eyes in a mammal with normal vision and only fifteen percent of
those cells respond to one eye only. However, when Hubel and Wiesel performed
the experiment on kittens with previously sutured eyes, they found that one out of
eighty-four cells responded to the previously sutured eye and the other eighty-three
cells responded to the normal eye only. That meant that the cortical cells
redistributed to favor the normal eye, as it was their only source of visual
information during the early development of the kitten. The researchers also noted
that all kittens who had one of their eyes sutured had some cortical cells that did
not respond to any stimuli at all. The researchers concluded that those cells were
likely only associated with the previously sutured eye. Because those cells did not
respond at all to any visual stimuli, they had not regenerated and could not be used
again. That meant that some cortical neuron function can be fully lost if a vision
impairment occurs during visual system development.

Hubel and Wiesel also performed a simple vision test on the kittens. They put an
opaque barrier on one eye of the kitten and monitored the kitten’s movement. They
later repeated the same procedure for the other eye. The researchers noted that
when the kittens were allowed to see with the previously sutured eye, they were
uncoordinated and showed no signs of vision. However, the normal eye functioned
properly and the researchers noted no impairment. Those findings meant that the
previously sutured eye had lost its vision function and was not able to recover upon being
opened, which provided further evidence that previous vision deprivation affects
long-term vision. Hubel and Wiesel concluded that an abnormality occurred
somewhere within the visual pathway from the eye to the brain that caused the
cortical neurons to redistribute and function only with the normal eye.
Hubel and Wiesel investigated where in the vision pathway the abnormality of vision
cells came from. They sought to know whether the abnormality was a cortical or a
geniculate abnormality, as that information would reveal how the vision pathway
works. Another question that they asked was whether or not depriving the kittens of
light or form (sight of object) caused the abnormality in the vision pathway. Their
research aimed to explain how the deprivation of either one related to the
continuous vision impairment in children after surgery. Hubel and Wiesel also
questioned if the kittens’ visual system reacted to the visual impairment the same
way the system of an older or an adult cat would. Their findings sought to explain
whether the connections made by the visual system before birth were innate or
developed after birth. Finally, Hubel and Wiesel questioned whether the neural
connections would deteriorate if an impairment was present, or whether the neural
connections could not develop in the presence of an impairment. To answer those
questions, Hubel and Wiesel performed multiple experiments with kittens and adult
cats.

Following the vision tests, Hubel and Wiesel sought to answer where the
abnormality occurred and how it worked. They checked the lateral geniculate body,
which is a transfer site in the thalamus that receives visual information from the
retina and transfers it to the occipital lobe of the brain. The cells in the lateral
geniculate body normally respond more to one eye than the other. The vast majority
of the geniculate cells that were associated with the previously sutured eye were
intact and worked properly. However, upon analyzing those cells with a microscope,
Hubel and Wiesel found that the cross sectional area of the lateral geniculate body
had shrunk an average of forty percent and that some geniculate cells were smaller
and contained little substance inside. That meant that the cells were not being used
nearly as much as they could have been, causing the entire area to atrophy. The
lateral geniculate body atrophied because it was receiving only half of its normal
visual information, but it continued to transfer visual information from the eye to
the brain. The researchers found no other physical abnormalities anywhere along
the visual pathway. Hubel and Wiesel concluded that the abnormality that caused
vision loss of the sutured eye likely occurred somewhere in the cortex of the brain,
which was the last stop in the visual pathway.

Next, Hubel and Wiesel investigated whether the visual impairment in the kittens was
caused by the deprivation of light or the deprivation of form vision. Light refers to colors as
well as the light or dark perception of the eye, while form refers to recognizing the shapes
of different objects. To determine the cause of the visual impairment, the researchers took
the newborn kittens and put a translucent barrier over one of their eyes, which reduced the
incoming light by a factor of only ten to one hundred. However, the barrier did not allow
the kittens to distinguish forms or shapes. The results indicated that cortical cells only
responded to the open eye, but the morphological changes in the lateral geniculate body
cells were significantly reduced. Those findings suggested that cortical cells redistributed
due to form deprivation, while the morphological abnormalities of the lateral geniculate
body were due to light deprivation.

Next, Hubel and Wiesel investigated whether those visual effects would be
replicated in older kittens that had already experienced vision. For that purpose,
they sutured the eye of kittens shut at nine weeks of age for one month. Upon
opening the eye, the researchers found that the distribution of cortical cells
between eyes was still largely in favor of the open eye. However, there was almost
no difference to the lateral geniculate body size. That, once again, established that
the source of abnormality was cortical and not geniculate.

The researchers also tried the experiment with adult cats. They observed, after visually
depriving adult cats for several months, that the cats did not display any
changes in cortical cell distribution or changes in the morphology of their lateral
geniculate bodies. Hubel and Wiesel concluded that younger kittens were most at
risk for developing cortical abnormalities and, thus, blindness. That risk declined
with every month of life and was almost non-existent in adults. Hubel and Wiesel
found that there was a period at the beginning of a kitten’s life when the ability to
view light and forms was most important for development.
Finally, Hubel and Wiesel researched whether visual pathway connections were
present at birth and deteriorated with disuse or whether they did not develop if not
used early on. To determine that, they experimented with three more kittens. The
researchers closed the eye of one of the kittens when the kitten was eight days old,
which is about the time that eyes first start to open in kittens. They closed the eyes
of the other two kittens after one to two weeks of age. The researchers studied the
electrical connections in the brain at birth for all three kittens and found that their
cortical cells responded to visual stimuli similarly to those in adult cats. This
observation meant that the cortical cells had some ocular dominance. However, the
cats could recognize the stimuli from both eyes. Hubel and Wiesel studied the same
electrical connections in the brain later, after reopening the sutured eyes, and found
that they had deteriorated and that cortical cells had redistributed in favor of the
normal eye yet again. Hubel and Wiesel concluded that the neural pathways in the
visual system are present at birth and deteriorate with disuse.

Hubel and Wiesel’s experiment helped uncover how the visual system develops in
mammals. First, they found a critical period during which the visual system
developed and learned that the deprivation of vision during that time could impair
vision forever. The conclusions of Hubel and Wiesel’s experiment led surgeons to
operate on congenital cataracts as soon as the infant was diagnosed. In 1981, Hubel
and Wiesel received a Nobel Prize for Physiology or Medicine for their research on
the development of the visual system.

Perceptual Organization (Gestalt Laws)

[Refer BSc notes and CP notes pdf]

UNIT 3: SUBLIMINAL PERCEPTION, PERCEPTUAL DEFENSE, SYNESTHESIA

Subliminal Perception

[Refer BSc notes]

Subliminal perception is the influence of stimuli that are insufficiently intense to produce a
conscious sensation but strong enough to influence some mental processes. Literally,
subliminal is below the sensory threshold, thus imperceptible. However, subliminal
perception often refers to stimuli that are clearly strong enough to be above the
psychological limen but do not enter consciousness, and is technically referred to as
subraliminal (above limen). Public interest in subliminal messages began in the late 1950s
when advertisers briefly flashed messages between frames at a movie theater in attempts
to increase sales of popcorn and soda. Since then, subliminal messages have been reported
in a variety of sources such as naked women in ice cubes, demonic messages in rock ‘n roll
music, and recent presidential campaign advertisements.

The topic of subliminal perception is closely related to perceptual priming, in which the
display of a word is so brief that the participant cannot report seeing it; nevertheless, the
word facilitates the recognition of an associate of that word without any conscious
awareness of the process. Furthermore, several studies (Philpott & Wilding, 1979;
Underwood, 1976, 1977) have shown that subliminal stimuli have an effect on the
recognition of subsequent stimuli. Therefore, some effect of the subliminal stimuli is
observed.

Filter location. Contemporary models of attention focus on where the selection (or filtering
out) of information takes place in the cognitive process. Inherent in many of these filter
theories is the notion that people are not aware of signals in the early part of processing of
information but, after some type of decision or selection, pass some of the signals on for
further processing. The models typically differ based on early or late selection depending on
where the filter location is hypothesized.

Perceptual Defense

• The process by which stimuli that are potentially threatening, offensive, or unpleasant
are either not perceived or are distorted in perception, especially when presented as brief
flashes.

• A person may build a defense (a block or a refusal to recognize) against stimuli or
situational events in the context that are personally or culturally unacceptable or
threatening.
• Study: College students were presented with the word “intelligent” as a characteristic of
a factory worker. This was counter to their perception of factory workers, and they built
defenses in the following ways:

1. Denial: A few of the subjects denied the existence of intelligence in factory workers.

2. Modification and distortion: This was one of the most frequent forms of defense. The
pattern was to explain away the perceptual conflict by joining intelligence with some other
characteristic, for example, “He is intelligent, but doesn’t possess the initiative to rise above
his group.”

3. Change in perception: Many of the students changed their perception of the worker
because of the intelligence characteristic. The change, however, was usually very
subtle; example, “He cracks jokes” became “He’s witty.”

4. Recognition but refusal to change: Very few subjects explicitly recognized the conflict
between their perception of the worker and the characteristic of intelligence that was
confronting them. For example, one subject stated, “The traits seem to be conflicting; most
factory workers I have heard about aren’t too intelligent.”

Health-related advertisements often make use of this concept. For example, a smoker
exposed to an advertisement stating the harmful effects of cigarette smoking may screen
out or distort the message.

Synaesthesia

• Synaesthesia (also spelled synesthesia) is a perceptual phenomenon in which stimulation
of one sensory or cognitive pathway leads to involuntary experiences in a second sensory
or cognitive pathway.

• People who report a lifelong history of such experiences are known as synesthetes.

• People who have synaesthesia may see sounds, taste words, or feel a sensation on their
skin when they smell certain scents.

• Many synesthetes experience more than one form of the condition.

• There are over 80 different types of synesthesia described by science.


• Example- every time you bite into a food, you also feel its geometric shape: round,
sharp, or square.

• Maybe when you’re feeling emotional over a person you love, you can close your
eyes and see certain colors playing across your field of vision.

• Synaesthesia has often been conceptualised as an abnormality (e.g. a “breakdown of
modularity”).

• However, in the past decade, it has been discovered that synaesthesia shares much in
common with ordinary perception; it relies on common perceptual mechanisms and may
merely represent an augmentation of normal propensities for cross-modal interactions.

• There is a simple common characteristic between synaesthesia and hallucination: the
absence of an ‘appropriate’ stimulus.

• This raises the question of whether synaesthesia should be regarded as a pathological
condition; the differences below suggest that it should not.

• Differences between developmental synaesthesia and spontaneous hallucination:
developmental synaesthesia is not transient; it is elicited in a consistent manner by specific
stimuli; it is not disruptive; and it occurs naturally, without neurological disease or the aid
of recreational drugs (Van Campen, 2007).

• Although the correspondence between sensory input and perceptual experience is
different in synaesthetes and non-synaesthetes, in both groups this correspondence is
regular.

Well-known synaesthetes:

• Marilyn Monroe

• Alessia Cara

• Billie Eilish
• Kanye West

• Vincent Van Gogh had a form of synaesthesia called chromesthesia, an experience of the
senses in which the person associates sounds with colors.

[Refer Solso, pg no: 239]

UNIT 4: META-MEMORY, MNEMONICS.

Meta-Memory

Metamemory strategies involve reflecting on our own memory processes with a view to
improving our memory. Such strategies are especially important when we are transferring
new information to long-term memory by rehearsing it. Metamemory strategies are just
one component of metacognition, our ability to think about and control our own processes
of thought and ways of enhancing our thinking.

The use of mnemonic devices and other techniques for aiding memory involves
metamemory (our understanding and reflection upon our memory and how to improve it).
Because most adults spontaneously use categorical clustering, its inclusion in this list of
mnemonic devices is actually just a reminder to use this common memory strategy.

Metamemory refers to the introspective examination of one’s own memory contents,
facilitating judgment and discretion. Thus, metamemory is not memory itself but rather the
analysis, commentary, and appraisal of the memory index and of learning. For instance,
when Descartes was engaged in his famous doubting meditation – musing about how his
memories or perceptions could have been different than they were, or how he could have
been mistaken about them – he was engaging in metacognition. Such reflection on the
phenomenological self is also highly prone to subjectivity. Since metamemory is primarily
the judgment and appraisal of the memory index, three basic judgments have formed the
core of metamemory research: feeling-of-knowing judgments, tip-of-the-tongue judgments,
and judgments of learning. The list is not exhaustive: because metamemory concerns any
judgment about one’s own memory, other evaluations such as source judgments,
recognition judgments, and confidence judgments are also included as an integral part.
Metamemory operates at two levels, the object level and the metalevel. At the object level,
the memories themselves are the concern, whereas at the metalevel, the regulation of the
object level is involved.

Mnemonics

Mnemonic devices are specific techniques to help memorize information (Best, 2003).

• In categorical clustering, organize a list of items into a set of categories.

• In interactive images, imagine (as vividly as possible) the objects represented by words
you have to remember as if the objects are interacting with each other in some active way.

• In the pegword system, associate each word with a word on a previously memorized list
and form an interactive image between the two words.

• In the method of loci, visualize walking around an area with distinctive, well-known
landmarks and link the various landmarks to specific items to be remembered.

• In using acronyms, devise a word or expression in which each of its letters stands for a
certain other word or concept.

• In using acrostics, form a sentence, rather than a single word, to help one remember new
words.

• In using the keyword system, create an interactive image that links the sound and
meaning of a foreign word with the sound and meaning of a familiar word.

The success of mnemonics in facilitating memory is attributed to their assistance in
organizing information.

[Refer BSc notes]

[Refer Matlin, pg no: 171- 175]

[Refer Galloti, pg no: 125]


UNIT 5: ARTIFICIAL INTELLIGENCE, META-COGNITION.

Artificial Intelligence

Artificial intelligence (AI) is the area of computer science that attempts to construct
computers that can demonstrate human-like cognitive processes (Stenning et al., 2006).
Computer programs have been developed both to simulate human intelligence and to
exceed it. In many ways, computer programs have been created with the intention of
solving problems faster and more efficiently than humans. Much of early information-
processing research centered on work based on computer simulations of human intelligence
as well as computer systems that use optimal methods to solve tasks. Programs of both
kinds can be classified as examples of artificial intelligence (AI), or intelligence in symbol-
processing systems such as computers (see Schank & Towle, 2000). Computers cannot
actually think; they must be programmed to behave as though they are thinking. That is,
they must be programmed to simulate cognitive processes. In this way, they give us insight
into the details of how people process information cognitively.

The Turing Test

Probably the first serious attempt to deal with the issue of whether a computer program
can be intelligent was made by Alan Turing (1963). The basic idea behind the Turing Test is
whether an observer can distinguish the performance of a computer from that of a human.
The test is conducted with a computer, a human respondent, and an interrogator. The
interrogator holds two different “conversations” through a computer terminal. The
goal of the interrogator is to figure out which of the two parties is a person communicating
through the computer, and which is the computer itself. The interrogator can ask the two
parties any questions at all. However, the computer will try to fool the interrogator into
believing that it is human. The human, in contrast, will be trying to show the interrogator
that he or she truly is human. The computer passes the Turing Test if an interrogator is
unable to distinguish the computer from the human.
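
To make the protocol concrete, here is a minimal sketch in Python (an illustration, not a
real implementation): the human_respond and machine_respond callables and the question
list are hypothetical stand-ins for the two hidden parties and the interrogator's questions.

```python
import random

# A minimal sketch of the Turing Test protocol, not a real implementation.
# human_respond and machine_respond are hypothetical callables mapping a
# question string to an answer string.

def turing_test(human_respond, machine_respond, questions):
    # Randomly hide which party is behind label "A" and which behind "B".
    parties = {"A": human_respond, "B": machine_respond}
    if random.random() < 0.5:
        parties = {"A": machine_respond, "B": human_respond}

    # The interrogator sees only labelled answers, never the parties.
    for q in questions:
        print(f"Q: {q}")
        for label in ("A", "B"):
            print(f"  {label}: {parties[label](q)}")

    guess = input("Which respondent is the computer (A/B)? ").strip().upper()
    # The computer passes the test if the interrogator guesses wrongly.
    return parties[guess] is not machine_respond
```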

Often, what researchers are interested in when assessing the “intelligence” of computers is
not their reaction time, which is often much faster than that of humans. They are interested
instead in patterns of reaction time, that is, whether the problems that take the computer
relatively longer to solve also take human participants relatively longer.

Sometimes, the goal of a computer model is not to match human performance but to
exceed it. In this case, maximum AI, rather than simulation of human intelligence, is the goal
of the program. The criterion of whether computer performance matches that of humans is
no longer relevant. Instead, the criterion of interest is that of how well the computer can
perform the task assigned to it. Computer programs that play chess, for example, typically
play in a way that emphasizes “brute force,” or the consideration of all possible moves
without respect to their quality. The programs evaluate extremely large numbers of possible
moves. Many of them are moves humans would never even consider evaluating (Berliner,
1969; Bernstein, 1958). Using brute force, the IBM program “Deep Blue” beat world
champion Garry Kasparov in a 1997 chess match. The same brute-force method is used in
programs that play checkers (Samuel, 1963). These programs generally are evaluated in
terms of how well they can beat each other or, even more importantly, human contenders
playing against them.
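
As a rough illustration of what “brute force” means here, the sketch below exhaustively
scores every line of play to a fixed depth, with no pruning and no judgment about move
quality. The moves() and value() helpers are hypothetical game-specific functions, not
part of any real chess engine.

```python
# Toy brute-force game-tree search: consider every possible move without
# respect to its quality, down to a fixed depth.

def brute_force(state, depth, maximizing, moves, value):
    successors = moves(state)              # all legal successor states
    if depth == 0 or not successors:
        return value(state)                # static evaluation at the frontier
    scores = [
        brute_force(s, depth - 1, not maximizing, moves, value)
        for s in successors                # every move, good or bad
    ]
    return max(scores) if maximizing else min(scores)
```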

Expert Systems

Expert systems are computer programs that can perform the way an expert does in a fairly
specific domain. They are not developed to model human intelligence, but to simulate
performance in just one domain, often a narrow one. They are mostly based on rules that
are followed and worked down like a decision tree.

Several programs were developed to diagnose various kinds of medical disorders, like
cancer. Such programs are obviously of enormous potential significance, given the very high
costs (financial and personal) of incorrect diagnoses. Not only are there expert systems for
use by doctors, but there are even medical expert systems on-line for use by consumers
who would like an analysis of their symptoms. Expert systems are used in other areas as
well, for example in banks. The processing of small mortgages is relatively expensive for
banks because a lot of factors need to be considered. If the data are fed into a computer,
however, an expert system makes a decision about the mortgage application based on rules
it was programmed with. There is one expert system with which you may have had some
experience yourself: Microsoft Windows offers troubleshooting through the “help section”,
where you can enter into a dialogue with the system in order to figure out a solution to your
particular problem.
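
A toy sketch of such a rule-based system for the mortgage example is given below; the
rules and thresholds are invented for illustration and are not taken from any real bank's
system.

```python
# Hedged sketch of a rule-based expert system, worked down like a decision
# tree: each rule either decides the case or passes control to the next rule.

def assess_mortgage(applicant):
    if applicant["credit_score"] < 620:
        return "reject: credit score below minimum"
    if applicant["loan_amount"] > 4 * applicant["annual_income"]:
        return "reject: loan exceeds four times annual income"
    if applicant["down_payment"] < 0.10 * applicant["loan_amount"]:
        return "refer to human underwriter: low down payment"
    return "approve"

print(assess_mortgage({"credit_score": 700, "annual_income": 50_000,
                       "loan_amount": 150_000, "down_payment": 20_000}))
# -> approve
```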

One has to be cautious in the use of expert systems. Because patients generally do not have
the knowledge their doctors have, their use of expert systems, such as on-line ones, may
lead them to incorrect conclusions about the illnesses they suffer from. In medicine, patient use
of the Internet is no substitute for the judgment of a medical doctor.

The application of expertise to problem solving generally involves converging on a single
correct solution from a broad range of possibilities. A complementary asset to expertise in
problem solving involves creativity. Here, an individual extends the range of possibilities to
consider never-before-explored options. In fact, many problems can be solved only by
inventing or discovering strategies to answer a complex question.

The FRUMP Project

Let’s consider a classic example of a computer program designed to perform reading tasks.
One script-based program is called FRUMP, an acronym for Fast Reading Understanding and
Memory Program (De Jong, 1982). The goal of FRUMP is to summarize newspaper stories,
written in ordinary language. When it was developed, FRUMP could interpret about 10% of
news releases issued by United Press International (Butcher & Kintsch, 2003; Kintsch, 1984).
FRUMP usually worked in a top-down fashion by applying world knowledge, based on 48
different scripts.

Consider, for example, the “vehicle accident” script. The script contains information such as
the number of people killed, the number of people injured, and the cause of the accident.
On the basis of the “vehicle accident” script, FRUMP summarized a news article as follows:
“A vehicle accident occurred in Colorado. A plane hit the ground. 1 person died.” FRUMP did
manage to capture the facts of the story. However, it missed the major reason that the item
was newsworthy: Yes, 1 person was killed, but 21 people actually survived!
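
The sketch below gives a toy, slot-filling flavour of how a script-based summarizer of this
kind might work; it is not FRUMP's actual code, and the script slots, keyword lists, and the
death-toll pattern are all invented for illustration.

```python
import re

# A toy sketch in the spirit of FRUMP's "vehicle accident" script: a script
# is a set of slots, keyword matching fills them from the story, and the
# filled slots drive a canned summary.

VEHICLE_ACCIDENT_SCRIPT = {
    "location": ["Colorado", "Texas", "California"],  # toy gazetteer
    "vehicle": ["plane", "car", "train", "bus"],
}

def summarize(story):
    filled = {}
    # Fill each slot with the first script keyword found in the story.
    for slot, candidates in VEHICLE_ACCIDENT_SCRIPT.items():
        filled[slot] = next((w for w in candidates if w in story), "unknown")
    # A crude pattern for the death toll; real FRUMP used far richer
    # script-driven expectations.
    m = re.search(r"(\d+) (?:person|people) (?:died|was killed|were killed)", story)
    filled["deaths"] = m.group(1) if m else "unknown"
    return (f"A vehicle accident occurred in {filled['location']}. "
            f"A {filled['vehicle']} was involved. {filled['deaths']} died.")

print(summarize("A small plane crashed near Denver, Colorado; "
                "1 person died but 21 people survived."))
# Like FRUMP, the summary captures the scripted facts and misses the
# newsworthy survival of 21 people.
```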

Research on script-based programs like FRUMP shows that humans draw numerous
inferences that artificial intelligence systems cannot access (Kintsch, 1998, 2007). We can be
impressed that FRUMP and other programs can manage some language-like processes.
However, consistent with Theme 2, their errors highlight the wide-ranging capabilities of
human readers (Thagard, 2005).

More Recent Projects

Cognitive scientists continue to develop programs designed to understand language (Moore
& Wiemer-Hastings, 2003; Shermis & Burstein, 2003; Wolfe et al., 2005). One of the most
useful artificial intelligence programs was created by cognitive psychologist Thomas
Landauer and his colleagues (Foltz, 2003; Landauer et al., 2007). Their program, called latent
semantic analysis (LSA), can perform many fairly sophisticated language tasks. For instance,
it can also be programmed to provide tutoring sessions in disciplines such as physics
(Graesser et al., 2004).

LSA can also assess the amount of semantic similarity between two discourse segments. In
fact, LSA can even be used to grade essays written by college students (Graesser et al.,
2007). For example, suppose a textbook contains the following sentence: “The phonological
loop responds to the phonetic characteristics of speech but does not evaluate speech for
semantic content” (Butcher & Kintsch, 2003, p. 551). LSA can then estimate how closely the
content of a student’s essay matches this target sentence, even when the student uses
different wording.
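
A minimal sketch of the idea using scikit-learn is shown below; this is an illustration, not
Landauer's implementation. TF-IDF vectors are reduced with truncated SVD to a small
latent semantic space, and similarity is measured there; the toy corpus and the choice of
n_components are invented, since real LSA is trained on very large text collections.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "the phonological loop stores speech based information",
    "working memory includes a phonological loop and a visuospatial sketchpad",
    "semantic content is evaluated elsewhere, not by the loop",
    "students write essays about models of working memory",
]

tfidf = TfidfVectorizer()
X = tfidf.fit_transform(corpus)                 # term-by-document weights
lsa = TruncatedSVD(n_components=2).fit(X)       # latent semantic dimensions

def similarity(text_a, text_b):
    # Project both segments into the latent space and compare them there.
    vectors = lsa.transform(tfidf.transform([text_a, text_b]))
    return cosine_similarity(vectors[:1], vectors[1:])[0, 0]

# Grading-style use: compare a student sentence against a textbook target.
print(similarity("the loop handles speech sounds, not meaning",
                 "the phonological loop responds to phonetic characteristics"))
```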

LSA is indeed impressive, but even its developers note that it cannot match a human grader.
For instance, it cannot assess a student’s creativity when writing an essay (Murray, 1998).
Furthermore, all the current programs master just a small component of language
comprehension. For example, LSA typically ignores syntax, whereas humans can easily
detect syntax errors. In addition, LSA learns only from written text, whereas humans learn
from spoken language, facial expressions, and physical gestures (Butcher & Kintsch, 2003).
Once again, the artificial intelligence approach to language illustrates humans’ tremendous
breadth of knowledge, cognitive flexibility, understanding of syntax, and sources of
information.

[Also refer CP notes]

META-COGNITION

[Refer Galloti, pg no: 343- 345] [Refer CP notes and BSc notes]
