Chapter 11 - Language

Language

What is Language?
The Creativity of Human Language
The Universal Need to Communicate with Language
Studying Language

Understanding Words: A Few Complications
Not All Words Are Created Equal: Differences in Frequency
The Pronunciation of Words Is Variable
There Are No Silences Between Words in Normal Conversation

Understanding Ambiguous Words
Accessing Multiple Meanings
➤ Method: Lexical Priming
Frequency Influences Which Meanings Are Activated
➤ TEST YOURSELF 11.1

Understanding Sentences
Parsing: Making Sense of Sentences
The Garden Path Model of Parsing
The Constraint-Based Approach to Parsing
Influence of Word Meaning
Influence of Story Context
Influence of Scene Context
Influence of Memory Load and Prior Experience with Language
Prediction, Prediction, Prediction…
➤ TEST YOURSELF 11.2

Understanding Text and Stories
Making Inferences
Situation Models

Having Conversations
The Given–New Contract
Common Ground: Taking the Other Person into Account
Establishing Common Ground
Syntactic Coordination
➤ Method: Syntactic Priming

Something to Consider: Music and Language
Music and Language: Similarities and Differences
Expectation in Music and Language
Do Music and Language Overlap in the Brain?
➤ TEST YOURSELF 11.3

CHAPTER SUMMARY
THINK ABOUT IT
KEY TERMS
COGLAB EXPERIMENTS

321

Copyright 2019 Cengage Learning. All Rights Reserved. May not be copied, scanned, or duplicated, in whole or in part. WCN 02-200-203

08271_ch11_ptg01.indd 321 4/18/18 5:13 PM


SOME QUESTIONS WE WILL CONSIDER

◗ How do we understand individual words, and how are words combined to create sentences? (325)
◗ How can we understand sentences that have more than one meaning? (331)
◗ How do we understand stories? (337)
◗ What are the connections between language and music? (347)

This chapter tells a story that begins with how we perceive and understand words; then considers how strings of words create meaningful sentences; and ends by considering how we use language to communicate in text, stories, and conversation.

Throughout this story, we will encounter the recurring themes of how readers and listeners use inference and prediction to create meaning. This chapter therefore follows in the footsteps of previous chapters that have discussed the role of inference and prediction in cognition. For example, in Chapter 3, Perception, we described Helmholtz’s theory of unconscious inference, which proposed that to deal with the ambiguity of the visual stimulus (see page 65), we unconsciously infer which of a number of possible alternatives is most likely to be what is “out there” in the environment (page 70).

We have also seen how, as we scan a scene by making a series of eye movements, these eye movements are partially guided by our knowledge of where important objects are likely to be in the scene (page 105). And in Chapter 6, Long-Term Memory, we saw how memories for past experiences are used to predict what might be likely to occur in the future (page 177).
You may wonder how inference and prediction are involved in language. You will see
that some things you may think are simple, like understanding the words in a conversation,
actually pose challenges that must be solved by bringing to bear knowledge from your past
experience with language. And then there are those constructions called sentences, which
are created by strings of words, one after the other. While you might think that understand-
ing a sentence is just a matter of adding up the meanings of the words, you will see that the
meanings of the words are just the beginning, because the order of the words also matters,
some of the words may have multiple meanings, and two sentences that are identical can
have different meanings. Just as every other type of cognition you’ve encountered so far has
turned out to be more complicated than you may have thought it would be, the same thing
holds for language. You routinely use inference and prediction to understand language, just
as you are doing right now, as you are reading this, probably without even realizing it.

What is Language?
The following definition of language captures the idea that the ability to string sounds
and words together opens the door to a world of communication: Language is a system of
communication using sounds or symbols that enables us to express our feelings, thoughts, ideas,
and experiences.
But this definition doesn’t go far enough, because it conceivably could include some
forms of animal communication. Cats “meow” when their food dish is empty; monkeys
have a repertoire of “calls” that stand for things such as “danger” or “greeting”; bees perform
a “waggle dance” at the hive that indicates the location of flowers. But as impressive as some
animal communication is, it is much more rigid than human language. Animals use a limited
number of sounds or gestures to communicate about a limited number of things that are
important for survival. In contrast, humans use a wide variety of signals, which can be com-
bined in countless ways. One of the properties of human language is, therefore, creativity.

The Creativity of Human Language


Human language provides a way of arranging a sequence of signals—sounds for spoken language, letters and written words for written language, and physical signs for sign language—to transmit, from one person to another, things ranging from the simple and commonplace (“My car is over there”) to messages that have perhaps never been previously written or uttered in the entire history of the world (“My trip with Zelda, my cousin from California who lost her job in February, was on Groundhog Day”).
Language makes it possible to create new and unique sentences because it has a structure
that is both hierarchical and governed by rules. The hierarchical nature of language means
that it consists of a series of small components that can be combined to form larger units.
For example, words can be combined to create phrases, which in turn can create sentences,
which themselves can become components of a story. The rule-based nature of language
means that these components can be arranged in certain ways (“What is my cat saying?” is
permissible in English), but not in other ways (“Cat my saying is what?” is not). These two
properties—a hierarchical structure and rules—endow humans with the ability to go far
beyond the fixed calls and signs of animals to communicate whatever we want to express.

The Universal Need to Communicate with Language


Although people do “talk” to themselves, as when Hamlet wondered, “To be or not to be,”
or when you daydream in class, language is primarily used for communication, whether
it be conversing with another person or reading what someone has written. This need to
communicate using language has been called “universal” because it occurs wherever there
are people. For example, consider the following:
➤ People’s need to communicate is so powerful that when deaf children find
themselves in an environment where nobody speaks or uses sign language, they
invent a sign language themselves (Goldin-Meadow, 1982).
➤ All humans with normal capacities develop a language and learn to follow its
complex rules, even though they are usually not aware of these rules. Although
many people find the study of grammar to be very difficult, they have no trouble
using language.
➤ Language is universal across cultures. There are more than 5,000 different
languages, and there isn’t a single culture without language. When European
explorers first set foot in New Guinea in the 1500s, the people they discovered,
who had been isolated from the rest of the world for eons, had developed more
than 750 languages, many of them quite different from one another.
➤ Language development is similar across cultures. No matter what the culture or
the particular language, children generally begin babbling at about 7 months, a few
meaningful words appear by their first birthday, and the first multiword utterances
occur at about age 2 (Levelt, 2001).
➤ Even though a large number of languages are very different from one another, we
can describe them as being “unique but the same.” They are unique in that they
use different words and sounds, and they may use different rules for combining
these words (although many languages use similar rules). They are the same in
that all languages have words that serve the functions of nouns and verbs, and all
languages include a system to make things negative, to ask questions, and to refer
to the past and present.

Studying Language
Language has fascinated thinkers for thousands of years, dating back to the ancient Greek
philosophers Socrates, Plato, and Aristotle (ca. 470–322 BCE), and before. The modern scientific study of language traces its beginnings to the work of Paul Broca (1861) and Carl
Wernicke (1874). Broca’s study of patients with brain damage led to the proposal that an
area in the frontal lobe (Broca’s area) is responsible for the production of language. Wernicke


proposed that an area in the temporal lobe (Wernicke’s area) is responsible for comprehen-
sion. We described Broca’s and Wernicke’s observations in Chapter 2 (see page 39), and also
noted that modern research has shown that the situation is quite a bit more complicated
than just two language areas in the brain (see page 44).
In this chapter, we will focus not on the connection between language and the brain,
but on behavioral research on the cognitive mechanisms of language. We take up the story
of behavioral research on language in the 1950s when behaviorism was still the dominant
approach in psychology (see page 10). In 1957, B. F. Skinner, the main proponent of be-
haviorism, published a book called Verbal Behavior, in which he proposed that language is
learned through reinforcement. According to this idea, just as children learn appropriate
behavior by being rewarded for “good” behavior and punished for “bad” behavior, chil-
dren learn language by being rewarded for using correct language and punished (or not
rewarded) for using incorrect language.
In the same year, linguist Noam Chomsky (1957) published a book titled Syntactic
Structures, in which he proposed that human language is coded in the genes. According to
this idea, just as humans are genetically programmed to walk, they are also programmed to ac-
quire and use language. Chomsky concluded that despite the wide variations that exist across
languages, the underlying basis of all language is similar. Most important for our purposes,
Chomsky saw studying language as a way to study the properties of the mind and therefore
disagreed with the behaviorist idea that the mind is not a valid topic of study for psychology.
Chomsky’s disagreement with behaviorism led him to publish a scathing review of Skin-
ner’s Verbal Behavior in 1959. In his review, he presented arguments against the behaviorist
idea that language can be explained in terms of reinforcements and without reference to the
mind. One of Chomsky’s most persuasive arguments was that as children learn language, they
produce sentences that they have never heard and that have never been reinforced. (A classic
example of a sentence that has been created by many children and that is unlikely to have been
taught or reinforced by parents is “I hate you, Mommy.”) Chomsky’s criticism of behaviorism
was an important event in the cognitive revolution and began changing the focus of the young
discipline of psycholinguistics, the field concerned with the psychological study of language.
The goal of psycholinguistics is to discover the psychological processes by which hu-
mans acquire and process language (Clark & Van der Wege, 2002; Gleason & Ratner, 1998;
Miller, 1965). The four major concerns of psycholinguistics are as follows:
1. Comprehension. How do people understand spoken and written language? This
includes how people process language sounds; how they understand words, sen-
tences, and stories expressed in writing, speech, or sign language; and how people
have conversations with one another.
2. Representation. How is language represented in the mind? This includes how peo-
ple group words together into phrases to create meaningful sentences and how they
make connections between different parts of a story.
3. Speech production. How do people produce language? This includes the physical
processes of speech production and the mental processes that occur as a person
creates speech.
4. Acquisition. How do people learn language? This includes not only how children
learn language but also how people learn additional languages, either as children or
later in life.

Because of the vast scope of psycholinguistics, we are going to restrict our attention to
the first two of these concerns, describing research on comprehension and representation,
which together explain how we understand language. The plan is to start with words, then
look at how words are combined to create sentences, then how sentences create “stories”
that we read, hear, or create ourselves as we have conversations with other people.


Understanding Words: A Few Complications


We begin our discussion of words by defining a few terms. Our lexicon is all of the words
we know, which has also been called our “mental dictionary.” Semantics is the meaning of
language. This is important for words, because each word has one or more meanings. The
meaning of words is called lexical semantics. Our goal in this section is to consider how we
determine the meanings of words. You might think that determining a word’s meaning is
simple: We just look it up in our lexicon. But determining word meaning is more compli-
cated than a single “look up.” We will now consider a number of factors that pose challenges
to perceiving and understanding words.

Not All Words Are Created Equal: Differences in Frequency


Some words occur more frequently than others in a particular language. For example, in
English, home occurs 547 times per million words, and hike occurs only 4 times per million
words. The frequency with which a word appears in a language is called word frequency,
and the word frequency effect refers to the fact that we respond more rapidly to high-fre-
quency words like home than to low-frequency words like hike. The reason this is important
is because a word’s frequency influences how we process the word.
One way to illustrate processing differences between high- and low-frequency words is
to use a lexical decision task in which the task is to decide as quickly as possible whether
strings of letters are words or nonwords. Try this for the following four letter strings: reverie, cratily, history, garvola. Note that there were two real words, reverie, which is a low-frequency
word, and history, which is a high-frequency word. Research using the lexical decision
task has demonstrated slower responding to low-frequency words (Carrol, 2004; also see
Chapter 9, page 279 for a description of another way the lexical decision task has been used).
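To make the logic of the task concrete, here is a minimal sketch of how lexical decision data might be scored. All trial items and response times below are invented for illustration; they are not data from the studies cited in this chapter.

```python
# Minimal sketch of scoring a lexical decision task. All trials and
# response times (msec) are invented for illustration, not real data.

def mean_rt(trials, frequency_band):
    """Average response time for correct responses to real words in one band."""
    rts = [t["rt"] for t in trials
           if t["band"] == frequency_band and t["is_word"] and t["correct"]]
    return sum(rts) / len(rts)

trials = [
    {"item": "history", "band": "high", "is_word": True,  "correct": True, "rt": 520},
    {"item": "home",    "band": "high", "is_word": True,  "correct": True, "rt": 505},
    {"item": "reverie", "band": "low",  "is_word": True,  "correct": True, "rt": 610},
    {"item": "hike",    "band": "low",  "is_word": True,  "correct": True, "rt": 590},
    {"item": "garvola", "band": "none", "is_word": False, "correct": True, "rt": 640},
]

effect = mean_rt(trials, "low") - mean_rt(trials, "high")
print(f"Word frequency effect: {effect:.1f} msec")
```

With these made-up numbers, the low-frequency words come out about 88 msec slower, mirroring the direction (though not the exact size) of the real effect.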
The slower response for low-frequency words has also been demonstrated by measuring people’s eye movements while reading. Keith Rayner and Susan Duffy (1986) measured participants’ eye movements and the durations of the fixations that occur as the eye pauses at a particular place (see Chapter 4, page 103) while they read sentences that contained either a high-frequency or a low-frequency target word, where frequency refers to how often a word occurs in normal language usage. The average frequencies were 5.1 times per million for the low-frequency words and 122.3 times per million for the high-frequency words. For example, the low-frequency target word in the sentence “The slow waltz captured their attention” is waltz, and replacing waltz with the high-frequency word music creates the sentence “The slow music captured their attention.” The duration of the first fixation on the words, shown in Figure 11.1a, was 37 msec longer for low-frequency words compared to high-frequency words. (Sometimes a word might be fixated more than once, as when the person reads a word and then looks back at it in response to what the person has read later in the sentence.) Figure 11.1b shows that the total gaze duration—the sum of all fixations made on a word—was 87 msec longer for low-frequency words than for high-frequency words. One reason for these longer fixations on low-frequency words could be that the readers needed more time to access the meaning of the low-frequency words. The word frequency effect, therefore, demonstrates how our past experience with words influences our ability to access their meaning.

➤ Figure 11.1 Fixation durations on low-frequency and high-frequency words in sentences measured by Rayner and Duffy (1986). (a) First fixation durations; (b) Total gaze duration. In both cases, fixation times are longer for low-frequency words. (Source: Based on data from Rayner and Duffy, 1986, Table 2, p. 195)

The Pronunciation of Words Is Variable

Another problem that makes understanding words challenging is that not everyone pronounces words in the same way. People talk with different accents and at different speeds, and, most important, people often take a relaxed approach to


pronouncing words when they are speaking naturally. For example, if you were talking to a
friend, how would you say “Did you go to class today?” Would you say “Did you” or “Dijoo”? You have your own ways of producing various words and phonemes, and other people
have theirs. For example, analysis of how people actually speak has determined that there
are 50 different ways to pronounce the word the (Waldrop, 1988).
So how do we deal with this? One way is to use the context within which the word
appears. The fact that context helps is illustrated by what happens when you hear a word
taken out of context. Irwin Pollack and J. M. Pickett (1964) showed that words are more
difficult to understand when taken out of context and presented alone, by recording the
conversations of participants who sat in a room waiting for the experiment to begin. When
the participants were then presented with recordings of single words taken out of their own
conversations, they could identify only half the words, even though they were listening to
their own voices! The fact that the people in this experiment were able to identify words as
they were talking to each other, but couldn’t identify the same words when the words were
isolated, illustrates that their ability to perceive words in conversations is aided by the con-
text provided by the words and sentences that make up the conversation.

There Are No Silences Between Words in Normal Conversation


The fact that the sounds of speech are easier to understand when we hear them spoken
in a sentence is particularly amazing when we consider that unlike the words you are now
reading that are separated by spaces, words spoken in a sentence are usually not separated
by silence. This is not what we might expect, because when we listen to someone speak we
usually hear the individual words, and sometimes it may seem as if there are silences that
separate one word from another. However, remember our discussion in Chapter 3 (page
68) in which we noted that a record of the physical energy produced by conversational
speech reveals that there are often no physical breaks between words in the speech signal or
that breaks can occur in the middle of words (see Figure 3.12).
In Chapter 3 we described an experiment by Jennifer Saffran and coworkers (2008), which showed that infants are sensitive to statistical regularities in the speech signal—the way that different sounds follow one another in a particular language—and that knowing these regularities helps infants achieve speech segmentation: the perception of individual words even though there are often no pauses between words (see page 68).
We use the statistical properties of language all the time without realizing it. For
example, we have learned that certain sounds are more likely to follow one another within
a word, and some sounds are more likely to follow each other in different words. Consider
the words pretty baby. In English, it is likely that pre and ty will follow each other in the same
word (pre-ty) and that ty and ba will be separated in two different words (pretty baby).
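The transitional-probability idea behind this kind of statistical learning can be sketched in a few lines of code. The toy syllable stream below is an assumption for illustration (real speech does not arrive pre-segmented into syllables, and these counts are invented):

```python
# Sketch of computing transitional probabilities over a toy syllable stream.
from collections import Counter

def transitional_probabilities(syllables):
    """P(B | A): how often syllable A is immediately followed by syllable B."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# A toy stream alternating "pretty baby" and "pretty doggy", run together
# with no pauses, as in continuous speech
stream = (["pre", "ty", "ba", "by"] + ["pre", "ty", "do", "gy"]) * 25
tp = transitional_probabilities(stream)

print(tp[("pre", "ty")])  # 1.0 -- within the word "pretty"
print(tp[("ty", "ba")])   # 0.5 -- across a word boundary
```

The dip in transitional probability at the boundary (1.0 within pretty versus 0.5 between ty and ba) is the kind of statistical cue that listeners appear to exploit when segmenting continuous speech.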
Another thing that aids speech segmentation is our knowledge of the meanings of
words. In Chapter 3 we pointed out that when we listen to an unfamiliar foreign language, it
is often difficult to distinguish one word from the next, but if we know a language, individ-
ual words stand out (see page 68). This observation illustrates that knowing the meanings
of words helps us perceive them. Perhaps you have had the experience of hearing individual
words that you happen to know in a foreign language seem to “pop out” from what appears
to be an otherwise continuous stream of speech.
Another example of how meaning is responsible for organizing sounds into words is
provided by these two sentences:
Jamie’s mother said, “Be a big girl and eat your vegetables.”
The thing Big Earl loved most in the world was his car.
“Big girl” and “Big Earl” are both pronounced the same way, so hearing them differently
depends on the overall meaning of the sentence in which these words appear. This example is similar to the familiar “I scream, you scream, we all scream for ice cream” that many
people learn as children. The sound stimuli for “I scream” and “ice cream” are identical, so
the different organizations must be achieved by the meaning of the sentence in which these
words appear.
So our ability to hear and understand spoken words is affected by (1) how frequently
we have encountered a word in the past; (2) the context in which the words appear; (3) our
knowledge of statistical regularities of our language; and (4) our knowledge of word mean-
ings. There’s an important message here—all of these things involve knowledge achieved by
learning/experience with language. Sound familiar? Yes, this continues the theme of the im-
portance of knowledge that occurs throughout this chapter as we consider how we under-
stand sentences, stories, and conversations. But we aren’t through with words yet, because
just to make things more interesting, many words have multiple meanings.

Understanding Ambiguous Words


Words can often have more than one meaning, a situation called lexical ambiguity. For
example, the word bug can refer to an insect, a hidden listening device, or to annoying some-
one, among other things. When ambiguous words appear in a sentence, we usually use the
context of the sentence to determine which definition applies. For example, if Susan says,
“My mother is bugging me,” we can be pretty sure that bugging refers to the fact that Susan’s
mother is annoying her, as opposed to sprinkling insects on her or installing a hidden lis-
tening device in her room (although we might need further context to totally rule out this
last possibility).

Accessing Multiple Meanings


The examples for bug indicate that context often clears up ambiguity so rapidly that we are
not aware of its existence. But research has shown that something interesting happens in the
mind right after a word is heard. Michael Tanenhaus and coworkers (1979) showed that
people briefly access multiple meanings of ambiguous words before the effect of context
takes over. They did this by presenting participants with a tape recording of short sentences
such as She held the rose, in which the target word rose is a noun referring to a flower, or They
all rose, in which rose is a verb referring to people standing up.
Tanenhaus and coworkers wanted to determine what meanings of rose occurred
in a person’s mind for each of these sentences. To do this, they used a procedure called
lexical priming.

METHOD Lexical Priming


Remember from Chapter 6 (page 182) that priming occurs when seeing a stimulus
makes it easier to respond to that stimulus when it is presented again. This is called
repetition priming, because priming occurs when the same word is repeated. The
basic principle behind priming is that the first presentation of a stimulus activates a
representation of the stimulus, and a person can respond more rapidly if this activation
is still present when the stimulus is presented again.
Lexical priming is priming that involves the meaning of words. Lexical priming
occurs when a word is followed by another word with a similar meaning. For example,
presenting the word rose and then the word flower can cause a person to respond faster
to the word flower because the meanings of rose and flower are related. This priming
effect does not, however, occur if the word cloud is presented before flower because
their meanings are not related. The presence of a lexical priming effect therefore indi-
cates whether two words, like rose and flower, have similar meanings in a person’s mind.


Tanenhaus and coworkers measured lexical priming using two conditions: (1) the noun–noun condition, in which a word is presented as a noun followed by a noun probe stimulus; and (2) the verb–noun condition, in which a word is presented as a verb followed by a noun probe stimulus.

Condition 1: She held a rose (noun)    Probe: Flower (noun)
Condition 2: They all rose (verb)    Probe: Flower (noun)

For example, in Condition 1, participants would hear a sentence like She held a rose, in which rose is a noun (a type of flower), followed immediately by the probe word flower. Their task was to read the probe word as quickly as possible. The time that elapsed between the end of the sentence and when the participant began saying the word is the reaction time.

To determine if presenting the word rose caused faster responding to flower, a control condition was run in which a sentence like She held a post was followed by the same probe word, flower. Because the meaning of post is not related to the meaning of flower, priming would not be expected, and this is what happened. As shown in the left bar in Figure 11.2a, the word rose, used as a flower, resulted in a 37 msec faster response to the word flower than in the control condition. This is what we would expect, because rose, the flower, is related to the meaning of the word flower.

➤ Figure 11.2 (a) Priming effect (decrease in response time compared to the control condition) at zero delay between word and probe. Condition 1: Noun (Example: She held a rose) followed by noun probe (flower). Condition 2: Verb (They all rose) followed by noun probe (flower). (b) Priming effect for the same conditions at 200 msec delay. (Source: Based on data from Tanenhaus et al., 1979)

Tanenhaus’s results become more significant when we consider Condition 2, when the sentence was They all rose, in which rose is a verb (people getting up) and the probe word was still flower. The control for this sentence was They all touched. The result, shown in the right bar in Figure 11.2a, shows that priming occurred in this condition as well. Even though rose was presented as a verb, it still caused a faster response to flower!
What this means is that the “flower” meaning of rose is activated immediately after hear-
ing rose, whether it is used as a noun or a verb. Tanenhaus also showed that the verb meaning
of rose is activated whether it is used as a noun or a verb, and concluded from these results that
all of an ambiguous word’s meanings are activated immediately after the word is heard.
To make things even more interesting, when Tanenhaus ran the same experiment but
added a delay of 200 msec between the end of the sentences and the probe word, the result
changed. As shown in Figure 11.2b, priming still occurs for Condition 1—rose the noun
primes flower—but no longer occurs for Condition 2—rose the verb does not prime flower.
What this means is that by 200 msec after hearing the word rose as a verb, the flower mean-
ing of rose is gone. Thus, the context provided by a sentence helps determine the meaning of
a word, but context exerts its influence after a slight delay during which other meanings of
a word are briefly accessed (also see Swinney, 1979, for a similar result and Lucas, 1999, for
more on how context affects the meaning of words.)
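In analysis terms, the priming effect is simply the control reaction time minus the primed reaction time, so a positive value means the prime speeded responding. The reaction times below are invented for illustration; only the 37 msec noun-condition effect comes from the text, and the 650 msec control value is a hypothetical baseline chosen to match it.

```python
# Sketch of how a priming effect is computed from reaction times.
# RT values are hypothetical, not Tanenhaus et al.'s (1979) raw data.

def priming_effect(control_rt_ms, primed_rt_ms):
    """Positive values mean the prime speeded responding to the probe."""
    return control_rt_ms - primed_rt_ms

# Condition 1 at zero delay: "She held a rose" -> probe "flower"
print(priming_effect(650, 613))  # 37

# Condition 2 at a 200 msec delay: "They all rose" -> probe "flower";
# the flower meaning has faded, so there is little or no effect
print(priming_effect(650, 652))  # -2
```

The same subtraction underlies both panels of Figure 11.2: each bar is a control-minus-primed difference for one condition at one delay.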

Frequency Influences Which Meanings Are Activated


While context helps determine the appropriate meaning of words in a sentence, there’s an-
other factor at work: how frequently different meanings occur, with meanings that occur
more frequently being more likely. As Matthew Traxler (2012) puts it, “Many words have
multiple meanings, but these meanings are not all created equal.” For example, consider the
word tin. The most frequent meaning of tin is a type of metal, while a less-frequent meaning
is a small metal container. The relative frequency of the meanings of ambiguous words is
described in terms of meaning dominance. A word such as tin, in which one meaning (a type of metal) occurs more often than the other (a small metal container), is an example of biased dominance. A word such as cast, in which one meaning (members of a play) and the other meaning (plaster cast) are equally likely, is an example of balanced dominance.

Copyright 2019 Cengage Learning. All Rights Reserved. May not be copied, scanned, or duplicated, in whole or in part. WCN 02-200-203

08271_ch11_ptg01.indd 328 4/18/18 5:13 PM


Understanding Ambiguous Words 329

➤ Figure 11.3 Accessing the meaning of ambiguous words while reading a sentence is determined by the word's dominance and the context created by the sentence. If there is no prior context: (a) competition between equally likely meanings of a word with balanced dominance (CAST: play; CAST: plaster) results in slow access; (b) activation of only the most frequent meaning of a word with biased dominance (TIN: metal; tin: food container) results in fast access. If there is context before a word with biased dominance: (c) activation of both the less frequent and most frequent meanings (. . . beans in a tin) results in slow access; (d) activation of only the most frequent meaning (. . . mountain to look for tin) results in fast access. See text for examples.

This difference between biased and balanced dominance influences the way people
access the meanings of words as they read them. This has been demonstrated in experiments
in which researchers measure eye movements as participants read sentences and note the
fixation time for an ambiguous word and also for a control word with just one meaning that
replaces the ambiguous word in the sentence. Consider the following sentence, in which the
ambiguous word cast has balanced dominance.
The cast worked into the night. (control word: cook)
As a person reads the word cast, both meanings of cast are activated, because cast
(member of a play) and cast (plaster cast) are equally likely. Because the two meanings
compete for activation, the person looks longer at cast than at the control word cook,
which has only one meaning as a noun. Eventually, when the reader reaches the end of the
sentence, the meaning becomes clear (Duffy et al., 1988; Rayner & Frazier, 1989; Traxler,
2012) (Figure 11.3a).
But consider the following, with the ambiguous word tin:
The tin was bright and shiny. (control word: gold)
In this case, people read the biased ambiguous word tin just as quickly as the control word,
because only the dominant meaning of tin is activated, and the meaning of tin as a metal is
accessed quickly (Figure 11.3b).


But meaning frequency isn’t the only factor that determines the accessibility of the
meaning of a word. Context can play a role as well. Consider, for example, the following
sentence, in which the context added before the ambiguous word tin indicates the less-fre-
quent meaning of tin:
The miners went to the store and saw that they had beans in a tin. (control word: cup)
In this case, when the person reaches the word tin, the less frequent meaning is activated
at increased strength because of the prior context, and the more frequent meaning of tin is
activated as well. Thus, in this example, as with the first sentence that contained the word
cast, two meanings are activated, so the person looks longer at tin (Figure 11.3c).
Finally, consider the sentence below, in which the context indicates the more frequent
meaning of tin:
The miners went under the mountain to look for tin. (control word: gold)
In this example, only the dominant meaning of tin is activated, so tin is read rapidly
(Figure 11.3d).
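The four cases above can be summarized in a small sketch. The rule "access is slow when more than one meaning is activated" is the logic of Figure 11.3; the function itself, and its labels, are illustrative assumptions:

```python
# A sketch of how meaning dominance and prior context combine to
# predict access speed for an ambiguous word, summarizing the four
# cases in Figure 11.3. The function and its labels are illustrative.

def activated_meanings(dominance, context):
    """Return the set of activated meanings for an ambiguous word.

    dominance: "balanced" or "biased"
    context:   None, "dominant", or "subordinate" (i.e., the prior
               context points to the dominant or less frequent sense)
    """
    if dominance == "balanced":
        return {"meaning_1", "meaning_2"}      # both compete (cast)
    if context == "subordinate":
        # Context boosts the rare sense, but the dominant sense is
        # activated anyway, so two meanings compete.
        return {"dominant", "subordinate"}
    return {"dominant"}                        # only one meaning

def access_speed(dominance, context=None):
    return "slow" if len(activated_meanings(dominance, context)) > 1 else "fast"

print(access_speed("balanced"))               # cast, no context
print(access_speed("biased"))                 # tin, no context
print(access_speed("biased", "subordinate"))  # beans in a tin
print(access_speed("biased", "dominant"))     # look for tin
```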
We’ve seen in this chapter that the process of accessing the meaning of a word is com-
plicated and is influenced by multiple factors. First, the frequency of a word determines
how long it takes to process its meaning. Second, if a word has more than one meaning,
the context of the sentence influences which meaning we access. Finally, our ability to
access the correct meaning of a word depends on both the word’s frequency and, for words
with more than one meaning, a combination of meaning dominance and context. So sim-
ply identifying, recognizing, and knowing the meaning of individual words is a complex
and impressive feat. However, except in rare situations in which words operate alone—as
in exclamations such as Stop! or Wait!—words are used with other words to form sen-
tences, and, as we will see next, sentences add another level of complexity to understand-
ing language.

T E ST YOUR SELF 11.1


1. What is the hierarchical nature of language? The rule-based nature of language?
2. Why has the need to communicate been called universal?
3. What events are associated with the beginning of the modern study of
language in the 1950s?
4. What is psycholinguistics? What are its concerns, and what part of
psycholinguistics does this chapter focus on?
5. What is semantics? The lexicon?
6. How does word frequency affect our processing of words? Describe the eye
movement experiment that illustrates an effect of word frequency.
7. What is the evidence that context helps people deal with the variability of word
pronunciation?
8. What is speech segmentation and why is it a problem? What are some of the
factors that help us achieve speech segmentation?
9. What is lexical ambiguity? Describe the experiment that used lexical priming to
show that (a) all of the multiple meanings of a word are accessed immediately
after the word is heard; and (b) context determines the appropriate meaning of
an ambiguous word within about 200 msec.
10. What is meaning dominance? Biased dominance? Balanced dominance?
11. How do frequency and context combine to determine the correct meaning of
ambiguous words?


Understanding Sentences
When we considered words, we saw how sentences create context, which makes it possible
to (1) deal with the variability of word pronunciations, (2) perceive individual words in
a continuous stream of speech, and (3) determine the meanings of ambiguous words. But
now we are going to go beyond just considering how sentences help us understand words,
by asking how combining words into sentences creates meaning.
To understand how we determine the meaning of a sentence, we need to consider
syntax—the structure of a sentence—and the study of syntax involves discovering cues that
languages provide that show how words in a sentence relate to one another (Traxler, 2012).
To start, let’s think about what happens as we hear a sentence. Speech unfolds over time,
with one word following another. This sequential process is central to understanding sentences, because one way to think about a sentence is as meaning unfolding over time.
What mental processes are occurring as a person hears a sentence? A simple way to
answer this question would be to picture the meaning as being created by adding up the
meanings of each word as they occur. But this idea runs into trouble right away when we
consider that some words have more than one meaning and also that a sequence of words
can have more than one meaning. The key to determining how strings of words create
meaning is to consider how meaning is created by the grouping of words into phrases—a
process called parsing.

Parsing: Making Sense of Sentences


Understanding the meaning of a sentence is a feat of mental pyrotechnics that involves understanding each word as it occurs (some of which may be ambiguous) and parsing words into phrases (Figure 11.4). To introduce parsing, let's look at some sentences. Consider, for example, a sentence that begins:

After the musician played the piano …

What do you think comes next? Some possibilities are:

a. . . . she left the stage.
b. . . . she bowed to the audience.
c. . . . the crowd cheered wildly.

➤ Figure 11.4 Parsing is the process that occurs when a person hears or reads a string of words (Words in) and groups these words into phrases in their mind (Parsed sentence in mind). The way the words are grouped in this example indicates that the person has interpreted the sentence to mean that the musician played the piano and then left the stage.

All of these possibilities, which create sentences that are easy to understand and that make sense, involve grouping the words as follows: [After the musician played the piano] [the crowd cheered wildly]. But what if the sentence continued by stating
d. . . . was wheeled off of the stage.
Reading the sentence ending in (d) as a whole, After the musician played the piano
was wheeled off of the stage, might take you by surprise because the grouping of [After
the musician played the piano] isn't correct. The correct grouping is [After the musician
played] [the piano was wheeled off of the stage]. When written, adding a comma makes
the correct parsing of this sentence clear: After the musician played, the piano was wheeled
off the stage.
Sentences like this one, which begin appearing to mean one thing but then end up
meaning something else, are called garden path sentences (from the phrase “leading a per-
son down the garden path,” which means misleading the person). Garden path sentences
illustrate temporary ambiguity, because first one organization is adopted and then—when
the error is realized—the person shifts to the correct organization.


The Garden Path Model of Parsing


Language researchers have used sentences with temporary ambiguity to help understand
the mechanisms that operate during parsing. One of the early proposals to explain parsing,
and garden path sentences in particular, is called the garden path model of parsing. This
approach, proposed by Lynn Frazier (1979, 1987), states that as people read a sentence,
their grouping of words into phrases is governed by a number of processing mechanisms
called heuristics. As we will see when we discuss reasoning and decision making, a heuristic
is a rule that can be applied rapidly to make a decision. The decisions involved in parsing are
decisions about the structure of a sentence as it unfolds in time.
Heuristics have two properties: On the positive side, they are fast, which is important
for language, which occurs at about 200 words per minute (Traxler, 2012). On the negative
side, they sometimes result in the wrong decision. These properties become apparent in a
sentence like After the musician played the piano was wheeled off the stage, in which the initial
parse of the sentence turns out to be incorrect. The garden path model proposes that when
this happens, we reconsider the initial parse and make appropriate corrections.
The garden path model specifies not only that rules are involved in parsing, but that
these rules are based on syntax—the structural characteristic of language. We will focus
on one of these syntax-based principles, which is called late closure. The principle of late
closure states that when a person encounters a new word, the person’s parsing mechanism
assumes that this word is part of the current phrase, so each new word is added to the cur-
rent phrase for as long as possible (Frazier, 1987).
Let’s return to the sentence about the musician to see how this works. The person begins
reading the sentence:
After the musician played …
So far, all the words are in the same phrase. But what happens when we reach the words
the piano? According to late closure, the parsing mechanism assumes that the piano is part
of the current phrase, so the phrase now becomes
After the musician played the piano …
So far, so good. But when we reach was, late closure adds this to the phrase to create
After the musician played the piano was …
And then, when wheeled is added to create an even longer phrase, it becomes obvious that
something is wrong. Late closure has led us astray (down the garden path!) by adding too
many words to the first phrase. We need to reconsider, taking the meaning of the sentence
into account, and reparse the sentence so “the piano” is not added to the first phrase. Instead,
it becomes part of the second phrase to create the grouping
[After the musician played] [the piano was wheeled off the stage].
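The control flow the garden path model describes (attach each word greedily, then reanalyze when the parse breaks down) can be sketched as a toy function. The clash position and the reanalysis point are hand-supplied here, standing in for the grammatical knowledge a real parser would use:

```python
# A toy illustration of late closure and garden-path repair. The
# grammar check is faked with a hand-supplied clash position, because
# the point is the control flow (attach greedily, then reanalyze),
# not real syntax.

def parse_with_late_closure(words, clash_at=None, reopen_from=None):
    """Return a list of phrases.

    clash_at:    index of the word where the greedy one-phrase parse
                 breaks down (e.g., "was" in the musician sentence).
    reopen_from: index where the second phrase actually begins after
                 reanalysis (e.g., "the" of "the piano").
    """
    if clash_at is None:
        return [words]                 # greedy parse succeeded
    # Garden path: late closure attached too many words to phrase 1,
    # so reanalyze by moving the ambiguous noun phrase into phrase 2.
    return [words[:reopen_from], words[reopen_from:]]

sentence = "after the musician played the piano was wheeled off the stage".split()
# "was" (index 6) signals the clash; reanalysis starts phrase 2 at
# "the piano" (index 4).
print(parse_with_late_closure(sentence, clash_at=6, reopen_from=4))
# [['after', 'the', 'musician', 'played'],
#  ['the', 'piano', 'was', 'wheeled', 'off', 'the', 'stage']]
```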
The garden path model generated a great deal of research, which resulted in support
for the model (Frazier, 1987). However, some researchers questioned the proposal that syn-
tactic rules like late closure operate alone to determine parsing until it becomes obvious
that a correction is needed (Altmann et al., 1992; Tanenhaus & Trueswell, 1995). These
researchers have provided evidence to show that factors in addition to syntax can influence
parsing right from the beginning.

The Constraint-Based Approach to Parsing


The idea that information in addition to syntax participates in processing as a person reads
or hears a sentence is called the constraint-based approach to parsing. As we consider
some examples that show how parsing can be influenced by factors in addition to syntax,


we will encounter a theme we introduced at the beginning of the chapter: Information contained in the words of a sentence, and in the context within which a sentence occurs, is used to make predictions about how the sentence should be parsed (Kuperberg & Jaeger, 2015).

Influence of Word Meaning Here are two sentences that illustrate how the meaning of words in a sentence can influence parsing right from the beginning. They differ in how hard they are to figure out because of the meanings of the second words in each sentence.

1. The defendant examined by the lawyer was unclear.
2. The evidence examined by the lawyer was unclear.

Which one was easier to figure out as you were reading along? The process that occurs as sentence (1) is unfolding is illustrated in Figure 11.5a. After reading The defendant examined, two possibilities present themselves: (1) the defendant could be examining something or (2) the defendant could be being examined by someone else. It’s only after reading the rest of the sentence, by the lawyer, that it is possible to definitely determine that the defendant is being examined.

In contrast, only one possibility presents itself after reading The evidence examined in sentence (2), because it is unlikely that the evidence will be doing any examining (Figure 11.5b).

➤ Figure 11.5 (a) Two possible predictions that could be made after reading or hearing The defendant examined in sentence (1): The defendant is going to either examine something (top) or be examined by someone else (bottom). (b) The only possible reading of The evidence examined in sentence (2) is that the evidence is examined by someone. The possibility that the evidence was going to examine something is highly unlikely.

Here are two more examples:

1. The dog buried in the sand was hidden.
2. The treasure buried in the sand was hidden.

Which one of these is more likely to initially lead to the wrong conclusion, and why?
Influence of Story Context Consider the following sentence pro-
posed by Thomas Bever (1970), which has been called the most famous
garden path sentence because of the confusion it causes:
The horse raced past the barn fell
Whoa! What’s going on here? For many people, everything is fine until they hit fell.
Readers are often confused, and may even accuse the sentence of being ungrammatical. But
let’s look at the sentence in the context of the following story:
There were two jockeys who decided to race their horses. One raced his horse along
the path that went past the garden. The other raced his horse along the path that went
past the barn. The horse raced past the barn fell.
Of course, the confusion could have been avoided by simply stating that the horse that was
raced past the barn fell, but even without these helpful words, context wins the day and we
parse the sentence correctly!

Influence of Scene Context Parsing of a sentence is influenced not only by the con-
text provided by stories but also by context provided by scenes. To investigate how ob-
serving objects in a scene can influence how we interpret a sentence, Michael Tanenhaus
and coworkers (1995) developed a technique called the visual world paradigm, which
involves determining how information in a scene can influence how a sentence is pro-
cessed. Participants’ eye movements were measured as they saw objects on a table, as in


➤ Figure 11.6 (a) One-apple scene similar to the one viewed by Tanenhaus et al.’s (1995) participants. (b) Eye movements made while comprehending the task. (c) Proportion of trials in which eye movements were made to the towel on the right for the ambiguous sentence (Place the apple on the towel in the box) and the unambiguous sentence (Place the apple that’s on the towel in the box).

Figure 11.6a. As participants looked at this display, they were told to carry out the fol-
lowing instructions:
Place the apple on the towel in the box.
When participants heard the phrase Place the apple, they moved their eyes to the apple,
then hearing on the towel, they looked at the other towel (Figure 11.6b). They did this be-
cause at this point in the sentence they were assuming that they were being told to put the
apple on the other towel. Then, when they heard in the box, they realized that they were
looking at the wrong place and quickly shifted their eyes to the box.
The reason participants looked first at the wrong place was that the sentence is am-
biguous. First it seems like on the towel means where the apple should be placed, but then it
becomes clear that on the towel is referring to where the apple is located. When the ambiguity
was removed by changing the sentence to Place the apple that’s on the towel in the box, participants immediately focused their attention on the box. Figure 11.6c shows this result.
When the sentence was ambiguous, participants looked at the other towel on 55 percent of
the trials; when it wasn’t ambiguous, participants didn’t look at the other towel.
Tanenhaus also ran another condition in which he presented a two-apple display like
the one in Figure 11.7a. Because there are two apples, participants interpreted on the towel to
be indicating which apple they should move, and so looked at the apple and then at the box
(Figure 11.7b). Figure 11.7c shows that participants looked at the other towel on only about
10 percent of the trials for both place the apple on the towel (the ambiguous sentence) and place
the apple that’s on the towel (the non-ambiguous sentence) when looking at this display. The
fact that the eye movement patterns were the same for the ambiguous and non-ambiguous
sentences means that in this context the participants were not led down the garden path.
The important result of this study is that the participants’ eye movements occur as they
are reading the sentence and are influenced by the contents of the scene. Tanenhaus therefore
showed that participants take into account not only information provided by the syntactic
structure of the sentence, but also by what Tanenhaus calls non-linguistic information—
in this case, information provided by the scene. This result argues against the idea proposed


➤ Figure 11.7 (a) Two-apple scene similar to the one viewed by Tanenhaus et al.’s (1995) participants. (b) Eye movements made while comprehending the task. (c) Proportion of trials in which eye movements were made to the towel on the right for the ambiguous sentence (Place the apple on the towel in the box) and the unambiguous sentence (Place the apple that’s on the towel in the box).

by the garden path model that syntactic rules are the only thing taken into account as a
sentence is initially unfolding.
Influence of Memory Load and Prior Experience with Language Consider these
two sentences:
1. The senator who spotted the reporter shouted
2. The senator who the reporter spotted shouted
These sentences have the same words, but they are arranged differently to create different
constructions. Sentence (2) is more difficult to understand, as indicated by research that
shows that readers spend longer looking at the part of the sentence following who in sen-
tences with structures like sentence (2) (Traxler et al., 2002).
To understand why sentence (2) is more difficult to understand, we need to break these
sentences down into clauses. Sentence (1) has two clauses:
Main clause: The senator shouted.
Embedded clause: The senator spotted the reporter.
The embedded clause is called embedded, because who spotted the reporter is inside the
main clause. The senator is the subject of both the main clause and the embedded clause.
This construction is called a subject-relative construction.
Sentence [2] also contains two clauses:
Main clause: The senator shouted.
Embedded clause: The reporter spotted the senator.
In this case, the senator is the subject of the main clause, as before, and is also replaced
by who in the embedded clause, but is the object in this clause. The senator is the object
because he is the target who was spotted. (The reporter is the subject of this clause, because
he did the spotting.) This construction is called an object-relative construction.


One reason the object-relative construction is more difficult to understand is because


it demands more of the reader’s memory. In sentence [1] we find out who did the “spotting”
right away. It was the senator. But in sentence [2] “spotted” is near the end of the sentence so
we need to hold the early part of the sentence in memory until we find out that the reporter
did the “spotting.” This higher memory load slows down processing.
The second reason the object-relative construction is more difficult to understand is that it
is more complicated, because while the senator is the subject in both the main and embedded
clauses in sentence [1], it is the subject of the main clause and the object of the embedded clause
in sentence [2]. This more complex construction not only makes the object-relative construc-
tion more difficult to process, it may be a reason that it is less prevalent in English. Subject-
relative constructions account for 65 percent of relative clause constructions (Reali &
Christiansen, 2007), and being more prevalent has an important effect—we have more exposure
to subject-relative constructions, so we have more practice understanding these constructions. In
fact, we have learned to expect that in sentences of this type, pronouns like who, which, or that
are usually followed by a verb (spotted in sentence 1). So when the pronoun isn’t followed by a
verb, as in sentence 2, we have to reconsider and adapt to the different construction. Does this
sound familiar? As we have seen from examples like the defendant examined and the horse raced,
making predictions during a sentence that turn out to be wrong slows down sentence processing.

Prediction, Prediction, Prediction…


The examples we’ve considered so far—the defendant being examined, the falling horse,
the apple on the towel, and the shouting senator—all have something in common. They
illustrate how people make predictions about what is likely to happen next in a sentence.
We predict that The defendant examined means that the defendant is going to examine
something but instead it turns out that the defendant is being examined! Oops! Our incor-
rect prediction has led us down the garden path. Similarly, we predict that The horse raced
is going to say something about how the horse raced (The horse raced faster than it ever had
before), but instead we find that raced refers to which horse was racing. We predict that we
are being asked to place the apple on the other towel, but it turns out to be otherwise.
But even though incorrect predictions can temporarily throw us off track, most of the
time prediction is our friend. We are constantly making predictions about what is likely to
happen next in a sentence, and most of the time these predictions
are correct. These correct predictions help us deal with the rapid
pace of language. And prediction becomes even more important
when language is degraded, as in a poor phone connection, or is
heard in a noisy environment, or when you are trying to under-
stand someone with a foreign accent.
Gerry Altmann and Yuki Kamide (1999) did an experiment that measured participants’ eye movements to show that they were making predictions as they listened to a sentence.
Figure 11.8 shows a picture similar to one from the experiment.
Participants heard either The boy will move the cake or The boy
will eat the cake while viewing this scene. For both of these sen-
tences, cake is the target object.
Participants were told to indicate whether the sentence they heard could be applied to the picture. Altmann and Kamide
didn’t care how they responded in this task. What they did care
about was how they were processing the information as they were hearing the sentences.

➤ Figure 11.8 A picture similar to one used in Altmann and Kamide’s (1999) experiment in which they measured eye movements that occurred as participants heard a sentence while looking at the picture.

Let’s consider what might be happening as the sentences unfold: First, The boy will move . . . What do you think the boy is


going to move? The answer isn’t really clear, because the boy could move the car, the train,
the ball, or even the cake. Now consider The boy will eat . . . This one is easy. The boy will
eat the cake.
Measurement of participants’ eye movements as they were hearing these sentences indi-
cated that eye movements toward the target object (cake in this example) occurred 127 msec
after hearing the word cake for the move sentences and 87 msec before hearing the word cake
for the eat sentences. Thus, hearing the word eat causes the participant to begin looking
toward the cake before he or she even hears the word. Eat leads to the prediction that cake
will be the next word.
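The mechanism this result implies (the verb narrows the set of plausible upcoming objects) can be sketched as a toy model. The scene inventory and the "edible"/"movable" features are illustrative assumptions, not part of Altmann and Kamide's materials:

```python
# A sketch of verb-based prediction: the verb's requirements restrict
# which objects in the scene are plausible upcoming referents. The
# scene and its features are hand-supplied assumptions.

SCENE = {"cake":  {"edible", "movable"},
         "ball":  {"movable"},
         "train": {"movable"},
         "car":   {"movable"}}

VERB_REQUIRES = {"eat": "edible", "move": "movable"}

def predicted_objects(verb, scene=SCENE):
    """Objects in the scene compatible with the verb's requirement."""
    needed = VERB_REQUIRES[verb]
    return sorted(obj for obj, feats in scene.items() if needed in feats)

print(predicted_objects("eat"))   # only the cake qualifies, so the
                                  # listener can look at it early
print(predicted_objects("move"))  # every object qualifies, so no
                                  # unique prediction is possible
```

On this sketch, "eat" yields a single candidate, matching the finding that eye movements reached the cake before the word cake was even heard.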
This kind of prediction is likely occurring constantly as we hear or read sentences. As
we will see in the next sections, predictions also play an important role in understanding
stories and having conversations.

T E ST YOUR SELF 11.2


1. What is syntax?
2. What is parsing? What are garden path sentences?
3. Describe the garden path model of parsing. Be sure you understand what a
heuristic is and the principle of late closure.
4. Describe the constraint-based approach to parsing. How does it differ from the
garden path approach?
5. Describe the following lines of evidence that support the constraint-based
approach to parsing:
➤ How meanings of words in a sentence affect parsing.
➤ How story context affects parsing.
➤ How scene context affects processing. Be sure you understand the visual
world paradigm.
➤ How memory load and predictions based on knowledge of language structure
affect parsing. Be sure you understand the difference between subject-relative
and object-relative constructions and why object-relative constructions are
harder to understand.
6. How can garden path sentences be related to prediction?
7. How is prediction important for understanding sentences?

Understanding Text and Stories


Just as sentences are more than the sum of the meanings of individual words, stories are more
than the sum of the meanings of individual sentences. In a well-written story, sentences in
one part of the story are related to sentences in other parts of the story. The reader’s task
is to use these relationships between sentences to create a coherent, understandable story.
An important part of the process of creating a coherent story is making inferences—
determining what the text means by using our knowledge to go beyond the information
provided by the text. We have seen how unconscious inference is involved in perception
(Chapter 3, page 70), and when we described the constructive nature of memory in Chap-
ter 8, we saw that we often make inferences, often without realizing it, as we retrieve mem-
ories of what has happened in the past (page 240).

Making Inferences
An early demonstration of inference in language was an experiment by John Bransford and
Marcia Johnson (1973), in which they had participants read passages and then tested them


to determine what they remembered. One of the passages Bransford and Johnson’s partic-
ipants read was
John was trying to fix the birdhouse. He was pounding the nail when his father came
out to watch him and help him do the work.
After reading that passage, participants were likely to indicate that they had previously
seen the following passage: “John was using a hammer to fix the birdhouse when his father
came out to watch him and help him do the work.” They often reported seeing this pas-
sage, even though they had never read that John was using a hammer, because they inferred
that John was using a hammer from the information that he was pounding the nail. People
use a similar creative process to make a number of different types of inferences as they are
reading a text.
One role of inference is to create connections between parts of a story. This process is
typically illustrated with excerpts from narrative texts. Narrative refers to texts in which
there is a story that progresses from one event to another, although stories can also in-
clude flashbacks of events that happened earlier. An important property of any narrative
is coherence—the representation of the text in a person’s mind that creates clear relations
between parts of the text and between parts of the text and the main topic of the story.
Coherence can be created by a number of different types of inference. Consider the fol-
lowing sentence:
Riffifi, the famous poodle, won the dog show. She has now won the last three shows
she has entered.
What does she refer to? If you picked Riffifi, you are using anaphoric inference—
inferring that both occurrences of she in the second sentence refer to Riffifi. In the
previous “John and the birdhouse” example, knowing that He in the second sentence refers
to John is another example of anaphoric inference.
We usually have little trouble making anaphoric inferences because of the way informa-
tion is presented in sentences and our ability to make use of knowledge we bring to the sit-
uation. But the following quote from a New York Times interview with former heavyweight
champion George Foreman (also known for lending his name to a popular line of grills)
puts our ability to create anaphoric inference to the test.
. . . we really love to . . . go down to our ranch. . . .I take the kids out and we fish. And
then, of course, we grill them. (Stevens, 2002)
From just the structure of the sentences, we might conclude that the kids were grilled,
but we know the chances are pretty good that the fish were grilled, not George Foreman’s
children! Readers are capable of creating anaphoric inferences even under adverse condi-
tions because they add information from their knowledge of the world to the information
provided in the text.
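The contribution of world knowledge to anaphoric inference can be illustrated with a toy sketch. This is not a model from the chapter; the knowledge table, the verb check, and the ordering of candidate antecedents are all illustrative assumptions:

```python
# Toy sketch of anaphoric inference: choose a pronoun's antecedent by
# recency, but let a dash of world knowledge veto implausible choices.
# The knowledge table and verb check are illustrative, not a real
# language-processing model.

WORLD_KNOWLEDGE = {"grill": {"fish", "burgers", "vegetables"}}  # plausible objects per verb

def resolve(verb, candidates):
    """Return the most recent candidate that is plausible as the object of verb.

    candidates: noun strings, most recent mention last (order illustrative).
    """
    plausible = WORLD_KNOWLEDGE.get(verb)
    for noun in reversed(candidates):       # recency: try the newest mention first
        if plausible is not None and noun not in plausible:
            continue                        # world-knowledge veto (don't grill the kids!)
        return noun
    return None

# ". . . I take the kids out and we fish. And then, of course, we grill them."
print(resolve("grill", ["fish", "kids"]))  # fish: "kids" is more recent here but is vetoed
print(resolve("watch", ["fish", "kids"]))  # kids: no veto applies, so recency wins
```

Sentence structure alone would allow either antecedent; the veto step is a stand-in for the reader's knowledge that fish, not children, get grilled.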
Here’s another opportunity to use your powers of inference. What do you picture
upon reading the sentence, “William Shakespeare wrote Hamlet while he was sitting at
his desk”? It is likely that from what you know about the time Shakespeare lived that he
was probably using a quill pen (not a laptop computer!) and that his desk was made of
wood. This is an example of instrument inference. Similarly, inferring from the passage
about John and the birdhouse that he is using a hammer to pound the nails would be an
instrument inference.
Here’s another one:
Sharon took an aspirin. Her headache went away.
You probably inferred that Her was referring to Sharon, but what caused her headache to
go away? Nowhere in these two sentences is that question answered, unless you engage in


causal inference, in which you infer that the events described in one clause or sentence
were caused by events that occurred in a previous sentence, and infer that taking the as-
pirin made her headache go away (Goldman et al., 1999; Graesser et al., 1994; Singer
et al., 1992; van den Broek, 1994). But what can you conclude from the following two
sentences?
Sharon took a shower. Her headache went away.
You might conclude, from the fact that the headache sentence directly follows the
shower sentence, that the shower had something to do with eliminating Sharon’s head-
ache. However, the causal connection between the shower and the headache is weaker
than the connection between the aspirin and the headache in the first pair of sentences.
Making the shower–headache connection requires more work from the reader. You
might infer that the shower relaxed Sharon, or perhaps her habit of singing in the shower
was therapeutic. Or you might decide there actually isn’t much connection between the
two sentences. Going back to our discussion of how information in stories can aid in
parsing, we can also imagine that if we had been reading a story about Sharon, which
previously had described how Sharon loved taking showers because they took away her
tension, then you might be more likely to give the shower credit for eliminating her
headache.
Inferences create connections that are essential for creating coherence in texts, and
making these inferences can involve creativity by the reader. Thus, reading a text involves
more than just understanding words or sentences. It is a dynamic process that involves
transformation of the words, sentences, and sequences of sentences into a meaningful
story. Sometimes this is easy, sometimes harder, depending on the skill and intention
of both the reader and the writer (Goldman et al., 1999; Graesser et al., 1994; van den
Broek, 1994).
We have been describing the process of text comprehension so far in terms of how peo-
ple bring their knowledge to bear to infer connections between different parts of a story.
Another approach to understanding how people understand stories is to consider the na-
ture of the mental representation that people form as they read a story.

Situation Models
What do we mean when we say people form mental representations as they read a story?
One way to answer this question is to think about what’s happening in your mind as you
read. For example, the runner jumped over the hurdle probably brings up an image of a run-
ner on a track, jumping a hurdle. This image goes beyond information about phrases, sen-
tences, or paragraphs; instead, it is a representation of the situation in terms of the people,
objects, locations, and events being described in the story (Barsalou, 2008, 2009; Graesser
& Wiemer-Hastings, 1999; Zwaan, 1999).
This approach to how we understand sentences proposes that as people read or hear a
story, they create a situation model, which simulates the perceptual and motor (movement)
characteristics of the objects and actions in a story. This idea has been tested by having par-
ticipants read a sentence that describes a situation involving an object and then indicate
as quickly as possible whether a picture shows the object mentioned in the sentence. For
example, consider the following two sentences.
1. He hammered the nail into the wall.
2. He hammered the nail into the floor.
In Figure 11.9a, the horizontal nail matches the orientation that would be expected for
sentence (1), and the vertical nail matches the orientation for sentence (2). Robert Stanfield
and Rolf Zwaan (2001) presented these sentences, followed by either a matching picture or


➤ Figure 11.9 Stimuli similar to those used in (a) Stanfield and Zwaan's (2001) "orientation" experiment [(1) "He hammered the nail into the wall." (2) "He hammered the nail into the floor."] and (b) Zwaan et al.'s (2002) "shape" experiment [(3) "The ranger saw the eagle in the sky." (4) "The ranger saw the eagle in its nest."]. Subjects heard sentences and were then asked to indicate whether the picture was the object mentioned in the sentence.

➤ Figure 11.10 Results of Stanfield and Zwaan's (2001) and Zwaan et al.'s (2002) experiments. Subjects responded "yes" more rapidly for the orientation, in (a), and the shape, in (b), that was more consistent with the sentence. [Bar graph: reaction time (ms), roughly 600–900 ms, Match vs. No Match for each experiment.]

a nonmatching picture. Because the pictures both show nails and the task was to indicate
whether the picture shows the object mentioned in the sentence, the correct answer was “yes”
no matter which nail was presented. However, participants responded "yes" more rapidly
when the picture's orientation matched the situation described in the sentence (Figure 11.10a).
The pictures for another experiment, involving object shape, are shown in Figure 11.9b.
The sentences for these pictures are
1. The ranger saw the eagle in the sky.
2. The ranger saw the eagle in its nest.
In this experiment, by Zwaan and coworkers (2002), the picture of an eagle with wings
outstretched elicited a faster response when it followed sentence (1) than when it followed
sentence (2). Again, reaction times were faster when the picture matched the situation de-
scribed in the sentence. This result, shown in Figure 11.10b, matches the result for the ori-
entation experiment, and both experiments support the idea that the participants created
perceptions that matched the situation as they were reading the sentences.
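The match/mismatch logic of these experiments can be sketched as a small analysis script. The reaction times below are fabricated for illustration; only the direction of the difference (faster responses for matching pictures) reflects the reported result:

```python
# Sketch of a match/no-match reaction-time comparison in the style of
# Stanfield and Zwaan (2001). All RTs (ms) are fabricated for illustration.
from statistics import mean

trials = [
    ("match", 652), ("match", 688), ("match", 671),
    ("mismatch", 734), ("mismatch", 761), ("mismatch", 742),
]

def condition_mean(data, condition):
    """Mean reaction time for one condition."""
    return mean(rt for cond, rt in data if cond == condition)

match_rt = condition_mean(trials, "match")
mismatch_rt = condition_mean(trials, "mismatch")
print(f"match: {match_rt:.0f} ms, mismatch: {mismatch_rt:.0f} ms")

# Pictures matching the sentence's implied orientation are verified faster.
assert match_rt < mismatch_rt
```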
Another study that demonstrates how situations are represented in the mind was
carried out by Ross Metusalem and coworkers (2012), who were interested in how our
knowledge about a situation is activated in our mind as we read a story. Metusalem mea-
sured the event-related potential (ERP), which we introduced in Chapter 5 (see page 156)
as participants were reading a story. The ERP has a number of different components. One
of these components is called the N400 wave because it is a negative response that occurs
about 400 msec after a word is heard or read. One of the characteristics of the N400 re-
sponse is that the response is larger when a word in a sentence is unexpected. This is shown
in Figure 11.11. The blue record shows the N400 response to eat in the sentence The cat


won’t eat. But if the sentence is changed to The cat won’t bake, then the unexpected word
The cats won’t EAT . . .
bake elicits a larger response. The cats won’t BAKE . . .
Metusalem recorded the ERP as participants read scenarios such as the following: N400

Concert Scenario
The band was very popular and Joe was sure the concert would be sold out. Amazingly, 0
he was able to get a seat down in front. He couldn’t believe how close he was when he
saw the group walk out onto the (stage/guitar/barn) and start playing.
+
Three different versions of each scenario were created, using each of the words shown
in parentheses. Each participant read one version of each scenario.
If you were reading this scenario, which word would you predict to follow “he saw the
group walk out onto the . . . ”? Stage is the obvious choice, so it was called the “expected” 0 200 400 600 800
condition. Guitar doesn’t fit the passage, but since it is related to concerts and bands, it is Time (ms)
called the “event-related” word. Barn doesn’t fit the passage and is also not related to the How semantics affects N400
topic, so it is called the “event-unrelated” word.
Figure 11.12 shows the average ERPs recorded as participants read the target words. ➤ Figure 11.11 The N400 wave of the
Stage was the expected word, so there is only a small N400 response to this word. The in- ERP is affected by the meaning of
teresting result is the response to the other two words. Barn causes a large N400, because the word. It becomes larger (red line)
it isn’t related to the passage. Guitar, which doesn’t fit the passage either but is related to when the meaning of the word does
“concerts,” generates a smaller N400 than barn. not fit the rest of the sentence.
We would expect stage to generate little or no N400 response, because it fits the (Source: Osterhout et al., 1997)

meaning of the sentence. However, the fact that guitar generates a smaller N400
than barn means that this word is at least slightly activated by the concert scenario.
According to Metusalem, our knowledge about different situations is continually Expected (like stage)
Event-related (like guitar)
being accessed as we read a story, and if guitar is activated, it is also likely that other Event-unrelated (like barn)
words related to concerts, such as drums, vocalist, crowds, and beer (depending on 5μv
your experience with concerts), would also be activated. Negative
The idea that many things associated with a particular scenario are activated is
connected with the idea that we create a situation model while we are reading. What
the ERP results show is that as we read, models of the situation are activated that
include lots of details based on what we know about particular situations (also see
Kuperberg, 2013; Paczynski & Kuperberg, 2012). In addition to suggesting that 0 400
we are constantly accessing our world knowledge as we read or listen to a story, re-
sults like these also indicate that we access this knowledge rapidly, within fractions
of a second after reading a particular word.
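The three-way N400 comparison can be sketched the same way. The amplitudes below are invented (more negative = larger N400); only the ordering of the three conditions mirrors the reported pattern:

```python
# Sketch of the three-condition N400 comparison in Metusalem et al. (2012).
# Amplitudes (mean microvolts in the N400 window, more negative = larger
# N400) are fabricated per-participant values; only the condition ordering
# mirrors the reported pattern.
from statistics import mean

n400_amplitude = {
    "expected":        [-0.5, -0.8, -0.3],   # e.g., "stage"
    "event_related":   [-2.1, -2.6, -2.3],   # e.g., "guitar"
    "event_unrelated": [-4.0, -4.4, -4.2],   # e.g., "barn"
}

means = {cond: mean(vals) for cond, vals in n400_amplitude.items()}

# A larger N400 is a more negative mean amplitude, so the event-related
# word sits between the expected and event-unrelated words.
assert means["expected"] > means["event_related"] > means["event_unrelated"]
print({cond: round(m, 2) for cond, m in means.items()})
```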
➤ Figure 11.12 Results of Metusalem et al.'s (2012) experiment for the concert scenario. The key result is that the N400 response to an event-related word like guitar (red curve) is smaller than the response to an event-unrelated word like barn (blue curve). This suggests that even though guitar doesn't fit in the sentence, the person's knowledge that guitars are associated with concerts is activated. [Legend: expected (like stage), event-related (like guitar), event-unrelated (like barn); scale: 5 μV, negative plotted upward.]

Another aspect of the situation model approach is the idea that a reader or listener simulates the motor characteristics of the objects in a story. According to this idea, a story that involves movement will result in simulation of this movement as the person is comprehending the story. For example, reading a story about a bicycle elicits not only the perception of what a bicycle looks like but also properties associated with movement, such as how a bicycle is propelled (by pedaling) and the physical exertion involved in riding the bicycle under different conditions (climbing hills, racing, coasting). This corresponds to the idea introduced in Chapter 9 that knowledge about a category goes beyond simply identifying a typical object in that category: It also includes various properties of the object, such as how the object is used, what it does, and sometimes even emotional responses it elicits. This way of looking at the reader's response adds a richness to events in a story that extends beyond simply understanding what is going on (Barsalou, 2008; Fischer & Zwaan, 2008).
Zwaan, 2008).
We saw in Chapter 9 (page 290) how Olaf Hauk and coworkers (2004) determined
the link between movement, action words, and brain activation by measuring brain activity
using fMRI under two conditions: (1) as participants moved their right or left foot, left


or right index finger, or tongue; (2) as participants read “action words” such as kick (foot
action), pick (finger or hand action), or lick (tongue action).
Hauk’s results show areas of the cortex activated by the actual movements (Figure 9.26a,
page 290) and by reading the action words (Figure 9.26b). The activation is more extensive
for actual movements, but the activation caused by reading the words occurs in approxi-
mately the same areas of the brain. For example, leg words and leg movements elicit activity
near the brain’s center line, whereas arm words and finger movements elicit activity farther
from the center line. This link between action words and activation of action areas in the
brain suggests a physiological mechanism that may be related to creating situation models
as a person reads a story.

The overall conclusion from research on how people comprehend stories is that un-
derstanding a text or story is a creative and dynamic process. Understanding stories in-
volves understanding sentences by determining how words are organized into phrases; then
determining the relationships between sentences, often using inference to link sentences in
one part of a story to sentences in another part; and finally, creating mental representations
or simulations that involve both perceptual and motor properties of objects and events in
the story. As we will now see, a creative and dynamic process also occurs when two or more
people are having a conversation.

Having Conversations
Although language can be produced by a single person talking alone, as when a person recites
a monologue or gives a speech, the most common form of language production is conver-
sation—two or more people talking with one another. Conversation, or dialogue, provides
another example of a cognitive skill that seems simple but contains underlying complexities.
Having a conversation is often easy, especially if you know the person you are talking
with and have talked with them before. But sometimes conversations can become more diffi-
cult, especially if you’re talking with someone for the first time. Why should this be so? One
answer is that when talking to someone else, it helps if you have some awareness of what the
other person knows about the topic you’re discussing. Even when both people bring similar
knowledge to a conversation, it helps if speakers take steps to guide their listeners through
the conversation. One way of achieving this is by following the given–new contract.

The Given–New Contract


The given–new contract states that a speaker should construct sentences so that they in-
clude two kinds of information: (1) given information—information that the listener al-
ready knows; and (2) new information—information that the listener is hearing for the first
time (Haviland & Clark, 1974). For example, consider the following two sentences.
Sentence 1. Ed was given an alligator for his birthday.
Given information (from previous conversation): Ed had a birthday.
New information: He got an alligator.

Sentence 2. The alligator was his favorite present.


Given information (from sentence [1]): Ed got an alligator.
New information: It was his favorite present.
Notice how the new information in the first sentence becomes the given information in the
second sentence.


Susan Haviland and Herbert Clark (1974) demonstrated the consequences of not follow-
ing the given–new contract by presenting pairs of sentences and asking participants to press
a button when they thought they understood the second sentence in each pair. They found
that it took longer for participants to comprehend the second sentence in pairs like this one:
We checked the picnic supplies.
The beer was warm.
than it took to comprehend the second sentence in pairs like this one:
We got some beer out of the trunk.
The beer was warm.
The reason comprehending the second sentence in the first pair takes longer is that the
given information (that there were picnic supplies) does not mention beer. Thus, the reader
or listener needs to make an inference that beer was among the picnic supplies. This infer-
ence is not required in the second pair because the first sentence includes the information
that there is beer in the trunk.
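One way to picture the given–new contract is as simple bookkeeping: each utterance's given information is checked against a running discourse record, and anything missing must be bridged by inference. The sketch below is a toy illustration with hand-coded propositions, not parsed text:

```python
# Toy sketch of the given-new contract (Haviland & Clark, 1974). A
# sentence's given information is checked against the discourse record;
# anything missing must be bridged by inference, which costs extra
# comprehension time. Propositions are hand-coded illustrative strings.

def comprehend(discourse, given, new):
    """Return the given propositions that require a bridging inference."""
    bridged = [g for g in given if g not in discourse]
    discourse.update(new)          # new information becomes given for later sentences
    return bridged

# Fast pair: "We got some beer out of the trunk. The beer was warm."
d1 = set()
comprehend(d1, given=[], new=["there is beer"])
assert comprehend(d1, given=["there is beer"], new=["the beer is warm"]) == []

# Slow pair: "We checked the picnic supplies. The beer was warm."
d2 = set()
comprehend(d2, given=[], new=["there are picnic supplies"])
# Beer was never mentioned, so the reader must infer it was among the supplies.
assert comprehend(d2, given=["there is beer"], new=["the beer is warm"]) == ["there is beer"]
print("bridging inference needed only in the second pair")
```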
The idea of given and new captures the collaborative nature of conversations. Herbert
Clark (1996) sees collaboration as being central to the understanding of language. Describ-
ing language as “a form of joint action,” Clark proposes that understanding this joint action
involves considering not only providing given and new information but also taking into
account the knowledge, beliefs, and assumptions that the other person brings to the conver-
sation, a process called establishing common ground (Isaacs & Clark, 1987).

Common Ground: Taking the Other Person into Account


Common ground is the mental knowledge and beliefs shared among conversational parties
(Brown-Schmidt & Hanna, 2011). The key word in this definition is shared. When two
people are having a conversation, each person may have some idea of what the other person
knows about what they are discussing, and as the conversation continues, the amount of
shared information increases. What’s especially important about this sharing of informa-
tion is that each person is not only accumulating information about the topic at hand (like
knowing in our discussion of given–new that the beer was in the trunk), but they are also
accumulating information about what the other person knows. Having a conversation is,
after all, about you and the other person, and conversations go more smoothly if you know
as much as possible about the other person.
One example of how having a successful conversation depends on understanding what
the other person knows is how doctors who communicate well with patients usually assume
that their patients have limited knowledge of physiology and medical terminology. Taking
this into account, these doctors use lay terminology, such as heart attack rather than myo-
cardial infarction. However, if the doctor realizes that the patient is also a doctor, he or she
knows that it is permissible to use medical terminology (Isaacs & Clark, 1987).

Establishing Common Ground


Going beyond knowing how much knowledge people bring to a conversation, a great deal
of research on common ground considers how people establish common ground during a
conversation. One way this is studied is to analyze transcripts of conversations. The follow-
ing example is a conversation among three students who are trying to recall a scene from the
movie The Secret of Roan Inish (Brennan et al., 2010):
Leah: um . . . then he gets punished or whatever?
Dale: what was that, a wreath or—
Leah: yeah it was some kind of browny—


Adam: yeah it was some kind of straw thing or something


Leah: mhm
Dale: around his neck
Leah: so that everybody knew what he did or something?
Adam: straw wreath
Dale: yeah
One thing this conversation reveals is that people often talk not in whole sentences
but in fragments. It also illustrates how a conversation unfolds in an orderly way, as the
conversation reconstructs the events the people are talking about. In the end, they come to
a conclusion that everyone shares.
Another way of studying how common ground is established is through the referential
communication task, a task in which two people are exchanging information in a conver-
sation, when this information involves reference—identifying something by naming or de-
scribing it (Yule, 1997). An example of a referential communication task is provided by an
experiment by P. Stellman and Susan Brennan (1993; described in Brennan et al., 2010) in
which two partners, A (the director) and B (the matcher), had identical sets of 12 cards with
pictures of abstract geometrical objects. A arranged the cards in a specific order. B’s task was
to arrange her cards in the same order. Because B couldn’t see A’s cards, she had to determine
the identity of each card through conversation. Here is an example of a conversation that
resulted in B understanding the identity of one of A’s cards.
Trial 1:
A: ah boy this one ah boy alright it looks kinda like on the right top there’s a square
that looks diagonal
B: uh huh
A: and you have sort of another like rectangle shape, the—like a triangle, angled, and
on the bottom it’s ah I don’t know what that is, glass shaped
B: alright I think I got it
A: it’s almost like a person kind of in a weird way
B: yeah like like a monk praying or something
A: right yeah good great
B: alright I got it. (they go on to the next card)
After all of the cards are identified and placed in the correct order, A rearranges the cards and the partners repeat the task two more times. Trials 2 and 3, shown below, indicate that the conversations for these trials are much briefer:

Trial 2:
B: 9 is that monk praying
A: yup (they go on to the next card)

Trial 3:
A: number 4 is the monk
B: ok (they go on to the next card)

➤ Figure 11.13 An abstract picture used by Stellmann and Brennan (1993) to study common ground. Each description was proposed by a different pair of participants in a referential communication task: "A bat," "The candle," "The anchor," "The rocket ship," "The Olympic torch," "The Canada symbol," "The symmetrical one," "Shapes on top of shapes," "The one with all the shapes," "The bird diving straight down," "The airplane flying straight down," "The angel upside down with sleeves," "The man jumping in the air with bell bottoms on." (Source: From Brennan, Galati, and Kuhlen, 2010. Originally from Stellmann and Brennan, 1993.)

What this means is that the partners have established common ground. They know what each other knows and can refer to the cards by the


names they have created. Figure 11.13 shows another of the geometrical objects used in
this task (not the “monk” object) and the descriptive names established by 13 different
A–B pairs. It is clear that it doesn’t matter what the object is called, just so both partners
have the same information about the object. And once common ground is established,
conversations flow much more smoothly.
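The shrinking of referring expressions across trials can be quantified directly, for example by counting the words each reference requires. The sketch below uses abridged versions of the transcripts above:

```python
# Sketch: quantify how referring expressions shrink as common ground is
# established, using word counts on abridged versions of the chapter's
# Stellmann and Brennan transcripts. Word count is the only measure here.

trials = {
    1: "ah boy this one ah boy alright it looks kinda like on the right top "
       "there's a square that looks diagonal ... it's almost like a person "
       "kind of in a weird way ... like a monk praying or something",
    2: "9 is that monk praying",
    3: "number 4 is the monk",
}

# Words needed to identify the same card on each trial
lengths = {t: len(utterance.split()) for t, utterance in trials.items()}
print(lengths)

# Entrainment: once both partners share the label "the monk," a few words suffice.
assert lengths[1] > lengths[2] and lengths[1] > lengths[3]
```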
The process of creating common ground results in entrainment—synchronization
between the two partners. In this example, synchronization occurs in the naming of the
objects on the cards. But entrainment also occurs in other ways as well. Conversational
partners can establish similar gestures, speaking rate, body positions, and sometimes pro-
nunciation (Brennan et al., 2010). We now consider how conversational partners can end
up coordinating their grammatical constructions—an effect called syntactic coordination.

Syntactic Coordination
When two people exchange statements in a conversation, it is common for them to use
similar grammatical constructions. Kathryn Bock (1990) provides the following example,
taken from a recorded conversation between a bank robber and his lookout, which was in-
tercepted by a ham radio operator as the robber was removing the equivalent of $1 million
from a bank vault in England.
Robber: “… you’ve got to hear and witness it to realize how bad it is.”
Lookout: “You have got to experience exactly the same position as me, mate, to under-
stand how I feel.” (from Schenkein, 1980, p. 22)
Bock has added italics to illustrate how the lookout copied the form of the robber’s
statement. This copying of form reflects a phenomenon called syntactic priming—hearing
a statement with a particular syntactic construction increases the chances that a sentence
will be produced with the same construction. Syntactic priming is important because it can
lead people to coordinate the grammatical form of their statements during a conversation.
Holly Branigan and coworkers (2000) illustrated syntactic priming by using the following
procedure to set up a give-and-take between two people.

METHOD Syntactic Priming


In a syntactic priming experiment, two people engage in a conversation, and the ex-
perimenter determines whether a specific grammatical construction used by one per-
son causes the other person to use the same construction. In Branigan’s experiment,
participants were told that the experiment was about how people communicate when
they can’t see each other. They thought they were working with another participant
who was on the other side of a screen (the person on the left in Figure 11.14a). In re-
ality, person A, on the left, was working with the experimenter, and person B, on the
right, was the participant in the experiment.
Person A began the experiment by making a priming statement, as shown on the
left of Figure 11.14a. This statement was in one of the following two forms:

The girl gave the book to the boy.


or
The girl gave the boy the book.
The participant responded by locating, among the cards laid out on the table, the
matching card that corresponded to the confederate’s statement, as shown on the right
in Figure 11.14a. The participant then picked the top card from the response pile on



➤ Figure 11.14 The Branigan et al. (2000) experiment. (a) The subject (right) picks, from
the cards laid out on the table, a card with a picture that matches the statement read by
the confederate (left). (b) The participant then takes a card from the pile of response cards
and describes the picture on the response card to the confederate. This is the key part of
the experiment, because the question is whether the participant on the right will match the
syntactic construction used by the confederate on the left.

the corner of the table, looked at the picture on the card, and described it to the
confederate. The question is: How does B phrase his or her description? Saying “The
father gave his daughter a present” to describe the picture in Figure 11.14b matches
A’s syntax in this example. Saying “The father gave a present to his daughter” would
not match the syntax. If the syntax does match, as in the example in Figure 11.14b, we
can conclude that syntactic priming has occurred.


Branigan found that on 78 percent of the trials, the form of B’s description matched
the form of A’s priming statement. This supports the idea that speakers are sensitive to the
linguistic behavior of other speakers and adjust their behaviors to match. This coordination
of syntactic form between speakers reduces the computational load involved in creating a
conversation because it is easier to copy the form of someone else’s sentence than it is to
create your own form from scratch.
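Scoring an experiment like Branigan's comes down to classifying each description's construction and computing a match rate. The sketch below uses a crude surface test for prepositional-object (PO, "gave X to Y") versus double-object (DO, "gave Y X") datives; the trial data are fabricated:

```python
# Sketch: score syntactic priming in the style of Branigan et al. (2000)
# by classifying each dative description as PO or DO and computing how
# often the participant's form matched the prime. The classifier and the
# trial data are illustrative fabrications.

def dative_form(sentence):
    """Crude surface classifier: PO if the sentence contains ' to ', else DO."""
    return "PO" if " to " in f" {sentence} " else "DO"

pairs = [  # (prime form, participant's description)
    ("PO", "The father gave a present to his daughter"),
    ("PO", "The nurse handed a chart to the doctor"),
    ("DO", "The father gave his daughter a present"),
    ("DO", "The girl gave the boy the book"),
    ("DO", "The chef passed a plate to the waiter"),  # non-matching response
]

matches = sum(prime == dative_form(response) for prime, response in pairs)
print(f"{matches}/{len(pairs)} descriptions matched the prime's construction")
```

Branigan's reported figure of 78 percent is exactly this kind of proportion, computed over the real trials.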

Let’s summarize what we have said about conversations: Conversations are dynamic
and rapid, but a number of processes make them easier. On the semantic side, people take
other people’s knowledge into account and help establish common ground if necessary. On
the syntactic side, people coordinate or align the syntactic form of their statements. This
makes speaking easier and frees up resources to deal with the task of alternating between
understanding and producing messages that is the hallmark of successful conversations.
But this discussion illustrates just a few things about conversations and grounding.
There are lots of things going on. Think about what a person has to do to maintain a con-
versation. First, the person has to plan what he or she is going to say, while simultaneously
taking in the other person’s input and doing what is necessary to understand it. Part of
understanding what the other person means involves theory of mind, the ability to under-
stand what others feel, think, or believe (Corballis, 2017), and also the ability to interpret
and react to the person’s gestures, facial expressions, tone of voice, and other things that
provide cues to meaning (Brennan et al., 2010; Horton & Brennan, 2016). Finally, just to
make things even more interesting, each person in a conversation has to anticipate when it
is appropriate to enter the conversation, a process called “turn taking” (Garrod & Pickering,
2015; Levinson, 2016). Thus, communication by conversation goes beyond simply analyz-
ing strings of words or sequences of sentences. It also involves all of the complexities inherent
in social interactions, yet somehow, we are able to do it, often effortlessly.

SOMETHING TO CONSIDER
Music and Language
Diana Deutsch (2010) relates a story about an experience she had while testing tape loops for a
lecture on music and the brain. As she was doing something, with the taped phrase “sometimes
I behave so strangely” repeating over and over in the background, she was suddenly surprised to
hear a strange woman singing. After determining that no one else was there, she realized that she
was hearing her own voice from the tape loop, but the repeating words on the tape had morphed
into song in her mind. Deutsch found that other people also experienced this speech-to-song
effect and concluded that there is a close connection between song and speech.

Music and Language: Similarities and Differences


Connections between music and language extend beyond song and speech to music and lan-
guage in general. Emotion is a central player in both. Music has been called the “language of
emotion,” and people often state that emotion is one of the main reasons they listen to music.
Emotion in language is often created by prosody—the pattern of intonation and rhythm in
spoken language (Banziger & Scherer, 2005; Heffner & Slevc, 2015). Orators and actors cre-
ate emotions by varying the pitch of their voice and the cadence of their words, speaking softly
to express tenderness or loudly to emphasize a point or to capture the audience’s attention.
But emotion also illustrates a difference between music and language. Music cre-
ates emotion through sounds that in themselves have no meaning. But listening to a film
score leaves no doubt that these meaningless sounds can create meaning, with emo-
tions following close behind (Boltz, 2004). Language, on the other hand, creates emo-
tions using meaningful words, so the emotions elicited by I hate you and I love you are
caused directly by the meanings of the words hate and love. Recently the introduction of
Copyright 2019 Cengage Learning. All Rights Reserved. May not be copied, scanned, or duplicated, in whole or in part. WCN 02-200-203


emojis—pictographs like the ones in Figure 11.15—has provided another way of indicating
emotions in written language (Evans, 2017). The emoji on the far right, which is called "face
with tears of joy," was named the "Word of the Year" for 2015 by the Oxford English Dictionary.

➤ Figure 11.15 Examples of emojis, which, like words, are used to indicate emotions in language. The emoji on the right, "face with tears of joy," was named Word of the Year by the Oxford English Dictionary. While words and pictures can indicate emotions to someone hearing or reading language, sounds elicit emotions in a person who is listening to music.

➤ Figure 11.16 The first line of "Twinkle, twinkle, little star." (Source: From Goldstein, Sensation and Perception 10e, Figure 12.24, page 305)

An important similarity between music and language is that they both combine
elements—tones for music and words for language—to create structured sequences. These
sequences are organized into phrases and are governed by syntax—rules for arranging these
components (Deutsch, 2010). Nonetheless, it is obvious that there are differences between
creating phrases in response to instrumental music and creating phrases when reading a book
or having a conversation. Although music and language both unfold over time and have syn-
tax, the rules for combining notes and words are very different. Notes are combined based on
their sound, with some sounds going together better than others. But words are combined
based on their meanings. There are no analogues for nouns and verbs in music, and there is no
“who did what to whom” in music (Patel, 2013).
More items could be added to both the “similarities” and “differences” lists, but the
overall message is that although there are important differences in the outcomes and mech-
anisms associated with music and language, they are also sim-
ilar in many respects. We will explore two of these areas of
overlap in more detail: expectations and brain mechanisms.

Expectations in Music and Language


We have discussed the role of expectations in language by
describing how readers and listeners are constantly making
predictions about what might be coming next as a sentence
unfolds. A similar situation exists in music.
To illustrate expectation in music, let’s consider how the notes
of a melody are organized around the note associated with the
composition's key, which is called the tonic (Krumhansl, 1985).
For example, C is the tonic for the key of C and its associated scale:
C, D, E, F, G, A, B, C. Organizing pitches around a tonic creates a
framework within which a listener generates expectations about
what might be coming next. One common expectation is that a
song that begins with the tonic will end on the tonic. This effect,
which is called return to the tonic, occurs in "Twinkle, Twinkle,
Little Star," which begins and ends on a C in the example in
Figure 11.16. To illustrate this, try singing the first line of
"Twinkle, Twinkle, Little Star," but stop at "you," just before the
song has returned to the tonic. The effect of pausing just before
the end of the phrase, which could be called a violation of musical
syntax, is unsettling and has us longing for the final note that will
bring us back to the tonic.

➤ Figure 11.17 (a) The musical phrase heard by participants in Patel and coworkers' (1998) experiment. The location of the target chord is indicated by the downward-pointing arrow. The chord in the music staff is the "In key" chord. The other two chords were inserted in that position for the "Nearby key" and "Distant key" conditions. (b) ERP responses to the target chord. Solid = in-key; dotted = near-key; dashed = far-key. (Source: From Goldstein, Sensation and Perception, 10e, Fig. 12.28, p. 307)

Another violation of musical syntax occurs when an unlikely
note or chord is inserted that doesn't seem to "fit" in the tonality
of the melody. Aniruddh Patel and coworkers (1998) had participants listen to a musical phrase like the one in Figure 11.17,

which contained a target chord, indicated by the arrow above the music. There were three
different targets: (1) an "In key" chord that fit the piece, shown on the musical staff; (2) a
"Nearby key" chord that didn't fit as well; and (3) a "Distant key" chord that fit even less
well. In the behavioral part of the experiment, listeners judged the phrase as acceptable 80
percent of the time when it contained the in-key chord; 49 percent when it contained the
nearby-key chord; and 28 percent when it contained the distant-key chord. One way of
stating this result is that listeners were judging how "grammatically correct" each version was.

In the physiological part of the experiment, Patel used the event-related potential
(ERP) to measure the brain's response to violations of syntax. When we discussed the ERP
in connection with the "Concert Scenario" experiment on page 341, we saw that the N400
component of the ERP became larger in response to words that didn't fit into a sentence,
like bake in The cat won't bake. Patel focused on another component of the ERP, called the
P600, because it is a positive response that occurs 600 msec after the onset of a word. One
of the properties of the P600 is that it becomes larger in response to violations of syntax.
For example, the blue curve in Figure 11.18 shows the response that occurs after the word
eat in the grammatically correct sentence The cats won't eat; this response shows no P600.
However, the red curve, elicited by the word eating, which is grammatically incorrect in the
sentence The cats won't eating, has a large P600 response.

➤ Figure 11.18 The P600 wave of the ERP is affected by grammar. It becomes larger (red line) when a grammatically incorrect form is used. Stimuli: "The cats won't EAT . . ." (blue) versus "The cats won't EATING . . ." (red). (Source: Osterhout et al., 1997)

Patel measured large P600 responses when his participants listened to sentences that
contained violations of grammar, as in the example in Figure 11.18. He then measured the
ERP response to each of the three musical targets in Figure 11.17. Figure 11.17b shows that
there is no P600 response when the phrase contained the in-key chord (black record), but
that there are P600 responses for the two other chords, with the bigger response for the
more out-of-key chord (red record). Patel concluded from this result that music, like lan-
guage, has a syntax that influences how we react to it. Other studies following Patel’s have
also found that electrical responses like P600 occur to violations of musical syntax (Koelsch,
2005; Koelsch et al., 2000; Maess et al., 2001; Vuust et al., 2009).
Looking at the idea of musical syntax in a larger context, we can assume that as we listen
to music, we are focusing on the notes we are hearing, while (without thinking about it) we
have expectations about what is going to happen next. You can demonstrate your ability to pre-
dict what is going to happen by listening to a song, preferably instrumental and not too fast. As
you listen, try guessing what notes or phrases are coming next. This is easier for some
compositions, such as ones with repeating themes, but even when there isn't repetition,
what's coming usually isn't a surprise. What's particularly compelling about this
exercise is that it often works even for music you are hearing for the first time. Just as
our perception of visual scenes we are seeing for the first time is influenced by our past
experiences in perceiving the environment, our perception of music we are hearing for
the first time can be influenced by our history of listening to music.

Do Music and Language Overlap in the Brain?

Patel's demonstration of similar electrical responses to violations of musical syntax
and language syntax shows that both music and language involve similar
processes. But we can't say, based on this finding alone, that music and language
involve overlapping areas of the brain.

➤ Figure 11.19 Performance on language syntax tasks and musical syntax tasks for aphasic participants and control participants. (Source: From Patel et al., 2008)

Early research on brain mechanisms of music and language involved studying
patients with brain damage due to stroke. Patel and coworkers (2008) studied
a group of stroke patients who had Broca's aphasia—difficulty in understanding
sentences with complex syntax (see page 39). These patients and a group of controls
were given (1) a language task that involved understanding syntactically complex
sentences; and (2) a music task that involved detecting the off-key chords in a
sequence of chords. The results of these tests, shown in Figure 11.19, indicate that
the patients performed very poorly on the language task compared to the controls

(right pair of bars), and that the patients also performed more poorly on the music task (left
pair of bars). Two things are noteworthy about these results: (1) there is a connection
between poor performance on the language task and poor performance on the music
task, which suggests a connection between the two; and (2) the deficits in the music task
task, which suggests a connection between the two; and (2) the deficits in the music task
for aphasia patients were small compared to the deficits in the language tasks. These results
support a connection between brain mechanisms involved in music and language, but not
necessarily a strong connection.
Other neuropsychological studies have provided evidence that different brain mecha-
nisms are involved in music and language. For example, patients who are born having prob-
lems with music perception—a condition called congenital amusia—have severe problems
with tasks such as discriminating between simple melodies or recognizing common tunes.
Yet these individuals often have normal language abilities (Patel, 2013).
Cases have also been observed that demonstrate differences in the opposite direction.
Robert Slevc and coworkers (2016) tested a 64-year-old woman who had Broca’s aphasia
caused by a stroke. She had trouble comprehending complex sentences and had great diffi-
culty putting words together into meaningful thoughts. Yet she was able to detect out-of-
key chords in sequences like those presented by Patel (Figure 11.17a). Neuropsychological
research therefore provides evidence for separate brain mechanisms for music and language
(also see Peretz & Hyde, 2003).
Brain mechanisms have also been studied using neuroimaging. Some of these studies
have shown that different areas are involved in music and language (Fedorenko et al., 2012).
Other studies have shown that music and language activate overlapping areas of the brain.
For example, Broca’s area, which is involved in language syntax, is also activated by music
(Fitch & Martins, 2014; Koelsch, 2005, 2011; Peretz & Zatorre, 2005).
It has also been suggested that even if neuroimaging identifies an area that is activated by
both music and language, this doesn't necessarily mean that music and language are activating
the same neurons within that area. There is evidence that music and language activation can
occur within an area but involve different neural networks (Figure 11.20) (Peretz et al., 2015).

➤ Figure 11.20 Illustration of the idea that two different capacities, such as language and music, might activate the same structure in the brain (indicated by the circle), but when looked at closely, each capacity could activate different networks (red or black) within the structure. The small circles represent neurons, and the lines represent connections.

The conclusion from all of these studies—both behavioral and physiological—is that
while there is evidence for the separateness of music and language in the brain, especially from
neuropsychological studies, there is also evidence for overlap between music and language,
mostly from behavioral and brain scanning studies. Thus, it seems that music and language
are related, but the overlap isn't complete, as might be expected when you consider the
difference between reading your cognitive psychology textbook and listening to your favorite
music. Clearly, our knowledge of the relation between music and language is still a "work in
progress," which, as it continues, will add to our understanding of both music and language.

TEST YOURSELF 11.3


1. What does the “fixing the birdhouse” experiment indicate about inference?
2. What is coherence? Describe the different types of inference that help achieve
coherence.
3. What is the assumption behind a situation model? Describe what the following
evidence tells us about this approach to understanding stories: (a) reaction
times for pictures that match or don’t match the orientations or shapes of
objects in a story; (b) brain activation for action words compared to actual
action; (c) predictions based on the situation.
4. What is the given–new contract?
5. What is common ground? How is it established in a conversation?
6. What does the “abstract picture” experiment tell us about how common
ground is established?


7. Once common ground is established, what happens?


8. What is syntactic coordination? Describe the syntactic priming experiment that
was used to demonstrate syntactic coordination.
9. What are some similarities and differences between music and language?
10. What is the tonic? What does return to the tonic say about expectation in
music?
11. Describe Patel’s experiment in which he measured the P600 response of the
ERP to syntactic violations. What does his result say about connections between
music and language?
12. What is the evidence for and against the idea that music and language activate
overlapping areas of the brain?
13. If music and language both activate the same brain area, can we say with
certainty that they share neural mechanisms?

CHAPTER SUMMARY
1. Language is a system of communication that uses sounds or symbols that enable us to express our feelings, thoughts, ideas, and experiences. It is hierarchical and rule-based.
2. Modern research in the psychology of language blossomed in the 1950s and 1960s, with the advent of the cognitive revolution. One of the central events in the cognitive revolution was Chomsky's critique of Skinner's behavioristic analysis of language.
3. All the words a person knows are his or her lexicon. Semantics is the meaning of language.
4. The ability to understand words in a sentence is influenced by word frequency. This has been demonstrated using the lexical decision task and by measuring eye movements.
5. The pronunciation of words is variable, which can make it difficult to perceive words when they are heard out of context.
6. There are often no silences between words during normal speech, which gives rise to the problem of speech segmentation. Past experience with words, the word's context, statistical properties of language, and knowledge of the meanings of words help solve this problem.
7. Lexical ambiguity refers to the fact that a word can have more than one meaning. Tanenhaus used the lexical priming technique to show that (1) multiple meanings of ambiguous words are accessed immediately after they are heard, and (2) the "correct" meaning for the sentence's context is identified within 200 msec.
8. The relative frequency of the meanings of ambiguous words is described in terms of meaning dominance. Some words have biased dominance, some have balanced dominance. The type of dominance, combined with the word's context, influences which meaning is accessed.
9. Syntax is the structure of a sentence. Parsing is the process by which words in a sentence are grouped into phrases. Grouping into phrases is a major determinant of the meaning of a sentence. This process has been studied by using garden path sentences that illustrate the effect of temporary ambiguity.
10. Two mechanisms proposed to explain parsing are (1) the garden path model and (2) the constraint-based approach. The garden path model emphasizes how syntactic principles such as late closure determine how a sentence is parsed. The constraint-based approach states that semantics, syntax, and other factors operate simultaneously to determine parsing. The constraint-based approach is supported by (a) the way words with different meanings affect the interpretation of a sentence, (b) how story context influences parsing, (c) how scene context, studied using the visual world paradigm, influences parsing, and (d) how the effect of memory load and prior experience with language influences understandability.
11. Coherence enables us to understand stories. Coherence is largely determined by inference. Three major types of inference are anaphoric, instrumental, and causal.
12. The situation model approach to text comprehension states that people represent the situation in a story in terms of the people, objects, locations, and events that are being described in the story.


13. Measurements of brain activity have demonstrated how similar areas of the cortex are activated by reading action words and by actual movements.
14. Experiments that measure the ERP response to passages show that many things associated with the passage are activated as the passage is being read.
15. Conversations, which involve give-and-take between two or more people, are made easier by procedures that involve cooperation between participants in a conversation. These procedures include the given–new contract and establishing common ground.
16. Establishing common ground has been studied by analyzing transcripts of conversations. As common ground is established, conversations become more efficient.
17. The process of creating common ground results in entrainment—synchronization between the people in the conversation. One demonstration of entrainment is provided by syntactic coordination—how people's grammatical constructions become coordinated.
18. Music and language are similar in a number of ways. There is a close relation between song and speech, music and language both cause emotion, and both consist of organized sequences.
19. There are important differences between music and language. They create emotions in different ways, and rules for combining tones and words are different. The most important difference is based on the fact that words have meanings.
20. Expectation occurs in both music and language. These parallel effects have been demonstrated by experiments using the ERP to assess the effect of syntactic violations in both music and language.
21. There is evidence for separateness and overlap of music and language in the brain.

THINK ABOUT IT
1. How do the ideas of coherence and connection apply to some of the movies you have seen lately? Have you found that some movies are easy to understand whereas others are more difficult? In the movies that are easy to understand, does one thing appear to follow from another, whereas in the more difficult ones, some things seem to be left out? What is the difference in the "mental work" needed to determine what is going on in these two kinds of movies? (You can also apply this kind of analysis to books you have read.)
2. The next time you are able to eavesdrop on a conversation, notice how the give-and-take among participants follows (or does not follow) the given–new contract. Also, notice how people change topics and how that affects the flow of the conversation. Finally, see if you can find any evidence of syntactic priming. One way to "eavesdrop" is to be part of a conversation that includes at least two other people. But don't forget to say something every so often!
3. One of the interesting things about languages is the use of "figures of speech," which people who know the language understand but nonnative speakers often find baffling. One example is the sentence "He brought everything but the kitchen sink." Can you think of other examples? If you speak a language other than English, can you identify figures of speech in that language that might be baffling to English-speakers?
4. Newspaper headlines are often good sources of ambiguous phrases. For example, consider the following actual headlines: "Milk Drinkers Are Turning to Powder," "Iraqi Head Seeks Arms," "Farm Bill Dies in House," and "Squad Helps Dog Bite Victim." See if you can find examples of ambiguous headlines in the newspaper, and try to figure out what it is that makes the headlines ambiguous.
5. People often say things in an indirect way, but listeners can often still understand what they mean. See if you can detect these indirect statements in normal conversation. (Examples: "Do you want to turn left here?" to mean "I think you should turn left here"; "Is it cold in here?" to mean "Please close the window.")
6. It is a common observation that people are more irritated by nearby cell phone conversations than by conversations between two people who are physically present. Why do you think this occurs? (See Emberson et al., 2010, for one answer.)


KEY TERMS
Anaphoric inference, 338
Balanced dominance, 328
Biased dominance, 328
Broca's aphasia, 349
Causal inference, 339
Coherence, 338
Common ground, 343
Congenital amusia, 350
Constraint-based approach to parsing, 332
Emoji, 348
Entrainment, 345
Garden path model of parsing, 332
Garden path sentence, 331
Given–new contract, 342
Heuristic, 332
Hierarchical nature of language, 323
Inference, 337
Instrument inference, 338
Language, 322
Late closure, 332
Lexical ambiguity, 327
Lexical decision task, 325
Lexical priming, 327
Lexical semantics, 325
Lexicon, 325
Meaning dominance, 328
Narrative, 338
Object-relative construction, 335
Parsing, 331
Prosody, 347
Psycholinguistics, 324
Referential communication task, 344
Return to the tonic, 348
Rule-based nature of language, 323
Semantics, 325
Situation model, 339
Speech segmentation, 326
Subject-relative construction, 335
Syntactic coordination, 345
Syntactic priming, 345
Syntax, 331
Temporary ambiguity, 331
Theory of mind, 347
Tonic, 348
Visual world paradigm, 333
Word frequency, 325
Word frequency effect, 325

COGLAB EXPERIMENTS Numbers in parentheses refer to the experiment number in CogLab.

Categorical Perception: Discrimination (39)
Categorical Perception: Identification (40)
Lexical Decision (41)
Neighborhood Size Effect (43)
Word Superiority (44)



Bruce Goldstein

People solve problems in many different ways. We will see that sometimes solving a problem
involves hard work and methodical analysis, while other times solutions to problems can
appear to happen in a flash of insight. We will also see that sometimes letting your mind “rest,”
perhaps to wander or daydream, as this woman might be doing as she sits by the canal, can play
an important role in leading to creative solutions to problems.

