Chapter 11 - Language

What is Language?
    The Creativity of Human Language
    The Universal Need to Communicate with Language
    Studying Language
Understanding Words: A Few Complications
    Not All Words Are Created Equal: Differences in Frequency
    The Pronunciation of Words Is Variable
    There Are No Silences Between Words in Normal Conversation
Understanding Ambiguous Words
    Accessing Multiple Meanings
    ➤ Method: Lexical Priming
    Frequency Influences Which Meanings Are Activated
    ➤ TEST YOURSELF 11.1
Understanding Sentences
    Parsing: Making Sense of Sentences
    The Garden Path Model of Parsing
    The Constraint-Based Approach to Parsing
        Influence of Word Meaning
        Influence of Story Context
        Influence of Scene Context
        Influence of Memory Load and Prior Experience with Language
    Prediction, Prediction, Prediction…
    ➤ TEST YOURSELF 11.2
Understanding Text and Stories
    Making Inferences
    Situation Models
Having Conversations
    The Given–New Contract
    Common Ground: Taking the Other Person into Account
    Establishing Common Ground
    Syntactic Coordination
    ➤ Method: Syntactic Priming
Something to Consider: Music and Language
    Music and Language: Similarities and Differences
    Expectation in Music and Language
    Do Music and Language Overlap in the Brain?
    ➤ TEST YOURSELF 11.3
CHAPTER SUMMARY
THINK ABOUT IT
KEY TERMS
COGLAB EXPERIMENTS
What is Language?
The following definition of language captures the idea that the ability to string sounds
and words together opens the door to a world of communication: Language is a system of
communication using sounds or symbols that enables us to express our feelings, thoughts, ideas,
and experiences.
But this definition doesn’t go far enough, because it conceivably could include some
forms of animal communication. Cats “meow” when their food dish is empty; monkeys
have a repertoire of “calls” that stand for things such as “danger” or “greeting”; bees perform
a “waggle dance” at the hive that indicates the location of flowers. But as impressive as some
animal communication is, it is much more rigid than human language. Animals use a limited
number of sounds or gestures to communicate about a limited number of things that are
important for survival. In contrast, humans use a wide variety of signals, which can be com-
bined in countless ways. One of the properties of human language is, therefore, creativity.
This creativity lets us produce messages ranging from the commonplace (“My car is over there”) to messages that have perhaps never been previously written or
uttered in the entire history of the world (“My trip with Zelda, my cousin from California
who lost her job in February, was on Groundhog Day”).
Language makes it possible to create new and unique sentences because it has a structure
that is both hierarchical and governed by rules. The hierarchical nature of language means
that it consists of a series of small components that can be combined to form larger units.
For example, words can be combined to create phrases, which in turn can create sentences,
which themselves can become components of a story. The rule-based nature of language
means that these components can be arranged in certain ways (“What is my cat saying?” is
permissible in English), but not in other ways (“Cat my saying is what?” is not). These two
properties—a hierarchical structure and rules—endow humans with the ability to go far
beyond the fixed calls and signs of animals to communicate whatever we want to express.
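To make the idea of hierarchical structure concrete, here is a minimal illustrative sketch (mine, not the textbook's). It represents "What is my cat saying?" as words nested inside phrases, which are nested inside the sentence, and then flattens the hierarchy back into the word sequence. The phrase labels are informal assumptions used only for illustration.

```python
# A toy sketch (not from the text): the words of "What is my cat saying?"
# grouped into successively larger units, then flattened back into a word string.

phrase = ("S",
          ("WH", "what"),
          ("AUX", "is"),
          ("NP", ("DET", "my"), ("N", "cat")),
          ("V", "saying"))

def flatten(node):
    """Return the word sequence spelled out by a nested phrase structure."""
    if isinstance(node, str):
        return [node]
    _label, *children = node
    words = []
    for child in children:
        words.extend(flatten(child))
    return words

print(" ".join(flatten(phrase)) + "?")   # -> what is my cat saying?
```

A rule-violating ordering such as "Cat my saying is what?" cannot be produced by combining these same phrases in a way that keeps each unit intact, which is the sense in which the structure is governed by rules.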
Studying Language
Language has fascinated thinkers for thousands of years, dating back to the ancient Greek
philosophers Socrates, Plato, and Aristotle in the fifth and fourth centuries BCE, and before. The modern sci-
entific study of language traces its beginnings to the work of Paul Broca (1861) and Carl
Wernicke (1874). Broca’s study of patients with brain damage led to the proposal that an
area in the frontal lobe (Broca’s area) is responsible for the production of language. Wernicke
proposed that an area in the temporal lobe (Wernicke’s area) is responsible for comprehen-
sion. We described Broca’s and Wernicke’s observations in Chapter 2 (see page 39), and also
noted that modern research has shown that the situation is quite a bit more complicated
than just two language areas in the brain (see page 44).
In this chapter, we will focus not on the connection between language and the brain,
but on behavioral research on the cognitive mechanisms of language. We take up the story
of behavioral research on language in the 1950s when behaviorism was still the dominant
approach in psychology (see page 10). In 1957, B. F. Skinner, the main proponent of be-
haviorism, published a book called Verbal Behavior, in which he proposed that language is
learned through reinforcement. According to this idea, just as children learn appropriate
behavior by being rewarded for “good” behavior and punished for “bad” behavior, chil-
dren learn language by being rewarded for using correct language and punished (or not
rewarded) for using incorrect language.
In the same year, linguist Noam Chomsky (1957) published a book titled Syntactic
Structures, in which he proposed that human language is coded in the genes. According to
this idea, just as humans are genetically programmed to walk, they are also programmed to ac-
quire and use language. Chomsky concluded that despite the wide variations that exist across
languages, the underlying basis of all language is similar. Most important for our purposes,
Chomsky saw studying language as a way to study the properties of the mind and therefore
disagreed with the behaviorist idea that the mind is not a valid topic of study for psychology.
Chomsky’s disagreement with behaviorism led him to publish a scathing review of Skin-
ner’s Verbal Behavior in 1959. In his review, he presented arguments against the behaviorist
idea that language can be explained in terms of reinforcements and without reference to the
mind. One of Chomsky’s most persuasive arguments was that as children learn language, they
produce sentences that they have never heard and that have never been reinforced. (A classic
example of a sentence that has been created by many children and that is unlikely to have been
taught or reinforced by parents is “I hate you, Mommy.”) Chomsky’s criticism of behaviorism
was an important event in the cognitive revolution and began changing the focus of the young
discipline of psycholinguistics, the field concerned with the psychological study of language.
The goal of psycholinguistics is to discover the psychological processes by which hu-
mans acquire and process language (Clark & Van der Wege, 2002; Gleason & Ratner, 1998;
Miller, 1965). The four major concerns of psycholinguistics are as follows:
1. Comprehension. How do people understand spoken and written language? This
includes how people process language sounds; how they understand words, sen-
tences, and stories expressed in writing, speech, or sign language; and how people
have conversations with one another.
2. Representation. How is language represented in the mind? This includes how peo-
ple group words together into phrases to create meaningful sentences and how they
make connections between different parts of a story.
3. Speech production. How do people produce language? This includes the physical
processes of speech production and the mental processes that occur as a person
creates speech.
4. Acquisition. How do people learn language? This includes not only how children
learn language but also how people learn additional languages, either as children or
later in life.
Because of the vast scope of psycholinguistics, we are going to restrict our attention to
the first two of these concerns, describing research on comprehension and representation,
which together explain how we understand language. The plan is to start with words, then
look at how words are combined to create sentences, then how sentences create “stories”
that we read, hear, or create ourselves as we have conversations with other people.
Rayner and Duffy (1986) demonstrated the word frequency effect by measuring participants' eye movements as they read sentences containing either a low-frequency or a high-frequency target word. The average frequencies were 5.1 times per million for the low-frequency words and 122.3 times per million for the high-frequency words. For example, the low-frequency target word in the sentence "The slow waltz captured their attention" is waltz, and replacing waltz with the high-frequency word music creates the sentence "The slow music captured their attention." The duration of the first fixation on the words, shown in Figure 11.1a, was 37 msec longer for low-frequency words compared to high-frequency words. (Sometimes a word might be fixated more than once, as when the person reads a word and then looks back at it in response to what the person has read later in the sentence.) Figure 11.1b shows that the total gaze duration—the sum of all fixations made on a word—was 87 msec longer for low-frequency words than for high-frequency words. One reason for these longer fixations on low-frequency words could be that the readers needed more time to access the meaning of the low-frequency words. The word frequency effect, therefore, demonstrates how our past experience with words influences our ability to access their meaning.

➤ Figure 11.1 Fixation durations on low-frequency and high-frequency words in sentences measured by Rayner and Duffy (1986). (a) First fixation durations; (b) Total gaze duration. In both cases, fixation times are longer for low-frequency words. (Source: Based on data from Rayner and Duffy, 1986, Table 2, p. 195)

The Pronunciation of Words Is Variable

Another problem that makes understanding words challenging is that not everyone pronounces words in the same way. People talk with different accents and at different speeds, and, most important, people often take a relaxed approach to
pronouncing words when they are speaking naturally. For example, if you were talking to a
friend, how would you say “Did you go to class today?” Would you say “Did you” or “Di-
joo”? You have your own ways of producing various words and phonemes, and other people
have theirs. For example, analysis of how people actually speak has determined that there
are 50 different ways to pronounce the word the (Waldrop, 1988).
So how do we deal with this? One way is to use the context within which the word
appears. The fact that context helps is illustrated by what happens when you hear a word
taken out of context. Irwin Pollack and J. M. Pickett (1964) showed that words are more
difficult to understand when taken out of context and presented alone, by recording the
conversations of participants who sat in a room waiting for the experiment to begin. When
the participants were then presented with recordings of single words taken out of their own
conversations, they could identify only half the words, even though they were listening to
their own voices! The fact that the people in this experiment were able to identify words as
they were talking to each other, but couldn’t identify the same words when the words were
isolated, illustrates that their ability to perceive words in conversations is aided by the con-
text provided by the words and sentences that make up the conversation.
is similar to the familiar “I scream, you scream, we all scream for ice cream” that many
people learn as children. The sound stimuli for “I scream” and “ice cream” are identical, so
the different organizations must be achieved by the meaning of the sentence in which these
words appear.
So our ability to hear and understand spoken words is affected by (1) how frequently
we have encountered a word in the past; (2) the context in which the words appear; (3) our
knowledge of statistical regularities of our language; and (4) our knowledge of word mean-
ings. There’s an important message here—all of these things involve knowledge achieved by
learning/experience with language. Sound familiar? Yes, this continues the theme of the im-
portance of knowledge that occurs throughout this chapter as we consider how we under-
stand sentences, stories, and conversations. But we aren’t through with words yet, because
just to make things more interesting, many words have multiple meanings.
➤ Figure 11.3 (partial caption): (c) tin (food container) is not dominant; (d) tin (metal) is dominant.
This difference between biased and balanced dominance influences the way people
access the meanings of words as they read them. This has been demonstrated in experiments
in which researchers measure eye movements as participants read sentences and note the
fixation time for an ambiguous word and also for a control word with just one meaning that
replaces the ambiguous word in the sentence. Consider the following sentence, in which the
ambiguous word cast has balanced dominance.
The cast worked into the night. (control word: cook)
As a person reads the word cast, both meanings of cast are activated, because cast
(member of a play) and cast (plaster cast) are equally likely. Because the two meanings
compete for activation, the person looks longer at cast than at the control word cook,
which has only one meaning as a noun. Eventually, when the reader reaches the end of the
sentence, the meaning becomes clear (Duffy et al., 1988; Rayner & Frazier, 1989; Traxler,
2012) (Figure 11.3a).
But consider the following, with the ambiguous word tin:
The tin was bright and shiny. (control word: gold)
In this case, people read the biased ambiguous word tin just as quickly as the control word,
because only the dominant meaning of tin is activated, and the meaning of tin as a metal is
accessed quickly (Figure 11.3b).
But meaning frequency isn’t the only factor that determines the accessibility of the
meaning of a word. Context can play a role as well. Consider, for example, the following
sentence, in which the context added before the ambiguous word tin indicates the less-fre-
quent meaning of tin:
The miners went to the store and saw that they had beans in a tin. (control word: cup)
In this case, when the person reaches the word tin, the less frequent meaning is activated
at increased strength because of the prior context, and the more frequent meaning of tin is
activated as well. Thus, in this example, as with the first sentence that contained the word
cast, two meanings are activated, so the person looks longer at tin (Figure 11.3c).
Finally, consider the sentence below, in which the context indicates the more frequent
meaning of tin:
The miners went under the mountain to look for tin. (control word: gold)
In this example, only the dominant meaning of tin is activated, so tin is read rapidly
(Figure 11.3d).
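The pattern across these four cases can be summarized with a small illustrative model (a sketch of mine, not a model from the text): each meaning of an ambiguous word gets activation from its dominance (meaning frequency) plus any support from the preceding context, and reading slows when two meanings end up with comparable activation and must compete. The numeric values below are arbitrary placeholders chosen only to reproduce the qualitative pattern described above.

```python
# Toy sketch: activation = dominance (meaning frequency) + context support.
# The numbers are arbitrary placeholders for illustration, not fitted values.

def competition(meanings, threshold=0.3):
    """True if the two strongest meanings are close enough to compete."""
    totals = sorted((dom + ctx for dom, ctx in meanings.values()), reverse=True)
    return len(totals) > 1 and (totals[0] - totals[1]) < threshold

cases = {
    # cast: balanced dominance, neutral sentence -> competition, longer fixation
    "cast (balanced)":         {"actors": (0.5, 0.0), "plaster": (0.5, 0.0)},
    # tin, neutral sentence: biased dominance -> dominant meaning wins quickly
    "tin (neutral sentence)":  {"metal": (0.8, 0.0), "container": (0.2, 0.0)},
    # tin, "beans in a tin": context boosts the weaker meaning -> competition
    "tin (container context)": {"metal": (0.8, 0.0), "container": (0.2, 0.6)},
    # tin, "miners ... look for tin": context supports the dominant meaning
    "tin (metal context)":     {"metal": (0.8, 0.6), "container": (0.2, 0.0)},
}

for name, meanings in cases.items():
    print(f"{name}: competition (longer reading time) = {competition(meanings)}")
```

Running the sketch reproduces the reading-time pattern in Figure 11.3: competition (and therefore longer fixations) for balanced cast and for tin in a context that supports its weaker meaning, and fast access when one meaning clearly dominates.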
We’ve seen in this chapter that the process of accessing the meaning of a word is com-
plicated and is influenced by multiple factors. First, the frequency of a word determines
how long it takes to process its meaning. Second, if a word has more than one meaning,
the context of the sentence influences which meaning we access. Finally, our ability to
access the correct meaning of a word depends on both the word’s frequency and, for words
with more than one meaning, a combination of meaning dominance and context. So sim-
ply identifying, recognizing, and knowing the meaning of individual words is a complex
and impressive feat. However, except in rare situations in which words operate alone—as
in exclamations such as Stop! or Wait!—words are used with other words to form sen-
tences, and, as we will see next, sentences add another level of complexity to understand-
ing language.
Understanding Sentences
When we considered words, we saw how sentences create context, which makes it possible
to (1) deal with the variability of word pronunciations, (2) perceive individual words in
a continuous stream of speech, and (3) determine the meanings of ambiguous words. But
now we are going to go beyond just considering how sentences help us understand words,
by asking how combining words into sentences creates meaning.
To understand how we determine the meaning of a sentence, we need to consider
syntax—the structure of a sentence—and the study of syntax involves discovering cues that
languages provide that show how words in a sentence relate to one another (Traxler, 2012).
To start, let’s think about what happens as we hear a sentence. Speech unfolds over time,
with one word following another. This sequential process is central to understanding sen-
tences, because one way to think about sentences is as meaning unfolding over time.
What mental processes are occurring as a person hears a sentence? A simple way to
answer this question would be to picture the meaning as being created by adding up the
meanings of each word as they occur. But this idea runs into trouble right away when we
consider that some words have more than one meaning and also that a sequence of words
can have more than one meaning. The key to determining how strings of words create
meaning is to consider how meaning is created by the grouping of words into phrases—a
process called parsing.
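As an illustration of how grouping creates meaning (my example, not one from the text), the same string of words can often be bracketed into phrases in more than one way, and each bracketing corresponds to a different interpretation; resolving this kind of ambiguity is part of what parsing does.

```python
# Two groupings of the same words (a classic prepositional-phrase ambiguity).
# Each nested grouping corresponds to a different meaning of the sentence.

# Grouping 1: "with the binoculars" attaches to the verb phrase
# -> the spy used the binoculars to see the cop
parse_1 = ("S", ("NP", "the spy"),
                ("VP", ("V", "saw"),
                       ("NP", "the cop"),
                       ("PP", "with the binoculars")))

# Grouping 2: "with the binoculars" attaches to the noun phrase
# -> the cop is the one holding the binoculars
parse_2 = ("S", ("NP", "the spy"),
                ("VP", ("V", "saw"),
                       ("NP", ("NP", "the cop"),
                              ("PP", "with the binoculars"))))

for label, parse in [("VP attachment", parse_1), ("NP attachment", parse_2)]:
    print(label, parse)
```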
Influence of Scene Context
Parsing of a sentence is influenced not only by the con-
text provided by stories but also by context provided by scenes. To investigate how ob-
serving objects in a scene can influence how we interpret a sentence, Michael Tanenhaus
and coworkers (1995) developed a technique called the visual world paradigm, which
involves determining how information in a scene can influence how a sentence is pro-
cessed. Participants’ eye movements were measured as they saw objects on a table, as in
Figure 11.6a. As participants looked at this display, they were told to carry out the following instructions:
Place the apple on the towel in the box.

➤ Figure 11.6 (a) One-apple scene similar to the one viewed by Tanenhaus et al.'s (1995) participants. (b) Eye movements made while comprehending the task. (c) Proportion of trials in which eye movements were made to the towel on the right for the ambiguous sentence (Place the apple on the towel in the box) and the unambiguous sentence (Place the apple that's on the towel in the box).
When participants heard the phrase Place the apple, they moved their eyes to the apple,
then hearing on the towel, they looked at the other towel (Figure 11.6b). They did this be-
cause at this point in the sentence they were assuming that they were being told to put the
apple on the other towel. Then, when they heard in the box, they realized that they were
looking at the wrong place and quickly shifted their eyes to the box.
The reason participants looked first at the wrong place was that the sentence is am-
biguous. First it seems like on the towel means where the apple should be placed, but then it
becomes clear that on the towel is referring to where the apple is located. When the ambiguity
was removed by changing the sentence to Move the apple that’s on the towel to the box, par-
ticipants immediately focused their attention on the box. Figure 11.6c shows this result.
When the sentence was ambiguous, participants looked at the other towel on 55 percent of
the trials; when it wasn’t ambiguous, participants didn’t look at the other towel.
Tanenhaus also ran another condition in which he presented the two-apple display like
the one in Figure 11.7a. Because there are two apples, participants interpreted on the towel to
be indicating which apple they should move, and so looked at the apple and then at the box
(Figure 11.7b). Figure 11.7c shows that participants looked at the other towel on only about
10 percent of the trials for both place the apple on the towel (the ambiguous sentence) and place
the apple that’s on the towel (the non-ambiguous sentence) when looking at this display. The
fact that the eye movement patterns were the same for the ambiguous and non-ambiguous
sentences means that in this context the participants were not led down the garden path.
The important result of this study is that the participants’ eye movements occur as they
are hearing the sentence and are influenced by the contents of the scene. Tanenhaus therefore
showed that participants take into account not only information provided by the syntactic
structure of the sentence, but also by what Tanenhaus calls non-linguistic information—
in this case, information provided by the scene. This result argues against the idea proposed
by the garden path model that syntactic rules are the only thing taken into account as a sentence is initially unfolding.

➤ Figure 11.7 (a) Two-apple scene similar to the one viewed by Tanenhaus et al.'s (1995) subjects. (b) Eye movements while comprehending the task. (c) Proportion of trials in which eye movements were made to the towel on the right for the ambiguous sentence (Place the apple on the towel in the box) and the unambiguous sentence (Place the apple that's on the towel in the box).
Influence of Memory Load and Prior Experience with Language
Consider these
two sentences:
1. The senator who spotted the reporter shouted
2. The senator who the reporter spotted shouted
These sentences have the same words, but they are arranged differently to create different
constructions. Sentence (2) is more difficult to understand, as indicated by research that
shows that readers spend longer looking at the part of the sentence following who in sen-
tences with structures like sentence (2) (Traxler et al., 2002).
To understand why sentence (2) is more difficult to understand, we need to break these
sentences down into clauses. Sentence (1) has two clauses:
Main clause: The senator shouted.
Embedded clause: The senator spotted the reporter.
The embedded clause is called embedded, because who spotted the reporter is inside the
main clause. The senator is the subject of both the main clause and the embedded clause.
This construction is called a subject-relative construction.
Sentence (2) also contains two clauses:
Main clause: The senator shouted.
Embedded clause: The reporter spotted the senator.
In this case, the senator is the subject of the main clause, as before, and is also replaced
by who in the embedded clause, but is the object in this clause. The senator is the object
because he is the target who was spotted. (The reporter is the subject of this clause, because
he did the spotting.) This construction is called an object-relative construction.
going to move? The answer isn’t really clear, because the boy could move the car, the train,
the ball, or even the cake. Now consider The boy will eat . . . This one is easy. The boy will
eat the cake.
Measurement of participants’ eye movements as they were hearing these sentences indi-
cated that eye movements toward the target object (cake in this example) occurred 127 msec
after hearing the word cake for the move sentences and 87 msec before hearing the word cake
for the eat sentences. Thus, hearing the word eat causes the participant to begin looking
toward the cake before he or she even hears the word. Eat leads to the prediction that cake
will be the next word.
This kind of prediction is likely occurring constantly as we hear or read sentences. As
we will see in the next sections, predictions also play an important role in understanding
stories and having conversations.
Making Inferences
An early demonstration of inference in language was an experiment by John Bransford and
Marcia Johnson (1973), in which they had participants read passages and then tested them
to determine what they remembered. One of the passages Bransford and Johnson’s partic-
ipants read was
John was trying to fix the birdhouse. He was pounding the nail when his father came
out to watch him and help him do the work.
After reading that passage, participants were likely to indicate that they had previously
seen the following passage: “John was using a hammer to fix the birdhouse when his father
came out to watch him and help him do the work.” They often reported seeing this pas-
sage, even though they had never read that John was using a hammer, because they inferred
that John was using a hammer from the information that he was pounding the nail. People
use a similar creative process to make a number of different types of inferences as they are
reading a text.
One role of inference is to create connections between parts of a story. This process is
typically illustrated with excerpts from narrative texts. Narrative refers to texts in which
there is a story that progresses from one event to another, although stories can also in-
clude flashbacks of events that happened earlier. An important property of any narrative
is coherence—the representation of the text in a person’s mind that creates clear relations
between parts of the text and between parts of the text and the main topic of the story.
Coherence can be created by a number of different types of inference. Consider the fol-
lowing sentence:
Riffifi, the famous poodle, won the dog show. She has now won the last three shows
she has entered.
What does she refer to? If you picked Riffifi, you are using anaphoric inference—inferring that both shes in the second sentence refer to Riffifi. In the
previous “John and the birdhouse” example, knowing that He in the second sentence refers
to John is another example of anaphoric inference.
We usually have little trouble making anaphoric inferences because of the way informa-
tion is presented in sentences and our ability to make use of knowledge we bring to the sit-
uation. But the following quote from a New York Times interview with former heavyweight
champion George Foreman (also known for lending his name to a popular line of grills)
puts our ability to create anaphoric inference to the test.
. . . we really love to . . . go down to our ranch. . . .I take the kids out and we fish. And
then, of course, we grill them. (Stevens, 2002)
From just the structure of the sentences, we might conclude that the kids were grilled,
but we know the chances are pretty good that the fish were grilled, not George Foreman’s
children! Readers are capable of creating anaphoric inferences even under adverse condi-
tions because they add information from their knowledge of the world to the information
provided in the text.
Here’s another opportunity to use your powers of inference. What do you picture
upon reading the sentence, “William Shakespeare wrote Hamlet while he was sitting at
his desk”? From what you know about the time Shakespeare lived, it is likely that he was using a quill pen (not a laptop computer!) and that his desk was made of
wood. This is an example of instrument inference. Similarly, inferring from the passage
about John and the birdhouse that he is using a hammer to pound the nails would be an
instrument inference.
Here’s another one:
Sharon took an aspirin. Her headache went away.
You probably inferred that Her was referring to Sharon, but what caused her headache to
go away? Nowhere in these two sentences is that question answered, unless you engage in
causal inference, in which you infer that the events described in one clause or sentence
were caused by events that occurred in a previous sentence, and infer that taking the as-
pirin made her headache go away (Goldman et al., 1999; Graesser et al., 1994; Singer
et al., 1992; van den Broek, 1994). But what can you conclude from the following two
sentences?
Sharon took a shower. Her headache went away.
You might conclude, from the fact that the headache sentence directly follows the
shower sentence, that the shower had something to do with eliminating Sharon’s head-
ache. However, the causal connection between the shower and the headache is weaker
than the connection between the aspirin and the headache in the first pair of sentences.
Making the shower–headache connection requires more work from the reader. You
might infer that the shower relaxed Sharon, or perhaps her habit of singing in the shower
was therapeutic. Or you might decide there actually isn’t much connection between the
two sentences. Going back to our discussion of how information in stories can aid in
parsing, we can also imagine that if we had been reading a story about Sharon, which
previously had described how Sharon loved taking showers because they took away her
tension, then you might be more likely to give the shower credit for eliminating her
headache.
Inferences create connections that are essential for creating coherence in texts, and
making these inferences can involve creativity by the reader. Thus, reading a text involves
more than just understanding words or sentences. It is a dynamic process that involves
transformation of the words, sentences, and sequences of sentences into a meaningful
story. Sometimes this is easy, sometimes harder, depending on the skill and intention
of both the reader and the writer (Goldman et al., 1999; Graesser et al., 1994; van den
Broek, 1994).
We have been describing the process of text comprehension so far in terms of how peo-
ple bring their knowledge to bear to infer connections between different parts of a story.
Another approach to understanding how people understand stories is to consider the na-
ture of the mental representation that people form as they read a story.
Situation Models
What do we mean when we say people form mental representations as they read a story?
One way to answer this question is to think about what’s happening in your mind as you
read. For example, the runner jumped over the hurdle probably brings up an image of a run-
ner on a track, jumping a hurdle. This image goes beyond information about phrases, sen-
tences, or paragraphs; instead, it is a representation of the situation in terms of the people,
objects, locations, and events being described in the story (Barsalou, 2008, 2009; Graesser
& Wiemer-Hastings, 1999; Zwaan, 1999).
This approach to how we understand sentences proposes that as people read or hear a
story, they create a situation model, which simulates the perceptual and motor (movement)
characteristics of the objects and actions in a story. This idea has been tested by having par-
ticipants read a sentence that describes a situation involving an object and then indicate
as quickly as possible whether a picture shows the object mentioned in the sentence. For
example, consider the following two sentences.
1. He hammered the nail into the wall.
2. He hammered the nail into the floor.
In Figure 11.9a, the horizontal nail matches the orientation that would be expected for
sentence (1), and the vertical nail matches the orientation for sentence (2). Robert Stanfield
and Rolf Zwaan (2001) presented these sentences, followed by either a matching picture or
a nonmatching picture. Because the pictures both show nails and the task was to indicate whether the picture shows the object mentioned in the sentence, the correct answer was "yes" no matter which nail was presented. However, participants responded "yes" more rapidly when the picture's orientation matched the situation described in the sentence (Figure 11.10a).

➤ Figure 11.9 Stimuli similar to those used in (a) Stanfield and Zwaan's (2001) "orientation" experiment and (b) Zwaan et al.'s (2002) "shape" experiment. Subjects heard sentences such as (1) "He hammered the nail into the wall," (2) "He hammered the nail into the floor," (3) "The ranger saw the eagle in the sky," and (4) "The ranger saw the eagle in its nest," and were then asked to indicate whether the picture was the object mentioned in the sentence.

➤ Figure 11.10 Results of Stanfield and Zwaan's (2001) and Zwaan et al.'s (2002) experiments. Subjects responded "yes" more rapidly for the orientation, in (a), and the shape, in (b), that was more consistent with the sentence.
The pictures for another experiment, involving object shape, are shown in Figure 11.9b.
The sentences for these pictures are
1. The ranger saw the eagle in the sky.
2. The ranger saw the eagle in its nest.
In this experiment, by Zwaan and coworkers (2002), the picture of an eagle with wings
outstretched elicited a faster response when it followed sentence (1) than when it followed
sentence (2). Again, reaction times were faster when the picture matched the situation de-
scribed in the sentence. This result, shown in Figure 11.10b, matches the result for the ori-
entation experiment, and both experiments support the idea that the participants created
perceptions that matched the situation as they were reading the sentences.
Another study that demonstrates how situations are represented in the mind was
carried out by Ross Metusalem and coworkers (2012), who were interested in how our
knowledge about a situation is activated in our mind as we read a story. Metusalem mea-
sured the event-related potential (ERP), which we introduced in Chapter 5 (see page 156)
as participants were reading a story. The ERP has a number of different components. One
of these components is called the N400 wave because it is a negative response that occurs
about 400 msec after a word is heard or read. One of the characteristics of the N400 re-
sponse is that the response is larger when a word in a sentence is unexpected. This is shown
in Figure 11.11. The blue record shows the N400 response to eat in the sentence The cat
won't eat. But if the sentence is changed to The cat won't bake, then the unexpected word bake elicits a larger response.

➤ Figure 11.11 The N400 wave of the ERP is affected by the meaning of the word. It becomes larger (red line) when the meaning of the word does not fit the rest of the sentence. (Source: Osterhout et al., 1997)

Metusalem recorded the ERP as participants read scenarios such as the following:

Concert Scenario
The band was very popular and Joe was sure the concert would be sold out. Amazingly, he was able to get a seat down in front. He couldn't believe how close he was when he saw the group walk out onto the (stage/guitar/barn) and start playing.

Three different versions of each scenario were created, using each of the words shown in parentheses. Each participant read one version of each scenario.

If you were reading this scenario, which word would you predict to follow "he saw the group walk out onto the . . . "? Stage is the obvious choice, so it was called the "expected" condition. Guitar doesn't fit the passage, but since it is related to concerts and bands, it is called the "event-related" word. Barn doesn't fit the passage and is also not related to the topic, so it is called the "event-unrelated" word.

Figure 11.12 shows the average ERPs recorded as participants read the target words. Stage was the expected word, so there is only a small N400 response to this word. The interesting result is the response to the other two words. Barn causes a large N400, because it isn't related to the passage. Guitar, which doesn't fit the passage either but is related to "concerts," generates a smaller N400 than barn.

We would expect stage to generate little or no N400 response, because it fits the meaning of the sentence. However, the fact that guitar generates a smaller N400 than barn means that this word is at least slightly activated by the concert scenario. According to Metusalem, our knowledge about different situations is continually being accessed as we read a story, and if guitar is activated, it is also likely that other words related to concerts, such as drums, vocalist, crowds, and beer (depending on your experience with concerts), would also be activated.

The idea that many things associated with a particular scenario are activated is connected with the idea that we create a situation model while we are reading. What the ERP results show is that as we read, models of the situation are activated that include lots of details based on what we know about particular situations (also see Kuperberg, 2013; Paczynski & Kuperberg, 2012). In addition to suggesting that we are constantly accessing our world knowledge as we read or listen to a story, results like these also indicate that we access this knowledge rapidly, within fractions of a second after reading a particular word.
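For readers curious how an N400 comparison like the one just described is computed in practice, the sketch below is a hypothetical illustration (mine, not Metusalem et al.'s analysis code): single-trial EEG epochs time-locked to the target word are averaged within each condition, and the mean amplitude in a window around 400 msec is compared across conditions. The sampling rate, trial counts, and simulated signals are assumptions made only so the example runs.

```python
import numpy as np

# Hypothetical sketch of an ERP analysis like the one described above.
# Shapes, sampling rate, and the simulated data are illustrative assumptions.

rng = np.random.default_rng(0)
fs = 250                                  # samples per second (assumed)
t = np.arange(-0.1, 0.8, 1 / fs)          # time relative to word onset (s)
n_trials = 40

def simulated_epochs(n400_amplitude):
    """Fake single-trial EEG: noise plus a negative deflection near 400 ms."""
    noise = rng.normal(0, 2.0, size=(n_trials, t.size))
    n400 = -n400_amplitude * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))
    return noise + n400

conditions = {
    "expected (stage)":       simulated_epochs(0.5),
    "event-related (guitar)": simulated_epochs(2.0),
    "event-unrelated (barn)": simulated_epochs(4.0),
}

window = (t >= 0.3) & (t <= 0.5)          # window used to quantify the N400
for name, epochs in conditions.items():
    erp = epochs.mean(axis=0)             # average over trials -> ERP waveform
    print(f"{name}: mean amplitude 300-500 ms = {erp[window].mean():.2f} µV")
```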
➤ Figure 11.12 Results of Metusalem et al.'s (2012) experiment for the concert scenario. The key result is that the N400 response to an event-related word like guitar (red curve) is smaller than the response to an event-unrelated word like barn (blue curve). This suggests that even though guitar doesn't fit in the sentence, the person's knowledge that guitars are associated with concerts is activated.

Another aspect of the situation model approach is the idea that a reader or listener simulates the motor characteristics of the objects in a story. According to this idea, a story that involves movement will result in simulation of this movement as the person is comprehending the story. For example, reading a story about a bicycle elicits not only the perception of what a bicycle looks like but also properties associated with movement, such as how a bicycle is propelled (by pedaling) and the physical exertion involved in riding the bicycle under different conditions (climbing hills, racing, coasting). This corresponds to the idea introduced in Chapter 9 that knowledge about a category goes beyond simply identifying a typical object in that category: It also includes various properties of the object, such as how the object is used, what it does, and sometimes even emotional responses it elicits. This way of looking at the reader's response adds a richness to events in a story that extends beyond simply understanding what is going on (Barsalou, 2008; Fischer & Zwaan, 2008).
We saw in Chapter 9 (page 290) how Olaf Hauk and coworkers (2004) determined
the link between movement, action words, and brain activation by measuring brain activity
using fMRI under two conditions: (1) as participants moved their right or left foot, left
or right index finger, or tongue; (2) as participants read “action words” such as kick (foot
action), pick (finger or hand action), or lick (tongue action).
Hauk’s results show areas of the cortex activated by the actual movements (Figure 9.26a,
page 290) and by reading the action words (Figure 9.26b). The activation is more extensive
for actual movements, but the activation caused by reading the words occurs in approxi-
mately the same areas of the brain. For example, leg words and leg movements elicit activity
near the brain’s center line, whereas arm words and finger movements elicit activity farther
from the center line. This link between action words and activation of action areas in the
brain suggests a physiological mechanism that may be related to creating situation models
as a person reads a story.
The overall conclusion from research on how people comprehend stories is that un-
derstanding a text or story is a creative and dynamic process. Understanding stories in-
volves understanding sentences by determining how words are organized into phrases; then
determining the relationships between sentences, often using inference to link sentences in
one part of a story to sentences in another part; and finally, creating mental representations
or simulations that involve both perceptual and motor properties of objects and events in
the story. As we will now see, a creative and dynamic process also occurs when two or more
people are having a conversation.
Having Conversations
Although language can be produced by a single person talking alone, as when a person recites
a monologue or gives a speech, the most common form of language production is conver-
sation—two or more people talking with one another. Conversation, or dialogue, provides
another example of a cognitive skill that seems simple but contains underlying complexities.
Having a conversation is often easy, especially if you know the person you are talking
with and have talked with them before. But sometimes conversations can become more diffi-
cult, especially if you’re talking with someone for the first time. Why should this be so? One
answer is that when talking to someone else, it helps if you have some awareness of what the
other person knows about the topic you’re discussing. Even when both people bring similar
knowledge to a conversation, it helps if speakers take steps to guide their listeners through
the conversation. One way of achieving this is by following the given–new contract.
The Given–New Contract
According to this contract, the speaker constructs sentences so that they include both given information (information the listener already knows) and new information (information the listener is hearing for the first time).
Susan Haviland and Herbert Clark (1974) demonstrated the consequences of not follow-
ing the given–new contract by presenting pairs of sentences and asking participants to press
a button when they thought they understood the second sentence in each pair. They found
that it took longer for participants to comprehend the second sentence in pairs like this one:
We checked the picnic supplies.
The beer was warm.
than it took to comprehend the second sentence in pairs like this one:
We got some beer out of the trunk.
The beer was warm.
The reason comprehending the second sentence in the first pair takes longer is that the
given information (that there were picnic supplies) does not mention beer. Thus, the reader
or listener needs to make an inference that beer was among the picnic supplies. This infer-
ence is not required in the second pair because the first sentence includes the information
that there is beer in the trunk.
The idea of given and new captures the collaborative nature of conversations. Herbert
Clark (1996) sees collaboration as being central to the understanding of language. Describ-
ing language as “a form of joint action,” Clark proposes that understanding this joint action
involves considering not only providing given and new information but also taking into
account the knowledge, beliefs, and assumptions that the other person brings to the conver-
sation, a process called establishing common ground (Isaacs & Clark, 1987).
names they have created. Figure 11.13 shows another of the geometrical objects used in
this task (not the “monk” object) and the descriptive names established by 13 different
A–B pairs. It is clear that it doesn’t matter what the object is called, just so both partners
have the same information about the object. And once common ground is established,
conversations flow much more smoothly.
The process of creating common ground results in entrainment—synchronization
between the two partners. In this example, synchronization occurs in the naming of the
objects on the cards. But entrainment also occurs in other ways as well. Conversational
partners can establish similar gestures, speaking rate, body positions, and sometimes pro-
nunciation (Brennan et al., 2010). We now consider how conversational partners can end
up coordinating their grammatical constructions—an effect called syntactic coordination.
Syntactic Coordination
When two people exchange statements in a conversation, it is common for them to use
similar grammatical constructions. Kathryn Bock (1990) provides the following example,
taken from a recorded conversation between a bank robber and his lookout, which was in-
tercepted by a ham radio operator as the robber was removing the equivalent of $1 million
from a bank vault in England.
Robber: “… you’ve got to hear and witness it to realize how bad it is.”
Lookout: “You have got to experience exactly the same position as me, mate, to under-
stand how I feel.” (from Schenkein, 1980, p. 22)
Bock has added italics to illustrate how the lookout copied the form of the robber’s
statement. This copying of form reflects a phenomenon called syntactic priming—hearing
a statement with a particular syntactic construction increases the chances that a sentence
will be produced with the same construction. Syntactic priming is important because it can
lead people to coordinate the grammatical form of their statements during a conversation.
Holly Branigan and coworkers (2000) illustrated syntactic priming by using the following
procedure to set up a give-and-take between two people.
➤ Figure 11.14 The Branigan et al. (2000) experiment. (a) The subject (right) picks, from the cards laid out on the table, a card with a picture that matches the statement read by the confederate (left). (b) The participant then takes a card from the pile of response cards and describes the picture on the response card to the confederate. This is the key part of the experiment, because the question is whether the participant on the right will match the syntactic construction used by the confederate on the left.
Participant B then took a card from the pile of response cards at the corner of the table, looked at the picture on the card, and described it to the
confederate. The question is: How does B phrase his or her description? Saying “The
father gave his daughter a present” to describe the picture in Figure 11.14b matches
A’s syntax in this example. Saying “The father gave a present to his daughter” would
not match the syntax. If the syntax does match, as in the example in Figure 11.14b, we
can conclude that syntactic priming has occurred.
Branigan found that on 78 percent of the trials, the form of B’s description matched
the form of A’s priming statement. This supports the idea that speakers are sensitive to the
linguistic behavior of other speakers and adjust their behaviors to match. This coordination
of syntactic form between speakers reduces the computational load involved in creating a
conversation because it is easier to copy the form of someone else’s sentence than it is to
create your own form from scratch.
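As a concrete illustration of how a match rate like Branigan's 78 percent could be scored (a hypothetical sketch, not the authors' actual procedure), each trial can be coded for whether the participant's description uses the same dative construction as the confederate's prime, and the proportion of matching trials is then computed.

```python
# Hypothetical scoring sketch for syntactic priming (not Branigan et al.'s code).
# Each trial records the prime construction and the participant's response:
# "PO" = prepositional object ("gave a present to his daughter"),
# "DO" = double object ("gave his daughter a present").

trials = [
    {"prime": "DO", "response": "DO"},
    {"prime": "DO", "response": "PO"},
    {"prime": "PO", "response": "PO"},
    {"prime": "PO", "response": "PO"},
]

matches = sum(trial["prime"] == trial["response"] for trial in trials)
proportion = matches / len(trials)
print(f"Proportion of responses matching the prime: {proportion:.2f}")
```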
Let’s summarize what we have said about conversations: Conversations are dynamic
and rapid, but a number of processes make them easier. On the semantic side, people take
other people’s knowledge into account and help establish common ground if necessary. On
the syntactic side, people coordinate or align the syntactic form of their statements. This
makes speaking easier and frees up resources to deal with the task of alternating between
understanding and producing messages that is the hallmark of successful conversations.
But this discussion illustrates just a few things about conversations and grounding.
There are lots of things going on. Think about what a person has to do to maintain a con-
versation. First, the person has to plan what he or she is going to say, while simultaneously
taking in the other person’s input and doing what is necessary to understand it. Part of
understanding what the other person means involves theory of mind, the ability to under-
stand what others feel, think, or believe (Corballis, 2017), and also the ability to interpret
and react to the person’s gestures, facial expressions, tone of voice, and other things that
provide cues to meaning (Brennan et al., 2010; Horton & Brennan, 2016). Finally, just to
make things even more interesting, each person in a conversation has to anticipate when it
is appropriate to enter the conversation, a process called “turn taking” (Garrod & Pickering,
2015; Levinson, 2016). Thus, communication by conversation goes beyond simply analyz-
ing strings of words or sequences of sentences. It also involves all of the complexities inherent
in social interactions, yet somehow, we are able to do it, often effortlessly.
SOMETHING TO CONSIDER
Music and Language
Diana Deutsch (2010) relates a story about an experience she had while testing tape loops for a
lecture on music and the brain. While she was occupied with something else and the taped phrase "sometimes I behave so strangely" repeated over and over in the background, she was suddenly surprised to
hear a strange woman singing. After determining that no one else was there, she realized that she
was hearing her own voice from the tape loop, but the repeating words on the tape had morphed
into song in her mind. Deutsch found that other people also experienced this speech to song
effect, and concluded that there is a close connection between song and speech.
➤ Figure 11.15 Examples of emojis, which, like words, are used to indicate emotions in language. The emoji on the right, "face with tears of joy," was named Word of the Year by the Oxford English Dictionary. While words and pictures can indicate emotions to someone hearing or reading language, sounds elicit emotions in a person who is listening to music.

➤ Figure 11.16 The first line of "Twinkle, twinkle, little star." (Source: From Goldstein, Sensation and Perception 10e, Figure 12.24, page 305)

Emojis—pictographs like the ones in Figure 11.15—have provided another way of indicating emotions in written language (Evans, 2017). The emoji on the far right, which is called "face with tears of joy," was named the "Word of the Year" for 2015 by the Oxford English Dictionary.
An important similarity between music and language is that they both combine elements—tones for music and words for language—to create structured sequences. These sequences are organized into phrases and are governed by syntax—rules for arranging these
components (Deutsch, 2010). Nonetheless, it is obvious that there are differences between
creating phrases in response to instrumental music and creating phrases when reading a book
or having a conversation. Although music and language both unfold over time and have syn-
tax, the rules for combining notes and words are very different. Notes are combined based on
their sound, with some sounds going together better than others. But words are combined
based on their meanings. There are no analogues for nouns and verbs in music, and there is no
“who did what to whom” in music (Patel, 2013).
More items could be added to both the “similarities” and “differences” lists, but the
overall message is that although there are important differences in the outcomes and mech-
anisms associated with music and language, they are also sim-
ilar in many respects. We will explore two of these areas of
overlap in more detail: expectations and brain mechanisms.
which contained a target chord, indicated by the arrow above the music. There were three different targets: (1) an "In key" chord that fit the piece, shown on the musical staff; (2) a "Nearby key" chord that didn't fit as well; and (3) a "Distant key" chord that fit even less well. In the behavioral part of the experiment, listeners judged the phrase as acceptable 80 percent of the time when it contained the in-key chord; 49 percent when it contained the nearby-key chord; and 28 percent when it contained the distant-key chord. One way of stating this result is that listeners were judging how "grammatically correct" each version was.

In the physiological part of the experiment, Patel used the event-related potential (ERP) to measure the brain's response to violations of syntax. When we discussed the ERP in connection with the "Concert Scenario" experiment on page 341, we saw that the N400 component of the ERP became larger in response to words that didn't fit into a sentence, like bake in The cat won't bake. Patel focused on another component of the ERP, called the P600, because it is a positive response that occurs 600 msec after the onset of a word. One of the properties of the P600 is that it becomes larger in response to violations of syntax. For example, the blue curve in Figure 11.18 shows the response that occurs after the word eat in the sentence The cats won't eat. The response to this grammatically correct word shows no P600. However, the red curve, elicited by the word eating, which is grammatically incorrect in the sentence The cats won't eating, has a large P600 response.

➤ Figure 11.18 The P600 wave of the ERP is affected by grammar. It becomes larger (red line) when a grammatically incorrect form is used. (Source: Osterhout et al., 1997)

Patel measured large P600 responses when his participants listened to sentences that contained violations of grammar, as in the example in Figure 11.18. He then measured the ERP response to each of the three musical targets in Figure 11.17. Figure 11.17b shows that there is no P600 response when the phrase contained the in-key chord (black record), but that there are P600 responses for the two other chords, with the bigger response for the more out-of-key chord (red record). Patel concluded from this result that music, like language, has a syntax that influences how we react to it. Other studies following Patel's have also found that electrical responses like the P600 occur to violations of musical syntax (Koelsch, 2005; Koelsch et al., 2000; Maess et al., 2001; Vuust et al., 2009).
Looking at the idea of musical syntax in a larger context, we can assume that as we listen
to music, we are focusing on the notes we are hearing, while (without thinking about it) we
have expectations about what is going to happen next. You can demonstrate your ability to pre-
dict what is going to happen by listening to a song, preferably instrumental and not too fast. As
you listen, try guessing what notes or phrases are coming next. This is easier for some
compositions, such as ones with repeating themes, but even when there isn't repetition, what's coming usually isn't a surprise. What's particularly compelling about this exercise is that it often works even for music you are hearing for the first time. Just as our perception of visual scenes we are seeing for the first time is influenced by our past experiences in perceiving the environment, our perception of music we are hearing for the first time can be influenced by our history of listening to music.
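One simple way to picture how a history of listening could generate these expectations, offered here only as a toy illustration and not as a model described in the text, is to imagine a listener who keeps count of how often each note has followed each other note in melodies heard before, and who then scores every transition in a new melody by how often it has occurred in the past. The short Python sketch below does exactly that with made-up melodies and simple note-to-note (bigram) counts.

from collections import Counter, defaultdict

# Hypothetical "listening history": melodies the listener has heard before.
heard_before = [
    ["C", "D", "E", "C", "D", "E", "F", "E", "D", "C"],
    ["C", "E", "G", "E", "C", "D", "E", "D", "C"],
]

# Count how often each note follows each other note.
transitions = defaultdict(Counter)
for melody in heard_before:
    for prev, nxt in zip(melody, melody[1:]):
        transitions[prev][nxt] += 1

def expectancy(prev, nxt):
    """Proportion of times `nxt` followed `prev` in the listening history."""
    counts = transitions[prev]
    total = sum(counts.values())
    return counts[nxt] / total if total else 0.0

# A melody heard for the first time: most transitions are familiar,
# but E -> B never occurred in the history and so gets a score of zero.
new_melody = ["C", "D", "E", "B", "C"]
for prev, nxt in zip(new_melody, new_melody[1:]):
    print(f"{prev} -> {nxt}: expectancy {expectancy(prev, nxt):.2f}")

Transitions that occurred often in the listening history score high and feel expected, while a transition that never occurred scores zero and comes as a surprise, which parallels the graded acceptability judgments for the in-key, nearby-key, and distant-key chords described above.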
Do Music and Language Overlap in the Brain?

Patel's demonstration of similar electrical responses to violations of musical syntax and language syntax shows that both music and language involve similar processes. But we can't say, based on this finding alone, that music and language involve overlapping areas of the brain.

Early research on brain mechanisms of music and language involved studying patients with brain damage due to stroke. Patel and coworkers (2008) studied a group of stroke patients who had Broca's aphasia—difficulty in understanding sentences with complex syntax (see page 39). These patients and a group of controls were given (1) a language task that involved understanding syntactically complex sentences and (2) a music task that involved detecting the off-key chords in a sequence of chords. The results of these tests, shown in Figure 11.19, indicate that the patients performed very poorly on the language task compared to the controls (right pair of bars) and that the patients also performed more poorly on the music task (left pair of bars). Two things are noteworthy about these results: (1) there is a connection between poor performance on the language task and poor performance on the music task, which suggests a link between the two, and (2) the deficits in the music task for the aphasic patients were small compared to their deficits in the language task. These results support a connection between the brain mechanisms involved in music and language, but not necessarily a strong connection.

➤ Figure 11.19 Performance on language syntax tasks and musical syntax tasks for aphasic participants and control participants. (Source: Patel et al., 2008)
Other neuropsychological studies have provided evidence that different brain mecha-
nisms are involved in music and language. For example, patients who are born having prob-
lems with music perception—a condition called congenital amusia—have severe problems
with tasks such as discriminating between simple melodies or recognizing common tunes.
Yet these individuals often have normal language abilities (Patel, 2013).
Cases have also been observed that demonstrate differences in the opposite direction.
Robert Slevc and coworkers (2016) tested a 64-year-old woman who had Broca’s aphasia
caused by a stroke. She had trouble comprehending complex sentences and had great diffi-
culty putting words together into meaningful thoughts. Yet she was able to detect out-of-
key chords in sequences like those presented by Patel (Figure 11.17a). Neuropsychological
research therefore provides evidence for separate brain mechanisms for music and language
(also see Peretz & Hyde, 2003).
Brain mechanisms have also been studied using neuroimaging. Some of these studies
have shown that different areas are involved in music and language (Fedorenko et al., 2012).
Other studies have shown that music and language activate overlapping areas of the brain.
For example, Broca’s area, which is involved in language syntax, is also activated by music
(Fitch & Martins, 2014; Koelsch, 2005, 2011; Peretz & Zatorre, 2005).
It has also been suggested that even if neuroimaging identifies an area that is activated by both music and language, this doesn't necessarily mean that music and language are activating the same neurons within that area. There is evidence that music and language activation can occur within an area but involve different neural networks (Figure 11.20) (Peretz et al., 2015).

➤ Figure 11.20 Illustration of the idea that two different capacities, such as language and music, might activate the same structure in the brain (indicated by the circle), but when looked at closely, each capacity could activate different networks (red or black) within the structure. The small circles represent neurons, and the lines represent connections.

The conclusion from all of these studies—both behavioral and physiological—is that while there is evidence for the separateness of music and language in the brain, especially from neuropsychological studies, there is also evidence for overlap between music and language, mostly from behavioral and brain scanning studies. Thus, it seems that music and language are related, but the overlap isn't complete, as might be expected when you consider the difference between reading your cognitive psychology textbook and listening to your favorite music. Clearly, our knowledge of the relation between music and language is still a "work in progress," which, as it continues, will add to our understanding of both music and language.
CHAPTER SUMMARY
1. Language is a system of communication that uses sounds or symbols that enable us to express our feelings, thoughts, ideas, and experiences. It is hierarchical and rule-based.
2. Modern research in the psychology of language blossomed in the 1950s and 1960s, with the advent of the cognitive revolution. One of the central events in the cognitive revolution was Chomsky's critique of Skinner's behavioristic analysis of language.
3. All the words a person knows are his or her lexicon. Semantics is the meaning of language.
4. The ability to understand words in a sentence is influenced by word frequency. This has been demonstrated using the lexical decision task and by measuring eye movements.
5. The pronunciation of words is variable, which can make it difficult to perceive words when they are heard out of context.
6. There are often no silences between words during normal speech, which gives rise to the problem of speech segmentation. Past experience with words, the word's context, statistical properties of language, and knowledge of the meanings of words help solve this problem.
7. Lexical ambiguity refers to the fact that a word can have more than one meaning. Tanenhaus used the lexical priming technique to show that (1) multiple meanings of ambiguous words are accessed immediately after they are heard, and (2) the "correct" meaning for the sentence's context is identified within 200 msec.
8. The relative frequency of the meanings of ambiguous words is described in terms of meaning dominance. Some words have biased dominance, some have balanced dominance. The type of dominance, combined with the word's context, influences which meaning is accessed.
9. Syntax is the structure of a sentence. Parsing is the process by which words in a sentence are grouped into phrases. Grouping into phrases is a major determinant of the meaning of a sentence. This process has been studied by using garden path sentences that illustrate the effect of temporary ambiguity.
10. Two mechanisms proposed to explain parsing are (1) the garden path model and (2) the constraint-based approach. The garden path model emphasizes how syntactic principles such as late closure determine how a sentence is parsed. The constraint-based approach states that semantics, syntax, and other factors operate simultaneously to determine parsing. The constraint-based approach is supported by (a) the way words with different meanings affect the interpretation of a sentence, (b) how story context influences parsing, (c) how scene context, studied using the visual world paradigm, influences parsing, and (d) how the effect of memory load and prior experience with language influences understandability.
11. Coherence enables us to understand stories. Coherence is largely determined by inference. Three major types of inference are anaphoric, instrumental, and causal.
12. The situation model approach to text comprehension states that people represent the situation in a story in terms of the people, objects, locations, and events that are being described in the story.
13. Measurements of brain activity have demonstrated how similar areas of the cortex are activated by reading action words and by actual movements.
14. Experiments that measure the ERP response to passages show that many things associated with the passage are activated as the passage is being read.
15. Conversations, which involve give-and-take between two or more people, are made easier by procedures that involve cooperation between participants in a conversation. These procedures include the given–new contract and establishing common ground.
16. Establishing common ground has been studied by analyzing transcripts of conversations. As common ground is established, conversations become more efficient.
17. The process of creating common ground results in entrainment—synchronization between the people in the conversation. One demonstration of entrainment is provided by syntactic coordination—how people's grammatical constructions become coordinated.
18. Music and language are similar in a number of ways. There is a close relation between song and speech, music and language both cause emotion, and both consist of organized sequences.
19. There are important differences between music and language. They create emotions in different ways, and rules for combining tones and words are different. The most important difference is based on the fact that words have meanings.
20. Expectation occurs in both music and language. These parallel effects have been demonstrated by experiments using the ERP to assess the effect of syntactic violations in both music and language.
21. There is evidence for separateness and overlap of music and language in the brain.
THINK ABOUT IT
1. How do the ideas of coherence and connection apply to some of the movies you have seen lately? Have you found that some movies are easy to understand whereas others are more difficult? In the movies that are easy to understand, does one thing appear to follow from another, whereas in the more difficult ones, some things seem to be left out? What is the difference in the "mental work" needed to determine what is going on in these two kinds of movies? (You can also apply this kind of analysis to books you have read.)
2. The next time you are able to eavesdrop on a conversation, notice how the give-and-take among participants follows (or does not follow) the given–new contract. Also, notice how people change topics and how that affects the flow of the conversation. Finally, see if you can find any evidence of syntactic priming. One way to "eavesdrop" is to be part of a conversation that includes at least two other people. But don't forget to say something every so often!
3. One of the interesting things about languages is the use of "figures of speech," which people who know the language understand but nonnative speakers often find baffling. One example is the sentence "He brought everything but the kitchen sink." Can you think of other examples? If you speak a language other than English, can you identify figures of speech in that language that might be baffling to English-speakers?
4. Newspaper headlines are often good sources of ambiguous phrases. For example, consider the following actual headlines: "Milk Drinkers Are Turning to Powder," "Iraqi Head Seeks Arms," "Farm Bill Dies in House," and "Squad Helps Dog Bite Victim." See if you can find examples of ambiguous headlines in the newspaper, and try to figure out what it is that makes the headlines ambiguous.
5. People often say things in an indirect way, but listeners can often still understand what they mean. See if you can detect these indirect statements in normal conversation. (Examples: "Do you want to turn left here?" to mean "I think you should turn left here"; "Is it cold in here?" to mean "Please close the window.")
6. It is a common observation that people are more irritated by nearby cell phone conversations than by conversations between two people who are physically present. Why do you think this occurs? (See Emberson et al., 2010, for one answer.)
KEY TERMS
Anaphoric inference, 338
Balanced dominance, 328
Biased dominance, 328
Broca's aphasia, 349
Causal inference, 339
Coherence, 338
Common ground, 343
Congenital amusia, 350
Constraint-based approach to parsing, 332
Emoji, 348
Entrainment, 345
Garden path model of parsing, 332
Garden path sentence, 331
Given–new contract, 342
Heuristic, 332
Hierarchical nature of language, 323
Inference, 337
Instrument inference, 338
Language, 322
Late closure, 332
Lexical ambiguity, 327
Lexical decision task, 325
Lexical priming, 327
Lexical semantics, 325
Lexicon, 325
Meaning dominance, 328
Narrative, 338
Object-relative construction, 335
Parsing, 331
Prosody, 347
Psycholinguistics, 324
Referential communication task, 344
Return to the tonic, 348
Rule-based nature of language, 323
Semantics, 325
Situation model, 339
Speech segmentation, 326
Subject-relative construction, 335
Syntactic coordination, 345
Syntactic priming, 345
Syntax, 331
Temporary ambiguity, 331
Theory of mind, 347
Tonic, 348
Visual world paradigm, 333
Word frequency effect, 325
Word frequency, 325
People solve problems in many different ways. We will see that sometimes solving a problem
involves hard work and methodological analysis, while other times solutions to problems can
appear to happen in a flash of insight. We will also see that sometimes letting your mind “rest,”
perhaps to wander or daydream, as this woman might be doing as she sits by the canal, can play
an important role in leading to creative solutions to problems.