
Chapter 11 1

STAGE 3: MEASUREMENT QUESTIONS

Copyright 2022 © McGraw Hill LLC. All rights reserved. No reproduction or distribution without the prior written consent of McGraw Hill LLC.
11-1
Industry Thought
Leadership

©McGraw Hill Access the text alternative for slide images.


11-2
Industry Thought Leadership
“We increase measurement error when we ask the
wrong questions. The questions we ask need to be
answerable, clear, unbiased, and easy to answer.”

David F. Harris
president, Insight and Measurement
author, The Complete Guide to Writing
Questionnaires: How to Get Better
Information for Better Decisions
©McGraw Hill 11-3
Exhibits

©McGraw Hill 11-4


Exhibit 11-1:
Instrument
Design in
the
Research
Process

©McGraw Hill Access the text alternative for slide images.


11-5
Exhibit 11-2: Instrument Design: Phase 1

©McGraw Hill Access the text alternative for slide images.


11-6
Exhibit 11-3: Summary of Scales by Data Levels

Nominal
  Characteristics: Classification (mutually exclusive and collectively exhaustive categories), but no order, distance, or natural origin.
  Empirical operations: Count (frequency distribution); mode as central tendency; no measure of dispersion. Used with other variables to discern patterns, reveal relationships.

Ordinal
  Characteristics: Classification and order, but no distance or natural origin.
  Empirical operations: Determination of greater or lesser value. Count (frequency distribution); median as central tendency; nonparametric statistics.

Interval
  Characteristics: Classification, order, and distance (equal intervals), but no natural origin.
  Empirical operations: Determination of equality of intervals or differences. Count (frequency distribution); mean or median as measure of central tendency; standard deviation or interquartile range as measure of dispersion; parametric tests.

Ratio
  Characteristics: Classification, order, distance, and natural origin.
  Empirical operations: Determination of equality of ratios. Any of the above statistical operations, plus multiplication and division; mean as central tendency; coefficients of variation as measure of dispersion.

©McGraw Hill 11-7
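A minimal sketch (made-up data, not from the slides) of the pairing in Exhibit 11-3 between data level and the summary statistics it permits, using Python's statistics module.

import statistics

nominal = ["cash", "credit", "cash", "debit", "cash"]   # classification only
ordinal = [1, 2, 2, 3, 5, 5, 4]                         # ranks: order but no distance
interval = [72, 68, 75, 71, 69]                         # equal intervals, no natural zero
ratio = [12.0, 0.5, 7.5, 3.2, 9.9]                      # natural zero point

print(statistics.mode(nominal))                               # nominal: mode / frequency count
print(statistics.median(ordinal))                             # ordinal: median, nonparametric statistics
print(statistics.mean(interval), statistics.stdev(interval))  # interval: mean and standard deviation
print(statistics.stdev(ratio) / statistics.mean(ratio))       # ratio: coefficient of variation is meaningful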


Exhibit 11-4: Dummy Table for American
Eating Habits
Use of Convenience Foods
Age   Always Use   Use Frequently   Use Sometimes   Rarely Use   Never Use
18–24
25–34
35–44
45–54
55–64
65+

©McGraw Hill 11-8
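A minimal sketch (hypothetical records and column names) of how a dummy table like Exhibit 11-4 could be filled from survey responses with pandas; the empty version of the same crosstab is what the preliminary analysis plan specifies before any data are collected.

import pandas as pd

responses = pd.DataFrame({
    "age_group": ["18-24", "25-34", "25-34", "65+", "45-54"],
    "convenience_food_use": ["Always Use", "Use Sometimes", "Use Frequently",
                             "Never Use", "Rarely Use"],
})

use_order = ["Always Use", "Use Frequently", "Use Sometimes", "Rarely Use", "Never Use"]
dummy_table = pd.crosstab(responses["age_group"], responses["convenience_food_use"])
print(dummy_table.reindex(columns=use_order, fill_value=0))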


Exhibit 11-5: Measurement Questions: Select the Scales

©McGraw Hill Access the text alternative for slide images.


11-9
Exhibit 11-6: Review of Reliability, Validity, and
Practicality in Research
Reliability: Reliability is concerned with the degree to which a measurement is free of random or unstable error. If a question or scale repeatedly measures a property or object correctly over time and in different conditions, it is reliable. Reliable questions and scales are stable and equivalent.

Validity: Validity is the extent to which a measurement question or scale actually measures what we wish to measure (our investigative questions). If it has external validity, the data generated by the question can be generalized across persons, settings, and times. If it has internal validity, it has content, criterion-related, and construct validity.
• Content validity is the degree to which the measurement instrument provides adequate coverage of the investigative questions.
• Criterion-related validity reflects that the measurement instrument can be used to predict or estimate some property or object.
• Construct validity reflects that the measurement instrument, used in the study for the purpose used, compares favorably to other instruments purporting to measure the same thing (convergent validity).

Practicality: Practicality means that the measurement instrument can be used economically (within budget), is convenient (easy to administer), and is interpretable (all information necessary to interpret the data is known about the instrument and its process).

©McGraw Hill 11-10


Exhibit 11-7: Characteristics of Scale Types

Dichotomous
  Data level: Nominal. Usual number of answer alternatives provided: 2. Desired number of participant answers: 1. Used to provide: Classification.

Multiple Choice
  Data level: Nominal, ordinal, or ratio. Usual number of answer alternatives provided: 3 to 10. Desired number of participant answers: 1. Used to provide: Classification, order, or specific numerical estimate.

Checklist
  Data level: Nominal. Usual number of answer alternatives provided: 10 or fewer. Desired number of participant answers: 10 or fewer. Used to provide: Classification.

Rating
  Data level: Ordinal or interval*. Usual number of answer alternatives provided: 3 to 7. Desired number of participant answers: 1 per item. Used to provide: 1 per item.

Ranking
  Data level: Ordinal. Usual number of answer alternatives provided: 10 or fewer. Desired number of participant answers: 7 or fewer. Used to provide: Order.

Free Response
  Data level: Nominal or ratio. Usual number of answer alternatives provided: None. Desired number of participant answers: 1. Used to provide: Classification (of idea), order, or specific numerical estimate.

©McGraw Hill 11-11


Exhibit 11-8: Additional Factors Affecting
Participant Honesty
Peacock
  Description: Desire to be perceived as smarter, wealthier, happier, or better than others.
  Example: Respondents who claim to shop at Harrods in London (twice as many claim to as actually do).

Pleaser
  Description: Desire to help by providing answers they think the researchers want to hear, to please or avoid offending or being socially stigmatized.
  Example: Respondents give a politically correct or assumed correct answer about the degree to which they revere their elders, respect their spouse, etc.

Gamer
  Description: Adaptation of answers to play the system.
  Example: Participants who fake membership in a specific demographic to participate in a high-remuneration study, or who claim they drive an expensive car when they don't or that they have cancer when they don't.

Disengager
  Description: Doesn't want to think deeply about a subject.
  Example: Participants who falsify ad recall or purchase behavior (didn't recall or didn't buy) when they actually did.

Self-delusionist
  Description: Participants who lie to themselves.
  Example: Respondents who falsify behavior, such as the level at which they recycle.

Unconscious decision maker
  Description: Participants who are dominated by irrational decision making.
  Example: Respondents who cannot predict with any certainty their future behavior.

Ignoramus
  Description: Participant who never knew or doesn't remember an answer.
  Example: Respondents who provide false information, such as being unable to identify on a map where they live or to remember what they ate for supper the previous evening.

©McGraw Hill 11-12


Exhibit 11-9: Sample Measurement Questions
and their Foundation Scales 1

©McGraw Hill Access the text alternative for slide images.


11-13
Exhibit 11-9: Sample Measurement Questions
and their Foundation Scales 2

©McGraw Hill Access the text alternative for slide images.


11-14
Exhibit 11-9: Sample Measurement Questions
and their Foundation Scales 3

©McGraw Hill Access the text alternative for slide images.


11-15
Exhibit 11-9: Sample Measurement Questions
and their Foundation Scales 4

©McGraw Hill Access the text alternative for slide images.


11-16
Exhibit 11-10: How to Build a Likert Scale with
Item Analysis 1

Step 1 Collect a large number of statements that meet the following criteria
• Each statement is relevant to the attitude being studied.
• Each statement reflects a favorable or unfavorable position on that attitude.
Step 2 Select people similar to study participants (participant stand-ins) to read each statement.

Step 3 Participant stand-ins indicate their level of agreement with each statement, using a 5-point
scale. A scale value of 1 indicates a strongly unfavorable attitude (strongly disagree). A value of 5
indicates a strongly favorable attitude (strongly agree). The other intensities, 2 (disagree), 3
(neither agree nor disagree), 4 (agree), are mid-range attitudes (see Exhibit 11-3).
• To ensure consistent results, the assigned numerical values are reversed if the statement is
worded negatively. The number 1 is always strongly unfavorable and 5 is always strongly
favorable.
Step 4 Add each participant stand-in’s responses to secure a total score.

Step 5 Array these total scores from highest to lowest; then select some portion (generally defined
as the top and bottom 10 to 25 percent of the distribution) to represent the highest and lowest
total scores.
• The two extreme groups represent people with the most favorable and least favorable
attitudes toward the attitude being studied. These extremes are the two criterion groups by
which individual Likert statements (items) are evaluated.
• Discard the middle group’s scores (50 to 80 percent of participant stand-ins), as they are not
highly discriminatory on the attitude.

©McGraw Hill 11-17


Exhibit 11-10: How to Build a Likert Scale with
Item Analysis 2

Step 6 Calculate the mean scores for each scale item among the low scorers and
high scorers.
Step 7 Test the mean scores for statistical significance by computing a t value for
each statement.
Step 8 Rank order the statements by their t values from highest to lowest.
Step 9 Select 20–25 statements (items) with the highest t values (statistically
significant difference between mean scores) to include in the final question
using the Likert scale.

©McGraw Hill 11-18
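A minimal sketch (hypothetical data, not from the text) of the item-analysis arithmetic in Exhibit 11-10, using NumPy and SciPy: total each stand-in's score, form the high and low criterion groups, compute a t value per statement, and keep the statements that discriminate best.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_stand_ins, n_items = 200, 40
ratings = rng.integers(1, 6, size=(n_stand_ins, n_items))   # 1-5 responses, already reverse-scored
                                                             # for negatively worded statements (Step 3)
totals = ratings.sum(axis=1)                                 # Step 4: total score per stand-in
cutoff = int(0.25 * n_stand_ins)                             # Step 5: top and bottom 25 percent
order = np.argsort(totals)
low_group, high_group = ratings[order[:cutoff]], ratings[order[-cutoff:]]

t_values = np.array([
    stats.ttest_ind(high_group[:, j], low_group[:, j]).statistic   # Steps 6-7: t value per statement
    for j in range(n_items)
])
keep = np.argsort(-np.abs(t_values))[:25]                    # Steps 8-9: highest t values retained
print("statements retained:", sorted(keep.tolist()))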


Exhibit 11-11: Evaluating a Scale Statement by Item Analysis

For the statement “I would bookmark the site in the ad to use in the future,” we select the data from the bottom 25 percent of the distribution (low total score group) and the top 25 percent (high total score group). There are 73 people in each group. The remaining 50 percent of participants are not considered for this analysis.

©McGraw Hill Access the text alternative for slide images.


11-19
Exhibit 11-12: Results of the Thesaurus Study
Evaluation (E) Potency (P) Activity (A)
Good–Bad Hard–Soft Active–Passive
Positive–Negative Strong–Weak Fast–Slow
Optimistic–Pessimistic Heavy–Light Hot–Cold
Complete–Incomplete Masculine–Feminine Excitable–Calm
Timely–Untimely Severe–Lenient
Tenacious–Yielding

Subcategories of Evaluation
Meek Goodness Dynamic Goodness Dependable Goodness Hedonistic Goodness
Clean–Dirty Successful–Unsuccessful True–False Pleasurable–Painful
Kind–Cruel High–Low Reputable–Disreputable Beautiful–Ugly
Sociable–Unsociable Meaningful–Meaningless Believing–Skeptical Sociable–Unsociable
Light–Dark Important–Unimportant Wise–Foolish Meaningful–Meaningless
Altruistic–Egotistical Progressive–Regressive Healthy–Sick
Grateful–Ungrateful Clean–Dirty
Beautiful–Ugly
Harmonious–Dissonant
©McGraw Hill 11-20
Exhibit 11-13: Adapting SD Scales for Retail
Store Image Study

©McGraw Hill Access the text alternative for slide images.


11-21
Exhibit 11-14: How to Construct an SD Scale 1

Step 1 Select the variable; chosen by judgment and reflects the nature of the investigative
question.
Step 2 Identify possible nouns, noun phrases, adjectives, or visual stimuli to represent the variable.
Step 3 Select bipolar word pairs, phrase pairs, or visual pairs appropriate to assess the object or
property. If the traditional Osgood adjectives are used, several criteria guide your selection:
• Choose adjectives that allow connotative perceptions to be expressed.
• Choose three bipolar pairs for each dimension: evaluation, potency, and activity. (Scores on
the individual items can be averaged, by factor, to improve reliability.)
• Choose pairs that will be stable across participants and variables. (One pair that fails this
test is “large–small”; it may describe a property when judging a physical object such as an
automobile but may be used connotatively with abstract concepts such as product quality.)
• Choose pairs that are linear between polar opposites and pass through the origin. (A pair
that fails this test is “rugged–delicate,” which is nonlinear because both adjectives have favorable
meanings.)
Step 4 Create the scoring system and assign a positive value to each point on the scale. (Most SD
scales have
7 points with values of 7, 6, 5, 4, 3, 2, and 1. A “0” point is arbitrary.)
Step 5 Randomly select half the pairs and reverse score them to minimize the halo effect.
Step 6 Order the bipolar pairs so all representing a single dimension (e.g. evaluation) are not together
in the final measurement question.

©McGraw Hill 11-22
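A minimal sketch (hypothetical pairs and responses) of how responses might be scored once an SD scale built per Exhibit 11-14 is fielded: reverse-scored pairs are flipped so 7 is always the favorable pole (Step 5), then items are averaged by dimension, per the Step 3 note on reliability.

MAX_POINT = 7   # a 7-point SD scale scored 7..1

# pair -> (dimension, reverse-scored?); reversed pairs were presented with their poles flipped
pairs = {
    "good-bad":       ("evaluation", False),
    "dirty-clean":    ("evaluation", True),
    "strong-weak":    ("potency",    False),
    "soft-hard":      ("potency",    True),
    "active-passive": ("activity",   False),
    "slow-fast":      ("activity",   True),
}

one_participant = {"good-bad": 6, "dirty-clean": 2, "strong-weak": 5,
                   "soft-hard": 3, "active-passive": 7, "slow-fast": 1}

totals, counts = {}, {}
for pair, raw in one_participant.items():
    dimension, is_reversed = pairs[pair]
    score = (MAX_POINT + 1 - raw) if is_reversed else raw   # reverse scoring
    totals[dimension] = totals.get(dimension, 0) + score
    counts[dimension] = counts.get(dimension, 0) + 1

print({d: totals[d] / counts[d] for d in totals})   # {'evaluation': 6.0, 'potency': 5.0, 'activity': 7.0}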


Exhibit 11-15: SD Scale for Analyzing Industry
Association Candidates 1

©McGraw Hill Access the text alternative for slide images.


11-23
Exhibit 11-16: Graphic Representation of SD Analysis 1

©McGraw Hill Access the text alternative for slide images.


11-24
Exhibit 11-17: Sample Ranking Scales as
Measurement Questions

©McGraw Hill Access the text alternative for slide images.


11-25
Exhibit 11-18: Response Patterns of 200 Heavy Users’
Paired Comparisons on Five Alternative Package Designs

Paired-comparison data may be treated in several ways. If there is substantial consistency, we will find that
if A is preferred to B, and B to C, then A will be consistently preferred to C. This condition of transitivity need
not always be true but should occur most of the time. When it does, take the total number of preferences
among the comparisons as the score for that stimulus. Assume a manager is considering five distinct
packaging designs. She would like to know how heavy users would rank these designs. One option would
be to ask a sample of the heavy-users segment to pair-compare the packaging designs. With a rough
comparison of the total preferences for each option, it is apparent that B is the most popular.

Designs
A B C D E
A — 164* 138 50 70
B 36 — 54 14 30
C 62 146 — 32 50
D 150 186 168 — 118
E 130 170 150 82 —
Total 378 666 510 178 268
Rank order 3 1 2 5 4

* Interpret this cell as 164 of 200 customers preferred suggested design B (column) to design A (row).
©McGraw Hill 11-26
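A minimal sketch reproducing the scoring logic of Exhibit 11-18: column totals count how often each design was preferred, and the rank order follows those totals. The data below are the exhibit's; the code is only illustrative.

designs = ["A", "B", "C", "D", "E"]
# prefs[row][col] = number of the 200 heavy users preferring the column design over the row design
prefs = {
    "A": {"B": 164, "C": 138, "D": 50,  "E": 70},
    "B": {"A": 36,  "C": 54,  "D": 14,  "E": 30},
    "C": {"A": 62,  "B": 146, "D": 32,  "E": 50},
    "D": {"A": 150, "B": 186, "C": 168, "E": 118},
    "E": {"A": 130, "B": 170, "C": 150, "D": 82},
}

totals = {d: sum(prefs[row].get(d, 0) for row in designs) for d in designs}
ranked = sorted(designs, key=lambda d: totals[d], reverse=True)
print(totals)                                    # {'A': 378, 'B': 666, 'C': 510, 'D': 178, 'E': 268}
print({d: i + 1 for i, d in enumerate(ranked)})  # {'B': 1, 'C': 2, 'A': 3, 'E': 4, 'D': 5}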
Exhibit 11-19: Ideal Scalogram Response
Pattern
Item 2   Item 4   Item 1   Item 3   Participant Score
X        X        X        X        4
—        X        X        X        3
—        —        X        X        2
—        —        —        X        1
—        —        —        —        0

* X = agree; — = disagree.

©McGraw Hill 11-27
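A minimal sketch (hypothetical responses) of the scalogram logic behind Exhibit 11-19: with items ordered from hardest to easiest to agree with (2, 4, 1, 3), a participant's total score should reproduce the exact pattern of agreements; deviations from that ideal pattern are the errors counted in scalogram (Guttman) analysis.

ITEM_ORDER = [2, 4, 1, 3]   # hardest to easiest to agree with

def scalogram_errors(agreements):
    """agreements: dict mapping item number -> True (agree) / False (disagree)."""
    responses = [agreements[i] for i in ITEM_ORDER]
    score = sum(responses)
    # Ideal cumulative pattern for this score: disagree with the hardest items,
    # agree with the `score` easiest ones.
    ideal = [False] * (len(ITEM_ORDER) - score) + [True] * score
    return sum(r != i for r, i in zip(responses, ideal))

perfect = {2: False, 4: True, 1: True, 3: True}    # matches the score-3 row above
deviant = {2: True, 4: False, 1: False, 3: True}   # same total score, non-ideal pattern
print(scalogram_errors(perfect), scalogram_errors(deviant))   # 0 2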


Exhibit 11-20: Summary of Issues Related to
Measurement Questions 1

Question Content
1. Purposeful versus interesting: Does the question ask for data that will be merely interesting or truly useful to the manager in making a decision?
2. Incomplete or unfocused: Will the question reveal what the manager needs to know?
3. Double-barreled questions: Does the question ask the participant for too much information? Would the desired single response be accurate for all parts of the question?
4. Precision: Does the question ask precisely what the manager needs to know?
5. Time for thought: Is it reasonable to assume that the participant can frame an answer to the question?
6. Participation at the expense of accuracy: Does the question pressure the participant for a response regardless of knowledge or experience?
7. Presumed knowledge: Does the question assume the participant has knowledge he or she may not have?
8. Recall and memory decay: Does the question ask the participant for information that relates to thoughts or activity too far in the participant's past to be remembered?
9. Balance (general vs. specific): Does the question ask the participant to generalize or summarize behavior that may have no discernable pattern?
10. Objectivity: Does the question omit or include information that will bias the participant's response?
11. Sensitive information: Does the question ask the participant to reveal embarrassing, shameful, or ego-related information?

©McGraw Hill 11-28


Exhibit 11-20: Summary of Issues Related to
Measurement Questions 2

Question Wording
12. Shared vocabulary: Does the question use words that have no meaning or a different meaning for the participant?
13. Unsupported assumption: Does the question assume a prior experience, a precondition, or prior knowledge that the participant does not or may not have?
14. Frame of reference: Is the question worded from the participant's, rather than the researcher's, perspective?
15. Biased wording: Does the question contain wording that implies the researcher's desire for the participant to respond in one way versus another?
16. Personalization vs. projection: Is it necessary for the participant to reveal personal attitudes and behavior, or may the participant project these attitudes and behaviors to someone like him or her?
17. Adequate alternatives: Does the question provide a mutually exhaustive list of alternatives to encompass realistic or likely participant attitudes and behaviors?

©McGraw Hill 11-29


Exhibit 11-20: Summary of Issues Related to
Measurement Questions 3

Response Strategy Choice
18. Objective of the study: Is the question designed to classify or label attitudes, conditions, and behaviors or to reveal them?
19. Level of information: Does the participant possess the level of information appropriate for participation in the study?
20. Thoroughness of prior thought: Has the participant developed an attitude on the issue being asked?
21. Communication skill: Does the participant have sufficient command of the language to answer the question?
22. Participant motivation: Is the level of motivation sufficient to encourage the participant to give thoughtful, revealing answers?
23. Method of access: Is the question appropriate for the participant's method of accessing the instrument?

©McGraw Hill 11-30


Text Images

©McGraw Hill 11-31


Text Images 1

©McGraw Hill 11-32


Text Images 2

©McGraw Hill 11-33


Galaxy…Teen Shopping

• Study on teen shopping
• Compete with specialty stores
• Intra-store boutiques?
• What scale?

©McGraw Hill 11-34


Snapshots,
CloseUps, &
PicProfiles

©McGraw Hill 11-35


Snapshot: Toluna and Voss Measure Water 1

Healthy lifestyle choices

Drink more water

More flavors needed?

©McGraw Hill 11-36


Snapshot: Toluna and Voss Measure Water 2

• Assess 2 new flavors
• Draw from panel > 9 million
• Sample = 1,894 in 6 hours
• Mobile-optimized research

©McGraw Hill 11-37


Snapshot: Toluna and Voss Measure Water 3

• When thinking of water, is the lack of flavor a turn-off for you?
• What types of food do you like to eat while you have a cocktail? Please select one.
• What flavors do you associate with spring/summer time beverages and cocktails? Please select all that apply.

©McGraw Hill 11-38


Snapshot: Maritz Discovers Better Way to
Measure Customer Satisfaction
• New non-compensatory model
• Scale overall satisfaction first
• Then scale individual attributes
• What attribute made the experience so good/bad as to offset all others?

©McGraw Hill 11-39


Snapshot: Paired Comparison Increases
Hospitality

“We now estimate that Americans with disabilities currently spend $13.2 billion in travel expenditures and that amount would at least double [to $27.2 billion] if travel businesses were more attuned to the needs of those with disabilities.”

©McGraw Hill 11-40


Snapshot: The Survey Monkey Question Bank

©McGraw Hill 11-41


CloseUp: Developing a Powerful Prediction
Metric, CIIM 1

PRE-CIIM: Management Dilemma
• Ads crafted: used cultural perceptions of designers.
• Drawn from unsupported stereotypes.
• Not viewed as culturally relevant.
• Ads ignored or rejected.

©McGraw Hill 11-42


CloseUp: Developing a Powerful Prediction
Metric, CIIM 2

Can an ad with a true cultural connection deliver true incremental value?

©McGraw Hill 11-43


CloseUp: Developing a Powerful Prediction
Metric, CIIM 3

• Literature search: constructs of culture
• Literature search: noted authorities
• Identify cultural segments
• Solicit ad submissions
• Select ads for testing
• Design survey: less than 20 min., 20 questions
• Activate survey

©McGraw Hill 11-44


CloseUp: Developing a Powerful Prediction
Metric, CIIM 4

Cultural Pillars

Cultural Acknowledgement

Cultural Respect

Positive Reflections

Cultural Value

Authentic Portrayal

Celebrated Culture:

Cultural Pride

©McGraw Hill 11-45


CloseUp: Developing a Powerful Prediction
Metric, CIIM 5

Culturally relevant ads:
• 1.5x more likely to learn more about the brand
• 2.8x more likely to recommend the brand
• 2.6x more likely to find the brand relevant to them
• 3x more likely to find the ad relevant to them
• 2.7x more likely to purchase the brand for the first time
• 50% more likely to repurchase a brand bought before

©McGraw Hill 11-46


PicProfile: Diversity & Inclusiveness 1

©McGraw Hill Access the text alternative for slide images.


11-47
PicProfile: Diversity & Inclusiveness 2

“If the inclusiveness is perceived to be authentic to that product, viewers are more likely to take desired actions.”

• Consider product/service
• Bought or planned to buy
• Look for info
• Ask friends and family about product/service
• Look for ratings/reviews
• Visit company social media sites
• Compare pricing
• Visit website
• Visit store
©McGraw Hill 11-48
PicProfile: Gen Z

©McGraw Hill Access the text alternative for slide images.


11-49
PulsePoint

©McGraw Hill 11-50


PulsePoint

34: the percent of workers who are considered truly loyal.

©McGraw Hill 11-51


From the
Headlines
Discussions

©McGraw Hill 11-52


From the Headlines
All the new media streaming services have had a negative
effect on movie ticket sales. To make it even worse, some
streaming competition is coming from the same firms that
provide theaters with movies. Those first-run firms are now
going directly into streaming. Assume that AMC is going to
join the fray with a streaming service of its own. AMC wants to
determine if independent movie creators would be interested
in licensing their films for streaming at the same time as that
film is showing in the AMC theater.
Craft the measurement question for an email survey.

©McGraw Hill 11-53


Video Discussion

©McGraw Hill 11-54


Case Discussions

©McGraw Hill 11-55


Learning
Objectives

©McGraw Hill 11-56


Learning Objectives
Understand . . .
• The role of a preliminary analysis plan in developing
measurement questions.
• The critical decisions involved in selecting an appropriate
measurement scale for a measurement question.
• The characteristics of various questions based on their
foundational scales: rating, ranking, and sorting.
• The factors that influence a specific measurement question.

©McGraw Hill 11-57


Chapter Outline

©McGraw Hill 11-58


Research Thought Leader
“We increase measurement error when we ask the wrong
questions. The questions we ask need to be answerable,
clear, unbiased, and easy to answer.”

David F. Harris
president, Insight and Measurement
author, The Complete Guide to Writing Questionnaires: How
to Get Better Information for Better Decisions

©McGraw Hill 11-59


Relationship of Questions to Scales

©McGraw Hill Access the text alternative for slide images.


11-60
Instrument
Design in
the
Research
Process

©McGraw Hill Access the text alternative for slide images.


11-61
Instrument Design: Phase 1 1

©McGraw Hill Access the text alternative for slide images.


11-62
Instrument Design: Phase 1 2

©McGraw Hill Access the text alternative for slide images.


11-63
Instrument Design: Phase 1 3

©McGraw Hill Access the text alternative for slide images.


11-64
Communication Approach

©McGraw Hill 11-65


Instrument Design: Phase 1 4

©McGraw Hill Access the text alternative for slide images.


11-66
Instrument Structure Issues

Free-response
Question Structure
Structured

Objective of the Question

Thoroughness of Prior Thought

Communication Skill

Participant Motivation

©McGraw Hill 11-67


Factors Affecting Concealment of Purpose &
Sponsor
Purpose Concealment: Disguised Questions or Undisguised Questions

Sponsor Concealment: Disguised Questions or Undisguised Questions

Types of information
• Willingly Shared, Conscious-level.
• Reluctantly shared, conscious-level.
• Knowable, limited-conscious-level.
• Subconscious-level.

©McGraw Hill 11-68


Instrument Design: Phase 1 5

©McGraw Hill Access the text alternative for slide images.


11-69
Preliminary Analysis Plan: Dummy Tables

Use of Convenience Foods

Age   Always Use   Use Frequently   Use Sometimes   Rarely Use   Never Use
18–24
25–34
35–44
45–54
55–64
65+

©McGraw Hill 11-70


Instrument Design: Phase 1 6

©McGraw Hill Access the text alternative for slide images.


11-71
Summary of Scales by Data Levels
Nominal
  Characteristics: Classification (mutually exclusive and collectively exhaustive categories), but no order, distance, or natural origin.
  Empirical operations: Count (frequency distribution); mode as central tendency; no measure of dispersion. Used with other variables to discern patterns, reveal relationships.

Ordinal
  Characteristics: Classification and order, but no distance or natural origin.
  Empirical operations: Determination of greater or lesser value. Count (frequency distribution); median as central tendency; nonparametric statistics.

Interval
  Characteristics: Classification, order, and distance (equal intervals), but no natural origin.
  Empirical operations: Determination of equality of intervals or differences. Count (frequency distribution); mean or median as measure of central tendency; standard deviation or interquartile range as measure of dispersion; parametric tests.

Ratio
  Characteristics: Classification, order, distance, and natural origin.
  Empirical operations: Determination of equality of ratios. Any of the above statistical operations, plus multiplication and division; mean as central tendency; coefficients of variation as measure of dispersion.

©McGraw Hill 11-72


Measurement
Questions:
Select the
Scales

©McGraw Hill Access the text alternative for slide images.


11-73
Factors Affecting Measurement Scale
Selection
Research objectives Response types

Number of dimensions Balanced or unbalanced


Forced or unforced Number of scale points
choices
Rater errors

©McGraw Hill 11-74


Research Objectives

Measure characteristics
of participants

Use participants as
judges

©McGraw Hill 11-75


Levels of Measurement 4

Response Types

Ranking Questions

Categorization Questions

Sorting Questions

©McGraw Hill 11-76


Number of Dimensions

Unidimensional

Multidimensional

©McGraw Hill 11-77


Balanced or Unbalanced

How good an actress is Scarlett Johansson?

Balanced scale:
• Very bad.
• Bad.
• Neither good nor bad.
• Good.
• Very good.

Unbalanced scale:
• Poor.
• Fair.
• Good.
• Very good.
• Excellent.

©McGraw Hill 11-78


Forced or Unforced Choices

How good an actress is Scarlett Johansson?

Forced choice:
• Very bad.
• Bad.
• Neither good nor bad.
• Good.
• Very good.

Unforced choice:
• Very bad.
• Bad.
• Neither good nor bad.
• Good.
• Very good.
• No opinion.
• Don’t know.

©McGraw Hill 11-79


Number of Scale Points

How good an actress is Scarlett Johansson?

5-point scale:
• Very bad.
• Bad.
• Neither good nor bad.
• Good.
• Very good.

7-point scale:
• Very bad.
• Somewhat bad.
• A little bad.
• Neither good nor bad.
• A little good.
• Somewhat good.
• Very good.

©McGraw Hill 11-80


Reduce Rater Errors 1

Errors:
• Error of central tendency.
• Error of leniency.
• Error of strictness.

Remedies:
• Adjust strength of descriptive adjectives.
• Space intermediate descriptive phrases farther apart.
• Provide smaller differences in meaning between terms near the ends of the scale.
• Use more scale points.

©McGraw Hill 11-81


Reduce Rater Errors 2

Errors:
• Primacy effect.
• Recency effect.

Remedy:
• Reverse order of alternatives periodically or randomly.

©McGraw Hill 11-82
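A minimal sketch (hypothetical answer alternatives) of the remedy on this slide for primacy and recency effects: present the alternatives in a different random order for each participant.

import random

alternatives = ["Very bad", "Bad", "Neither good nor bad", "Good", "Very good"]

def presentation_order(participant_seed):
    order = alternatives[:]                          # copy; the master list stays fixed
    random.Random(participant_seed).shuffle(order)   # per-participant randomization
    return order

print(presentation_order(1))
print(presentation_order(2))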


Reduce Rater Errors 3

Error:
• Halo effect.

Remedies:
• Rate one trait at a time.
• Reveal one trait per page.
• Reverse anchors periodically.

©McGraw Hill 11-83


Additional Factors Affecting Participant
Honesty 1

©McGraw Hill Access the text alternative for slide images.


11-84
Additional Factors Affecting Participant
Honesty 2

Peacock
  Description: Desire to be perceived as smarter, wealthier, happier, or better than others.
  Example: Respondents who claim to shop at Harrods in London (twice as many claim to as actually do).

Pleaser
  Description: Desire to help by providing answers they think the researchers want to hear, to please or avoid offending or being socially stigmatized.
  Example: Respondents give a politically correct or assumed correct answer about the degree to which they revere their elders, respect their spouse, etc.

Gamer
  Description: Adaptation of answers to play the system.
  Example: Participants who fake membership in a specific demographic to participate in a high-remuneration study, or who claim they drive an expensive car when they don't or that they have cancer when they don't.

Disengager
  Description: Doesn't want to think deeply about a subject.
  Example: Participants who falsify ad recall or purchase behavior (didn't recall or didn't buy) when they actually did.

Self-delusionist
  Description: Participants who lie to themselves.
  Example: Respondents who falsify behavior, such as the level at which they recycle.

Unconscious decision maker
  Description: Participants who are dominated by irrational decision making.
  Example: Respondents who cannot predict with any certainty their future behavior.

Ignoramus
  Description: Participant who never knew or doesn't remember an answer.
  Example: Respondents who provide false information, such as being unable to identify on a map where they live or to remember what they ate for supper the previous evening.

©McGraw Hill 11-85


Nature of Attitudes
Cognitive: I think oatmeal is healthier than corn flakes for breakfast.

Affective: I hate oatmeal for breakfast.

Behavioral: I intend to eat more oatmeal for breakfast.

©McGraw Hill 11-86


Predicting Behavior from Attitudes

©McGraw Hill 11-87


Response Types 1

Rating questions

Ranking Questions

Categorization Questions

Sorting Questions

©McGraw Hill 11-88


Dichotomous-Choice Question
My spouse and I
plan to purchase a
home in the next 12
months.
• Yes.
• No.

©McGraw Hill 11-89


Multiple-Choice, Single-Response Question

What newspaper do
you read most often for
financial news?
• East City Gazette.
• West City Tribune.
• Regional newspaper
• National newspaper.
• Other
(specify:_________).

©McGraw Hill 11-90


Multiple-Choice, Multiple-Response Question

Check any of the


sources you consulted
when designing your
new home.
• Online planning
services.
• Magazines.
• Independent
contractor/builder.
• Designer.
• Architect.
• Other
(specify:_______).
©McGraw Hill 11-91
Likert Scale-based Question
The Internet is superior
to traditional libraries
for comprehensive
searches.
• Strongly Disagree.
• Disagree.
• Neither Agree nor
Disagree.
• Agree.
• Strongly Agree.

©McGraw Hill 11-92


How to Build a Likert Scale with Item Analysis

Collect Statements

Select Participant Stand-ins

Test Statements with Stand-ins


Add Participant Stand-in’s Total Score

Array Scores from Highest to Lowest

Calculate a mean score per statement

Test mean scores for statistical significance

Rank order statements by t value


Select 20 to 25 statements with highest t value

©McGraw Hill 11-93


How to Build a Likert Scale with Item Analysis 1

Step 1 Collect a large number of statements that meet the following criteria
• Each statement is relevant to the attitude being studied.
• Each statement reflects a favorable or unfavorable position on that attitude.
Step 2 Select people similar to study participants (participant stand-ins) to read each statement.

Step 3 Participant stand-ins indicate their level of agreement with each statement, using a 5-point
scale. A scale value of 1 indicates a strongly unfavorable attitude (strongly disagree). A value of 5
indicates a strongly favorable attitude (strongly agree). The other intensities, 2 (disagree), 3
(neither agree nor disagree), 4 (agree), are mid-range attitudes (see Exhibit 11-3).
• To ensure consistent results, the assigned numerical values are reversed if the statement is
worded negatively. The number 1 is always strongly unfavorable and 5 is always strongly
favorable.
Step 4 Add each participant stand-in’s responses to secure a total score.

Step 5 Array these total scores from highest to lowest; then select some portion (generally defined
as the top and bottom 10 to 25 percent of the distribution) to represent the highest and lowest
total scores.
• The two extreme groups represent people with the most favorable and least favorable
attitudes toward the attitude being studied. These extremes are the two criterion groups by
which individual Likert statements (items) are evaluated.
• Discard the middle group’s scores (50 to 80 percent of participant stand-ins), as they are not
highly discriminatory on the attitude.

©McGraw Hill 11-94


How to Build a Likert Scale with Item Analysis 2

Step 6 Calculate the mean scores for each scale item among the low scorers and
high scorers.
Step 7 Test the mean scores for statistical significance by computing a t value for
each statement.
Step 8 Rank order the statements by their t values from highest to lowest.
Step 9 Select 20–25 statements (items) with the highest t values (statistically
significant difference between mean scores) to include in the final question
using the Likert scale.

©McGraw Hill 11-95


Evaluating a Scale Statement by Item Analysis

For the statement “I would bookmark the site in the ad to use in the future,” we select the data from the bottom 25 percent of the distribution (low total score group) and the top 25 percent (high total score group). There are 73 people in each group. The remaining 50 percent of participants are not considered for this analysis.

©McGraw Hill Access the text alternative for slide images.


11-96
Semantic Differential-based Question

©McGraw Hill 11-97


How to Construct an SD Question

• Select the variable.
• Identify nouns, noun phrases, adjectives, or visual stimuli that represent the variable.
• Select bipolar pairs, phrase pairs, or visual pairs: 3 for each dimension (evaluation, potency, activity).
• Create the scoring system.
• Order the bipolar pairs: half with the positive pole on the left, half with the positive pole on the right; no pairs representing one dimension together.
©McGraw Hill 11-98
Exhibit 11-14: How to Construct an SD Scale 2

Step 1 Select the variable; chosen by judgment and reflects the nature of the investigative question.
Step 2 Identify possible nouns, noun phrases, adjectives, or visual stimuli to represent the variable.
Step 3 Select bipolar word pairs, phrase pairs, or visual pairs appropriate to assess the object or
property. If the traditional Osgood adjectives are used, several criteria guide your selection:
• Choose adjectives that allow connotative perceptions to be expressed.
• Choose three bipolar pairs for each dimension: evaluation, potency, and activity. (Scores on
the individual items can be averaged, by factor, to improve reliability.)
• Choose pairs that will be stable across participants and variables. (One pair that fails this
test is “large–small”; it may describe a property when judging a physical object such as an
automobile but may be used connotatively with abstract concepts such as product quality.)
• Choose pairs that are linear between polar opposites and pass through the origin. (A pair
that fails this test is “rugged–delicate,” which is nonlinear because both adjectives have favorable
meanings.)
Step 4 Create the scoring system and assign a positive value to each point on the scale. (Most SD
scales have
7 points with values of 7, 6, 5, 4, 3, 2, and 1. A “0” point is arbitrary.)
Step 5 Randomly select half the pairs and reverse score them to minimize the halo effect.
Step 6 Order the bipolar pairs so all representing a single dimension (e.g. evaluation) are not together
in the final measurement question.

©McGraw Hill 11-99


Adapting SD Scales

©McGraw Hill Access the text alternative for slide images.


11-100
Exhibit 11-15: SD Scale for Analyzing Industry
Association Candidates 2

©McGraw Hill Access the text alternative for slide images.


11-101
Exhibit 11-16: Graphic Representation of SD Analysis 2

©McGraw Hill Access the text alternative for slide images.


11-102
Numerical Rating Question

©McGraw Hill Access the text alternative for slide images.


11-103
Multiple Rating List Question

©McGraw Hill Access the text alternative for slide images.


11-104
Stapel Scale-based Question

©McGraw Hill Access the text alternative for slide images.


11-105
Constant-Sum Question

©McGraw Hill Access the text alternative for slide images.


11-106
Verbal Graphic Rating Question

©McGraw Hill Access the text alternative for slide images.


11-107
Visual Graphic Rating Question

©McGraw Hill Access the text alternative for slide images.


11-108
Characteristics of Scale Types
Dichotomous
  Data level: Nominal. Usual number of answer alternatives provided: 2. Desired number of participant answers: 1. Used to provide: Classification.

Multiple Choice
  Data level: Nominal, ordinal, or ratio. Usual number of answer alternatives provided: 3 to 10. Desired number of participant answers: 1. Used to provide: Classification, order, or specific numerical estimate.

Checklist
  Data level: Nominal. Usual number of answer alternatives provided: 10 or fewer. Desired number of participant answers: 10 or fewer. Used to provide: Classification.

Rating
  Data level: Ordinal or interval*. Usual number of answer alternatives provided: 3 to 7. Desired number of participant answers: 1 per item. Used to provide: 1 per item.

Ranking
  Data level: Ordinal. Usual number of answer alternatives provided: 10 or fewer. Desired number of participant answers: 7 or fewer. Used to provide: Order.

Free Response
  Data level: Nominal or ratio. Usual number of answer alternatives provided: None. Desired number of participant answers: 1. Used to provide: Classification (of idea), order, or specific numerical estimate.

©McGraw Hill 11-109


Response Types 2

Rating questions

Ranking Questions

Categorization Questions

Sorting Questions

©McGraw Hill 11-110


Ranking Questions

Paired-comparison

Forced ranking

Comparative

©McGraw Hill 11-111


Paired-Comparison Question

©McGraw Hill Access the text alternative for slide images.


11-112
Paired Comparison Response Patterns
Paired-comparison data may be treated in several ways. If there is substantial consistency, we will find that
if A is preferred to B, and B to C, then A will be consistently preferred to C. This condition of transitivity need
not always be true but should occur most of the time. When it does, take the total number of preferences
among the comparisons as the score for that stimulus. Assume a manager is considering five distinct
packaging designs. She would like to know how heavy users would rank these designs. One option would
be to ask a sample of the heavy-users segment to pair-compare the packaging designs. With a rough
comparison of the total preferences for each option, it is apparent that B is the most popular.

Designs
A B C D E
A — 164* 138 50 70
B 36 — 54 14 30
C 62 146 — 32 50
D 150 186 168 — 118
E 130 170 150 82 —
Total 378 666 510 178 268
Rank order 3 1 2 5 4

* Interpret this cell as 164 of 200 customers preferred suggested design B (column) to design A (row).
©McGraw Hill 11-113
Forced Ranking Question

©McGraw Hill Access the text alternative for slide images.


11-114
Comparative Questions

©McGraw Hill Access the text alternative for slide images.


11-115
Ideal Scalogram Pattern
Item 2 Item 4 Item 1 Item 3 Participant Score
X X X X 4
— X X X 3
— — X X 2
— — — X 1
— — — — 0

* X = agree; — = disagree.

©McGraw Hill 11-116


Response Types 3

Rating questions

Ranking Questions

Categorization Questions

Sorting Questions

©McGraw Hill 11-117


Response Types 4

Rating questions

Ranking Questions

Categorization Questions

Sorting Questions

©McGraw Hill 11-118


Sorting Questions

• Select descriptors: more than 60, fewer than 120.
• Create cards and shuffle them.
• Sort cards into piles: between 7 and 11 piles; structured sort or unstructured sort.
• Arrange piles: left = most favorable, right = least favorable.

©McGraw Hill 11-119


Find or Craft Measurement Questions 1

Question Coverage

Question wording

Question Frame of
Reference

Response
Alternatives

©McGraw Hill 11-120


Find or Craft Measurement Questions 2

Question Coverage

Question wording

Question Frame of Reference

Response Alternatives

Personalization
Shared vocabulary
Leading questions
Double-barreled questions
Unsupported assumptions
©McGraw Hill 11-121
Find or Craft Measurement Questions 3

Question Coverage

Question wording
Question Frame of Reference

Response Alternatives

Role
Behavior time frame
Behavior cycle

Behavior frequency
Memory Decay
©McGraw Hill 11-122
Find or Craft Measurement Questions 4

Question Coverage

Question wording
Question Frame of Reference

Response Alternatives

Free-response
Structured
90% of responses
Recency Effect
Primacy Effect
Central Tendency
©McGraw Hill 11-123
Exhibit 11-20: Summary of Issues Related to
Measurement Questions 4

Question Content
1. Purposeful versus interesting: Does the question ask for data that will be merely interesting or truly useful to the manager in making a decision?
2. Incomplete or unfocused: Will the question reveal what the manager needs to know?
3. Double-barreled questions: Does the question ask the participant for too much information? Would the desired single response be accurate for all parts of the question?
4. Precision: Does the question ask precisely what the manager needs to know?
5. Time for thought: Is it reasonable to assume that the participant can frame an answer to the question?
6. Participation at the expense of accuracy: Does the question pressure the participant for a response regardless of knowledge or experience?
7. Presumed knowledge: Does the question assume the participant has knowledge he or she may not have?
8. Recall and memory decay: Does the question ask the participant for information that relates to thoughts or activity too far in the participant's past to be remembered?
9. Balance (general vs. specific): Does the question ask the participant to generalize or summarize behavior that may have no discernable pattern?
10. Objectivity: Does the question omit or include information that will bias the participant's response?
11. Sensitive information: Does the question ask the participant to reveal embarrassing, shameful, or ego-related information?

©McGraw Hill 11-124


Exhibit 11-20: Summary of Issues Related to
Measurement Questions 5

Question Wording
12. Shared vocabulary: Does the question use words that have no meaning or a different meaning for the participant?
13. Unsupported assumption: Does the question assume a prior experience, a precondition, or prior knowledge that the participant does not or may not have?
14. Frame of reference: Is the question worded from the participant's, rather than the researcher's, perspective?
15. Biased wording: Does the question contain wording that implies the researcher's desire for the participant to respond in one way versus another?
16. Personalization vs. projection: Is it necessary for the participant to reveal personal attitudes and behavior, or may the participant project these attitudes and behaviors to someone like him or her?
17. Adequate alternatives: Does the question provide a mutually exhaustive list of alternatives to encompass realistic or likely participant attitudes and behaviors?

©McGraw Hill 11-125


Exhibit 11-20: Summary of Issues Related to
Measurement Questions 6

Response Strategy Choice
18. Objective of the study: Is the question designed to classify or label attitudes, conditions, and behaviors or to reveal them?
19. Level of information: Does the participant possess the level of information appropriate for participation in the study?
20. Thoroughness of prior thought: Has the participant developed an attitude on the issue being asked?
21. Communication skill: Does the participant have sufficient command of the language to answer the question?
22. Participant motivation: Is the level of motivation sufficient to encourage the participant to give thoughtful, revealing answers?
23. Method of access: Is the question appropriate for the participant's method of accessing the instrument?

©McGraw Hill 11-126


Instrument Design: Phase 1 7

©McGraw Hill Access the text alternative for slide images.


11-127
Pretest Measurement Questions

Pretest with participant surrogates and research colleagues to identify:
• Unanswerable questions
• Difficult-to-answer questions
• Inappropriate rating scales
• Questions missing a frame of reference
• Questions that need instructions
• Questions that need operational definitions
©McGraw Hill 11-128
Chapter 11 2

STAGE 3: MEASUREMENT QUESTIONS

Copyright 2022 © McGraw Hill LLC. All rights reserved. No reproduction or distribution without the prior written consent of McGraw Hill LLC.
11-129
Key Terms 1

• Acquiescence bias. • Dichotomous question.


• Attitude. • Disguised question.
• Attitude scaling. • Double-barreled question.
• Balanced rating scale. • Dummy table.
• Behavior time frame. • Error of central tendency.
• Behavior cycle. • Error of leniency.
• Behavior frequency. • Error of strictness.
• Categorization. • Forced-choice questions.
• Checklist question. • Forced-choice rating scale.
• Comparative scale. • Forced rank question.
• Constant-sum scale. • Forced ranking scale.
• Cumulative question. • Frame of reference.
• Cumulative scale. • Free-response question.
©McGraw Hill 11-130
Key Terms 2

• Graphic rating scale. • Multiple choice question.


• Halo effect (error). • Multiple rating list scale.
• Hypothetical construct. • Numerical scale.
• Leading question. • Paired-comparison scale.
• Likert scale. • Pretesting.
• Measurement instrument. • Preliminary analysis plan.
• Measurement question. • Primacy effect.
• Measurement scale. • Q-sort.
• Memory scale. • Ranking scale.
• Multidimensional scale. • Rating scale.
• Multiple-choice, multiple-response • Rating question.
scale. • Recency effect.
• Multiple-choice, single-response • Scaling.
scale.

©McGraw Hill 11-131


Key Terms 3

• Scalogram analysis. • Summated rating scale.


• Semantic differential. • Unbalanced rating scale.
• Simple category scale. • Unforced-choice questions.
• Sorting. • Unforced-choice rating scale.
• Stapel scale. • Unidimensional scale.
• Summated rating scale. • Unsupported assumption
• Structured response. question.
• Structured question. • Visual graphic rating scale.

©McGraw Hill 11-132


Photo Attributions 1

Slide Image Attribution


31 ©Africa Studio/Shutterstock
32 ©Hurst Photo/Shutterstock.com
35 ©Shutterstock/Maridav
36 © Shutterstock/Maridav
37 © Shutterstock/Maridav
38 © Nasi Sakura/Purestock/Superstock
39 © Creatas/age footstock
40 ©Pamela Schindler/McGraw-Hill
(developed from a SurveyMonkey template window and incorporating an image from
©monkeybusinessimages/Getty Images)
41 © Shutterstock / CREATISTA
42 © Shutterstock / CREATISTA
43 © Shutterstock / CREATISTA
44 © Shutterstock / CREATISTA
45 © Shutterstock / CREATISTA
46 © Pamela Schindler/McGraw-Hill
47 © pixelheadphoto digitalskillet/Shutterstock
74 (1st) ©View Apart/Shutterstock
74 (2nd) ©Bjorn Heller/Getty Images
75 ©Chad McDermott / Alamy
76 ©Ken Welsh/age fotostock
85 ©John A. Rizzo/Getty Images
87 ©iQoncept/Shutterstock
88 ©Ryan McVay/Getty Images

©McGraw Hill 11-133


Photo Attributions 2

Slide Image Attribution


91 ©Shutterstock / Dragon Images
95 © Shutterstock / Andrey Popov
101 © Chris Ryan / age fotostock
102 ©Juice Images / Alamy
103 © travelbetter.co.uk / Alamy Stock Photo
104 ©DreamPictures/Shannon Faulk/Blend Images LLC
105 ©Jerry Ballard/Alamy
106 ©Jerry Ballard/Alamy
109 ©Image Source Trading Ltd/Shutterstock
110 ©Getty Images/iStockphoto
112 ©Jacom Stephens/Getty Images
113 ©Getty Images/Blend Images
115 ©Courtesy of FocusVision, Inc.
116 ©Pamela Schindler/McGraw-Hill
117 ©Pamela Schindler/McGraw-Hill
118 ©tashka2000/Getty Images
119 ©tashka2000/Getty Images
120 ©tashka2000/Getty Images
121 ©tashka2000/Getty Images

©McGraw Hill 11-134


Chapter 11 3

STAGE 3: MEASUREMENT QUESTIONS

Copyright 2022 © McGraw Hill LLC. All rights reserved. No reproduction or distribution without the prior written consent of McGraw Hill LLC.
11-135
