Lesson 4
Research Instrument, Validity, and Reliability
Learning Objectives:
At the end of the lesson, you should be able to do the following:
• Identify the instruments that researchers use in a quantitative study.
• Create a valid and reliable instrument for a quantitative study.
• Assess the instrument's validity and reliability to provide good quantitative data.
Quantitative Research Instrument
An instrument is a tool, such as a questionnaire or survey, that measures specific items to gather quantitative data. Research instruments are the basic tools researchers use to gather data for specific research problems.
Types of Research Instrument
Demographic Forms
• used to collect basic information such
as age, gender, ethnicity, and annual
income.
Example of a Demographic Form
1. Age: ___
2. Gender:
   ____ Male
   ____ Female
   ____ Prefer not to say
3. Civil Status:
   ____ Single
   ____ Married
   ____ Widowed
4. Nationality: _____________________
Performance Measures
• used to assess or rate an individual’s
ability such as achievement,
intelligence, aptitude, or interests.
Attitudinal Measures
• instruments used to measure an
individual’s attitudes and opinions
about a subject.
Behavioral Observation Checklists
• used to record individuals’ behaviors, mostly when researchers want to measure actual behavior rather than reported behavior.
Questionnaires
In quantitative research, questionnaires use
the following approaches:
a. a rating scale (usually a Likert scale)
b. conversion of responses into numerical
values
e.g. strongly agree as 5, agree as 4, neutral
as 3, disagree as 2, and strongly disagree as 1
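The conversion above can be sketched in a few lines of Python. The label-to-value mapping mirrors the example; the survey responses themselves are hypothetical:

```python
# Map Likert response labels to numerical values (as in the example above).
LIKERT_SCALE = {
    "strongly agree": 5,
    "agree": 4,
    "neutral": 3,
    "disagree": 2,
    "strongly disagree": 1,
}

def encode_responses(responses):
    """Convert a list of Likert response labels to numeric scores."""
    return [LIKERT_SCALE[r.lower()] for r in responses]

# Hypothetical responses to one questionnaire item:
scores = encode_responses(["Agree", "Strongly agree", "Neutral", "Disagree"])
print(scores)  # [4, 5, 3, 2]
```

Once responses are numeric, they can be averaged, tabulated, and analyzed statistically.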
Ways of Developing a Research Instrument
• adopting an existing instrument
• adapting or modifying an existing instrument
• constructing your own instrument
How to Construct Research
Instruments
1. State your research objectives.
2. Ask questions about your
objectives.
3. Gather the required information.
4. Formulate questions.
The general rule in constructing a research instrument is to ensure that the questions in the instrument are relevant to the objectives of the study.
Characteristics of a Good Research
Instrument
01 Concise
02 Sequential
03 Valid and reliable
04 Easily tabulated
Another important consideration in constructing a
research instrument is how to establish its validity
and reliability:
Validity
• A research instrument is considered
valid if it measures what it is
supposed to measure.
Types of Validity of Instrument
a. Construct Validity – evaluates whether a
measurement tool really represents the
thing we are interested in measuring. It’s
central to establishing the overall validity
of a method.
What is a construct?
• A construct refers to a concept or
characteristic that can’t be directly observed,
but can be measured by observing other
indicators that are associated with it.
• Constructs can be characteristics of individuals,
such as intelligence, obesity, job satisfaction, or
depression; they can also be broader concepts
applied to organizations or social groups, such as
gender equality, corporate social responsibility,
or freedom of speech.
Example:
There is no objective, observable entity called
“depression” that we can measure directly.
However, based on existing psychological
research and theory, we can measure depression
based on a collection of symptoms and
indicators, such as low self-confidence and low
energy levels.
b. Content Validity – ability of the test
items to include important characteristics
of the concept intended to be measured.
To produce valid results, the content of a test,
survey, or measurement method must cover all
relevant parts of the subject it aims to measure. If
some aspects are missing from the measurement (or
if irrelevant aspects are included), the validity is
threatened and the research is likely suffering from
omitted variable bias.
Example:
A mathematics teacher develops an end-of-semester
algebra test for her class. The test should cover every
form of algebra that was taught in the class. If some
types of algebra are left out, then the results may not
be an accurate indication of students’ understanding of
the subject. Similarly, if she includes questions that
are not related to algebra, the results are no longer a
valid measure of algebra knowledge.
c. Face Validity – considers how suitable
the content of a test seems to be on the
surface. It’s similar to content validity,
but face validity is a more informal and
subjective assessment.
Example:
You create a survey to measure the regularity of
people’s dietary habits. You review the survey
items, which ask questions about every meal of
the day and snacks eaten in between for every
day of the week. On its surface, the survey
seems like a good representation of what you
want to test, so you consider it to have high
face validity.
As face validity is a subjective
measure, it’s often considered the weakest
form of validity. However, it can be useful
in the initial stages of developing a method.
d. Criterion Validity – tells whether a
certain research instrument can give the
same result as other similar instruments.
To evaluate criterion validity, you calculate
the correlation between the results of your
measurement and the results of the criterion
measurement. If there is a high correlation, this
gives a good indication that your test is
measuring what it intends to measure.
Example:
A university professor creates a new test to measure
applicants’ English writing ability. To assess how well
the test measures students’ writing ability, she finds
an existing test that is considered a valid
measurement of English writing ability and compares
the results when the same group of students take
both tests. If the outcomes are very similar, the new
test has high criterion validity.
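The correlation check in the example above can be made concrete with a small Python sketch. The scores are invented for illustration; a real study would use actual results from both tests:

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical scores of the same students on the new writing test
# and on the established (criterion) test:
new_test = [78, 85, 62, 90, 71]
criterion = [80, 88, 60, 94, 70]
r = pearson(new_test, criterion)
print(round(r, 2))  # a value close to 1 suggests high criterion validity
```

The same calculation applies to test-retest and parallel forms reliability, where the two score lists come from two administrations or two versions of the test instead of two different instruments.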
Reliability
• refers to the consistency of the
measures or results of the
instrument.
Reliability of Instrument
a. Test-retest Reliability – measures the
consistency of results when you repeat the
same test on the same sample at a
different point in time. You use it when you
are measuring something that you expect
to stay constant in your sample.
Example:
You devise a questionnaire to measure the IQ of
a group of participants (a property that is
unlikely to change significantly over time). You
administer the test two months apart to the
same group of people, but the results are
significantly different, so the test-retest
reliability of the IQ questionnaire is low.
b. Interrater Reliability (also called
interobserver reliability) – measures the
degree of agreement between different
people observing or assessing the same
thing. You use it when data is collected by
researchers assigning ratings, scores, or
categories to one or more variables.
Example:
A team of researchers observes the progress of
wound healing in patients. To record the stages of
healing, rating scales are used, with a set of
criteria to assess various aspects of wounds. The
results of different researchers assessing the same
set of patients are compared, and there is a strong
correlation between all sets of results, so the test
has high interrater reliability.
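A simple (if crude) way to quantify interrater agreement is the proportion of cases on which two raters assign the same category. The ratings below are hypothetical; more robust statistics, such as Cohen's kappa, additionally correct for chance agreement:

```python
def percent_agreement(rater_a, rater_b):
    """Proportion of items on which two raters gave the same rating."""
    if len(rater_a) != len(rater_b):
        raise ValueError("Raters must rate the same set of items.")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

# Hypothetical wound-healing stage ratings (1-4) by two observers
# for the same 10 patients:
rater_a = [2, 3, 1, 4, 2, 2, 3, 1, 4, 3]
rater_b = [2, 3, 1, 4, 2, 3, 3, 1, 4, 3]
print(percent_agreement(rater_a, rater_b))  # 0.9
```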
c. Parallel Forms Reliability – measures the
correlation between two equivalent
versions of a test. You use it when you have
two different assessment tools or sets of
questions designed to measure the same
thing.
Example:
A set of questions is formulated to measure financial
risk aversion in a group of respondents. The questions
are randomly divided into two sets, and the
respondents are randomly divided into two groups.
Both groups take both tests: group A takes test A
first, and group B takes test B first. The results of the
two tests are compared, and the results are almost
identical, indicating high parallel forms reliability.
d. Internal Consistency – assesses the
correlation between multiple items in a
test that are intended to measure the same
construct.
Example:
A group of respondents are presented with a set of
statements designed to measure optimistic and pessimistic
mindsets. They must rate their agreement with each
statement on a scale from 1 to 5. If the test is internally
consistent, an optimistic respondent should generally give
high ratings to optimism indicators and low ratings to
pessimism indicators. The correlation is calculated
between all the responses to the “optimistic” statements,
but the correlation is very weak. This suggests that the
test has low internal consistency.
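Internal consistency is commonly summarized with Cronbach's alpha. A minimal sketch follows; the 1-to-5 ratings are invented, and a real analysis would normally use a statistics package:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score lists.

    Each inner list holds one item's scores across all respondents.
    """
    k = len(items)
    item_vars = sum(pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return k / (k - 1) * (1 - item_vars / pvariance(totals))

# Hypothetical 1-5 agreement ratings: three "optimism" items
# answered by five respondents.
items = [
    [4, 5, 2, 4, 3],
    [4, 4, 1, 5, 3],
    [5, 4, 2, 4, 2],
]
alpha = cronbach_alpha(items)
print(round(alpha, 2))  # values above roughly 0.7 are usually read as acceptable
```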