Xyron Mer B.
Quiñones
11-HE
Mr. Vaswani
How research instruments are validated
According to my research, there are three parts to validating a research instrument:
the instrument itself, validity, and reliability. The first part is the instrument. An instrument is the general term
that researchers use for a measurement device (survey, test, questionnaire, etc.). To help
distinguish between instrument and instrumentation, consider that the instrument is the device
and instrumentation is the course of action (the process of developing, testing, and using the
device). Instruments fall into two broad categories, researcher-completed and subject-completed,
distinguished by those instruments that researchers administer versus those that are completed by
participants. Researchers choose which type of instrument, or instruments, to use based on the
research question. When possible, it is best to use an existing instrument, one that has been
developed and tested numerous times, such as can be found in the Mental Measurements
Yearbook. The second part is validity. Validity is the extent to which an instrument measures what it is
supposed to measure and performs as it is designed to perform. It is rare, if not impossible,
for an instrument to be 100% valid, so validity is generally measured in degrees. As a process,
validation involves collecting and analyzing data to assess the accuracy of an instrument. There
are numerous statistical tests and measures to assess the validity of quantitative instruments,
which generally involves pilot testing. The remainder of this discussion focuses on external
validity and content validity. External validity is the extent to which the results of a study can be
generalized from a sample to a population. Establishing external validity for an instrument, then,
follows directly from sampling. Recall that a sample should be an accurate representation of a
population, because the total population may not be available. An instrument that is externally
valid helps obtain population generalizability, or the degree to which a sample represents the
population. Content validity refers to the appropriateness of the content of an instrument. In other
words, do the measures (questions, observation logs, etc.) accurately assess what the study is
intended to measure? The third part is reliability. According to the site that I
used for this assignment, reliability can be thought of as consistency. Does the instrument
consistently measure what it is intended to measure? It is not possible to calculate reliability
exactly; however, there are four general estimators that you may encounter in reading research:
1. Inter-Rater/Observer Reliability: The degree to which different raters/observers give
consistent answers or estimates.
2. Test-Retest Reliability: The consistency of a measure evaluated over time.
3. Parallel-Forms Reliability: The reliability of two tests constructed the same way, from the
same content.
4. Internal Consistency Reliability: The consistency of results across items, often measured
with Cronbach’s Alpha.
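To illustrate the fourth estimator, here is a minimal Python sketch of Cronbach's Alpha, using the standard formula based on item and total-score variances. The survey scores below are hypothetical, invented only for illustration:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's Alpha for k items answered by the same respondents.

    items: list of k lists, each holding one item's scores
           across the same n respondents.
    """
    k = len(items)
    # Sum of the variances of each individual item.
    item_vars = sum(pvariance(col) for col in items)
    # Variance of each respondent's total score across all items.
    totals = [sum(scores) for scores in zip(*items)]
    total_var = pvariance(totals)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical data: three items answered by five respondents on a 1-5 scale.
scores = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 2, 4, 3],
]
print(round(cronbach_alpha(scores), 3))  # prints 0.864
```

A value closer to 1 indicates that the items give consistent results; values of roughly 0.7 or above are commonly read as acceptable internal consistency.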
Finally, usability refers to the ease with which an instrument can be administered, interpreted by
the participant, and scored/interpreted by the researcher.