
Reliability

Dr Saranya TS
Assistant Professor
 In statistics and psychometrics, reliability is the overall consistency of a measure. A
measure is said to have high reliability if it produces similar results under consistent
conditions.
 It can be defined as the extent to which an experiment, test, or measuring procedure
yields the same results on repeated trials.
 The term reliability in psychological research refers to the consistency of a research
study or measuring test. For example, a person who weighs themselves several times
over the course of a day would expect to see a similar reading each time. A scale that
measured weight differently each time would be of little use.
 If findings from research are replicated consistently, they are reliable. A correlation
coefficient can be used to assess the degree of reliability: if a test is reliable, it
should show a high positive correlation (see the sketch below).
 There are two types of reliability – internal and external reliability.
 Internal reliability assesses the consistency of results across items within a test.
 External reliability refers to the extent to which a measure varies from one use to
another.
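
As a rough illustration of how a correlation coefficient indexes reliability, the sketch below (Python, with entirely made-up scores) computes a Pearson correlation between two sets of measurements taken from the same people; a value close to +1 would indicate high reliability. The data and helper name are hypothetical, not from the slides.

# Minimal sketch: reliability estimated as the Pearson correlation
# between two sets of measurements of the same people (hypothetical data).
import statistics

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores from the same ten people measured twice.
first = [12, 15, 9, 20, 17, 11, 14, 18, 10, 16]
second = [13, 14, 10, 19, 18, 12, 13, 17, 11, 15]

print(f"reliability estimate r = {pearson_r(first, second):.2f}")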
Internal Consistency
In statistics and research, internal consistency is typically a measure based on the
correlations between different items on the same test. It measures whether several
items that purport to measure the same general construct produce similar scores.
Internal consistency is a reliability measurement in which items on a test are
correlated in order to determine how well they measure the same construct or
concept. Reliability shows how consistent a test or measurement is. Internal
consistency is a check to ensure all of the test items are measuring the concept they
are supposed to be measuring.
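
Internal consistency is commonly summarized with Cronbach's alpha (not derived in the slides). A minimal sketch, assuming a small hypothetical respondents-by-items matrix of ratings:

# Minimal sketch: Cronbach's alpha as an index of internal consistency,
# computed from a hypothetical respondents-by-items rating matrix.
import statistics

def cronbach_alpha(rows):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(rows[0])              # number of items
    items = list(zip(*rows))      # transpose: one tuple of scores per item
    item_var_sum = sum(statistics.variance(item) for item in items)
    total_var = statistics.variance([sum(row) for row in rows])
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Hypothetical 5-point ratings from six respondents on four items
# that are all meant to measure the same construct.
responses = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
]
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")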
Test-Retest Reliability
 Test-retest reliability is a measure of the consistency of a psychological test or
assessment. This kind of reliability is used to determine the consistency of a test
across time. Test-retest reliability is best used for things that are stable over time,
such as intelligence.
 Test-retest reliability is measured by administering a test twice at two different
points in time. This type of reliability assumes that there will be no change in the
quality or construct being measured. In most cases, reliability will be higher when
little time has passed between tests.
 A typical assessment would involve giving participants the same test on two
separate occasions. If the same or similar results are obtained, then external
reliability is established. A disadvantage of the test-retest method is that it takes
a long time for results to be obtained.
 The timing of the test is important; if the interval is too brief, participants may
recall information from the first test, which could bias the results.
 Alternatively, if the interval is too long, it is possible that the participants could
have changed in some important way, which could also bias the results.
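
A minimal sketch of the test-retest procedure, assuming hypothetical scores from the same eight people tested on two occasions (statistics.correlation requires Python 3.10 or later):

# Minimal sketch: test-retest reliability as the correlation between
# scores from the same people at two points in time (hypothetical data).
from statistics import correlation  # Python 3.10+

# Hypothetical scores for eight people, tested twice a month apart.
first_testing = [101, 95, 110, 123, 88, 97, 104, 115]
second_testing = [103, 97, 108, 121, 90, 99, 102, 117]

r = correlation(first_testing, second_testing)
print(f"test-retest reliability r = {r:.2f}")  # near +1 suggests a stable measure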
Parallel Forms Reliability
 Parallel-forms reliability is gauged by comparing two different
tests that were created using the same content. This is
accomplished by creating a large pool of test items that
measure the same quality and then randomly dividing the
items into two separate tests. The two tests should then be
administered to the same subjects at the same time.
 In the parallel-forms method, two tests that are equivalent – in the sense that they
contain the same kinds of items of equal difficulty, but not the same items – are
administered to the same individuals. This technique is also referred to as the
method of equivalent forms.
 The most common way to measure parallel forms reliability is to produce a large set
of questions that evaluate the same thing, then divide these randomly into two
question sets.
 The same group of respondents answers both sets, and you calculate the correlation
between the results. A high correlation between the two sets indicates high parallel
forms reliability.
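
A minimal sketch of this procedure, assuming a hypothetical pool of ten right/wrong items that is randomly divided into two five-item forms (statistics.correlation requires Python 3.10 or later):

# Minimal sketch: parallel-forms reliability. A hypothetical item pool is
# randomly split into two forms; each person's score on form A is then
# correlated with their score on form B.
import random
from statistics import correlation  # Python 3.10+

random.seed(0)  # fixed seed so the random split is reproducible

# Hypothetical item responses: rows = people, columns = 10 items (1 = correct).
responses = [
    [1, 1, 0, 1, 1, 1, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0, 1, 0, 0, 1, 0],
    [1, 0, 1, 1, 1, 0, 1, 1, 0, 1],
    [1, 1, 1, 0, 1, 1, 0, 1, 1, 1],
]

# Randomly divide the item pool into two forms of equal length.
item_ids = list(range(10))
random.shuffle(item_ids)
form_a, form_b = item_ids[:5], item_ids[5:]

score_a = [sum(person[i] for i in form_a) for person in responses]
score_b = [sum(person[i] for i in form_b) for person in responses]

print(f"parallel-forms reliability r = {correlation(score_a, score_b):.2f}")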
Alternate Form Reliability
 Alternate-form reliability is the consistency of test results between two different –
but equivalent – forms of a test. Alternate-form reliability is used when it is
necessary to have two forms of the same test.
– To determine alternate-form reliability, two forms of the same test are
administered to students, and the students' scores on the two forms are
correlated. The resulting coefficient is called the alternate-form coefficient of
reliability.
– Alternate-form reliability is needed whenever two test forms are being used to
measure the same thing. Ideally, the administration of the two forms should be
done in a short time span.
 In order to call the forms “parallel”, the observed scores must have the same mean
and variance. If the tests are merely different versions (without this “sameness” of
observed scores), they are called alternate forms.
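
A minimal sketch of this distinction, checking whether two hypothetical forms have roughly equal observed-score means and variances (the "parallel" requirement) alongside their alternate-form coefficient (statistics.correlation requires Python 3.10 or later):

# Minimal sketch: are two forms "parallel"? Their observed scores should
# have (roughly) the same mean and variance. Data are hypothetical.
from statistics import mean, variance, correlation  # correlation: Python 3.10+

# Hypothetical scores for the same eight students on two forms of a test.
form_1 = [34, 28, 41, 37, 25, 30, 39, 33]
form_2 = [35, 27, 40, 38, 26, 31, 38, 34]

print(f"form 1: mean = {mean(form_1):.1f}, variance = {variance(form_1):.1f}")
print(f"form 2: mean = {mean(form_2):.1f}, variance = {variance(form_2):.1f}")
print(f"alternate-form coefficient r = {correlation(form_1, form_2):.2f}")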
Difference between Test-Retest and Alternate-Form Reliability
 Test-Retest Reliability: Used to assess the consistency of a measure from
one time to another.
 Parallel-Forms Reliability: Used to assess the consistency of the results of two
tests constructed in the same way from the same content domain.
Interrater Reliability
 In statistics, inter-rater reliability is the degree of agreement among independent
observers who rate, code, or assess the same phenomenon.
 Interrater reliability (also called interobserver reliability) measures the
degree of agreement between different people observing or assessing
the same thing. You use it when data is collected by researchers
assigning ratings, scores or categories to one or more variables.
 In an observational study where a team of researchers collects data on classroom
behavior, interrater reliability is important: all the researchers should agree on how
to categorize or rate different types of behavior.
 To measure interrater reliability, different researchers conduct the same
measurement or observation on the same sample. Then you calculate the
correlation between their different sets of results. If all the researchers
give similar ratings, the test has high interrater reliability.
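
The slides describe correlating raters' sets of results; for categorical codes like the classroom example, a common chance-corrected agreement index is Cohen's kappa. A minimal sketch with two hypothetical raters:

# Minimal sketch: interrater agreement via Cohen's kappa, a
# chance-corrected index for categorical ratings (hypothetical data).
from collections import Counter

def cohens_kappa(rater1, rater2):
    """kappa = (p_observed - p_expected) / (1 - p_expected)."""
    n = len(rater1)
    p_obs = sum(a == b for a, b in zip(rater1, rater2)) / n
    counts1, counts2 = Counter(rater1), Counter(rater2)
    p_exp = sum(counts1[cat] * counts2[cat] for cat in counts1) / n ** 2
    return (p_obs - p_exp) / (1 - p_exp)

# Two researchers coding the same ten classroom behaviors.
rater1 = ["on-task", "off-task", "on-task", "disruptive", "on-task",
          "off-task", "on-task", "on-task", "disruptive", "off-task"]
rater2 = ["on-task", "off-task", "on-task", "off-task", "on-task",
          "off-task", "on-task", "on-task", "disruptive", "on-task"]

print(f"Cohen's kappa = {cohens_kappa(rater1, rater2):.2f}")

A kappa near 1 indicates strong agreement beyond chance; here the two raters disagree on two of the ten codes, so the index comes out moderate rather than high.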
Thank you
