Screening for Disease
Screening has been defined as:
“the search for unrecognized disease or defect by means of
rapidly applied tests, examinations or other procedures in
apparently healthy individuals.”
To bring health examination within the reach of large masses
of people with minimal expenditure of time and money, a
number of alternative approaches have come into use.
Screening tests are primarily:
conserving of the physician's time
capable of wide application
relatively inexpensive
Screening and Diagnostic Tests Contrasted
   Screening test                          Diagnostic test
1. Done on apparently healthy              Done on those with indications
   persons                                 or who are sick
2. Applied to groups                       Applied to single patients
   (single disease considered)             (all diseases are considered)
3. Test results are arbitrary              Diagnosis is not final; it is
   and final                               the sum of all evidence
4. Based on one criterion or               Based on a number of symptoms,
   cut-off point (e.g. diabetes)           signs and laboratory findings
5. Less accurate                           More accurate
6. Less expensive                          More expensive
7. Not a basis for treatment               Used as a basis for treatment
• Screening is testing for infection or disease in
individuals or populations who are not seeking
health care.
• The basic purpose of screening is to sort out from
a large group of apparently healthy persons those
likely to have the disease or at increased risk of
the disease under study.
• A further aim is to bring those who are “apparently
abnormal” under medical supervision and treatment.
Uses of Screening
a) Case Detection: This is also known as “prescriptive
screening”. It is defined as the presumptive identification
of unrecognized disease, which does not arise from a
patient’s request, e.g. neonatal screening. People are
screened primarily for their own benefit.
b) Control of Disease: This is also known as prospective
screening. People are examined for the benefit of others,
e.g. screening of immigrants for infectious diseases
such as TB and syphilis to protect the home population.
The screening program may, by leading to early
diagnosis, permit more effective treatment and reduce
the spread of infectious disease and/or mortality from
the disease.
c) Research Purpose: There are many chronic diseases
whose natural history is not fully known ( e.g., cancer,
hypertension). Screening may aid in obtaining more basic
knowledge about the natural history of such diseases.
d) Educational Opportunities: Screening programs provide
opportunities for creating public awareness and educating
health professionals.
Validity (Accuracy)
• The term validity refers to the extent to which a test
accurately measures what it purports to measure.
• Validity expresses the ability of a test to separate or
distinguish those who have the disease from those
who do not.
• For example, glycosuria is a useful screening test
for diabetes, but a more valid or accurate test is
the glucose tolerance test.
• Accuracy refers to the closeness with which
measured values agree with “ true” values.
Two Components of Validity
Sensitivity and Specificity
• Both these components should be
considered when assessing the
accuracy of the diagnostic test.
• Expressed in percentage
• Both are usually determined by
applying the test to one group of persons
having the disease, and to a reference
group not having the disease.
Screening Test Results by Diagnosis
Screening test   Diagnosis                                   Total
results          Diseased              Not diseased
Positive         a (true positive)     b (false positive)    a+b
Negative         c (false negative)    d (true negative)     c+d
Total            a+c                   b+d                   a+b+c+d
“a” denotes those individuals found positive
on the test who have the condition or
disorder being studied (i.e., true positives).
“b” includes those who have the positive test
result but who do not have the disease (i.e.,
false positives).
“c” includes those with a negative test result
who have the disease (i.e., false negatives).
Finally, those with negative results who do
not have the disease are included in group
“d” (i.e. true negatives).
The following measures are used to evaluate a screening test:
• Sensitivity = a/(a+c) x 100
• Specificity = d/(b+d) x 100
• Predictive Value of a positive test = a/(a+b) x 100
• Predictive Value of a negative test = d/(c+d) x 100
• Percentage of false negatives = c/(a+c) x 100
• Percentage of false Positives = b/(b+d) x 100
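The six measures above can be sketched in Python (this code is an illustration, not part of the lecture). The cells `a`, `b`, `c`, `d` follow the 2x2 table already defined, and the function is exercised with the worked example used later in these notes (a = 40, b = 20, c = 100, d = 9,840):

```python
def screening_measures(a, b, c, d):
    """Screening-test measures from a 2x2 table.

    a = true positives, b = false positives,
    c = false negatives, d = true negatives.
    All results are percentages.
    """
    return {
        "sensitivity":        100 * a / (a + c),
        "specificity":        100 * d / (b + d),
        "ppv":                100 * a / (a + b),  # predictive value of a positive test
        "npv":                100 * d / (c + d),  # predictive value of a negative test
        "false_negative_pct": 100 * c / (a + c),
        "false_positive_pct": 100 * b / (b + d),
    }

# Worked example from these notes: a=40, b=20, c=100, d=9840
for name, value in screening_measures(40, 20, 100, 9840).items():
    print(f"{name}: {value:.2f}%")
```

Note that the false-negative and false-positive percentages are simply the complements of sensitivity and specificity, respectively.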
Sensitivity
• It is a statistical index of diagnostic
accuracy.
• Defined as the ability of a test to identify
correctly all those who have the disease,
i.e., the true positives.
• A 90 percent sensitivity means that 90
percent of the diseased people screened by
the test will give a “true positive” result and
the remaining 10 percent a “false negative”
result.
Specificity
• It is defined as the ability of a test to
identify correctly those who do not have the
disease, i.e., “ true negatives”.
• 90% specificity means that 90 percent of
the non-diseased persons will give a “true
negative” result; 10% of the non-diseased
persons screened by the test will be wrongly
classified as “diseased” when they are not.
Screening Test Results by Diagnosis
Screening test   Diagnosis                                   Total
results          Diseased              Not diseased
Positive         a (true positive)     b (false positive)    a+b
                 40                    20                    60
Negative         c (false negative)    d (true negative)     c+d
                 100                   9,840                 9,940
Total            a+c                   b+d                   a+b+c+d
                 140                   9,860                 10,000
a) Sensitivity (true positive)= 40/140 x 100 = 28.57%
b) Specificity (true negative) = 9840/9860 x 100 = 99.79%
c) False negative = 100/140 x 100 = 71.4 %
d) False positive = 20/9860 x 100 = 0.20 %
e) Predictive value of positive test = 40/60 x 100 = 66.67%
f) Predictive value of negative test
= 9840/9940 x 100 = 98.9%
The more prevalent a disease is in the population screened,
the higher the predictive value of a positive screening test.
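This relationship between prevalence and the predictive value of a positive test can be shown numerically (a sketch, not from the lecture; the 90% sensitivity and 95% specificity used here are illustrative assumptions):

```python
def ppv(sensitivity, specificity, prevalence):
    """Predictive value of a positive test, as a proportion.

    Derived from the 2x2 table: among those testing positive,
    the fraction who truly have the disease.
    """
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Same test (90% sensitive, 95% specific) at increasing prevalence:
for prev in (0.001, 0.01, 0.1, 0.5):
    print(f"prevalence {prev:>5.1%}: PPV = {ppv(0.90, 0.95, prev):.1%}")
```

With the test characteristics held fixed, PPV climbs steeply as prevalence rises, which is why a test that performs well in a high-risk clinic can yield mostly false positives when applied to the general population.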
Reliability of screening test
• It is also known as repeatability.
• Factors that contribute to variation between tests
are:
– Intra-subject variation
• Many human characteristics vary with the time of day, even
over a short period of time, and with the conditions under
which certain tests are conducted (e.g. postprandial blood
glucose).
– Inter-observer variation: variation between observers
– Overall percent agreement (OPA)
= (a+d)/(a+b+c+d) x 100
Kappa Statistics
• Suppose we want to know how well two observers read
x-rays: to what extent do their readings agree beyond
what we would expect by chance alone? One approach to
answering this question is to calculate the kappa statistic.
(%OA) – (%EA)
K = ------------------------------------
100 – (%EA)
Where, OA = observed agreement
EA = expected agreement by chance alone
First            Second observation                Total
observation      TB             NTB
TB               a = 200        b = 100            300
NTB              c = 100        d = 600            700
Total            300            700                1000
Overall percent agreement
OPA = (200 + 600) / (200 + 100 + 100 + 600) x 100
    = 800 / 1000 x 100 = 80%
Expected frequency (agreement by chance)
• Expected agreement for a cell = row total x column total / n:
• For cell ‘a’ = 300 x 300 / 1000 = 90
• For cell ‘b’ = 300 x 700 / 1000 = 210
• For cell ‘c’ = 700 x 300 / 1000 = 210
• For cell ‘d’ = 700 x 700 / 1000 = 490
Therefore the percent agreement expected by chance alone
(agreement cells ‘a’ and ‘d’) is:
(90 + 490) / 1000 x 100 = 58%
So the kappa statistic is:
K = (80 – 58) / (100 – 58) = 0.52
Kappa > 0.75 = excellent agreement beyond chance
Kappa < 0.40 = poor agreement
Kappa of 0.40 to 0.75 represents intermediate to good
agreement.
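The kappa calculation above can be sketched in Python (an illustration, not part of the lecture), using the x-ray agreement table from these notes (a = 200, b = 100, c = 100, d = 600):

```python
def kappa(a, b, c, d):
    """Cohen's kappa for a 2x2 agreement table.

    a, d = cells where the two observers agree;
    b, c = cells where they disagree.
    """
    n = a + b + c + d
    observed = (a + d) / n
    # Expected agreement by chance: row total x column total / n
    # for each agreement cell (a and d), summed and divided by n.
    expected = ((a + b) * (a + c) + (c + d) * (b + d)) / (n * n)
    return (observed - expected) / (1 - expected)

# X-ray agreement table from these notes:
print(f"kappa = {kappa(200, 100, 100, 600):.2f}")
```

Here the observed agreement is 0.80 and the chance-expected agreement 0.58, so kappa = (0.80 – 0.58) / (1 – 0.58) ≈ 0.52, i.e. intermediate-to-good agreement beyond chance.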