PARAMETRIC VS. NON-PARAMETRIC TESTS
RM
5th Semester
STATISTICAL TESTS
Statistical tests are used to decide whether a hypothesis
about the distribution of one or more populations
should be rejected or accepted.
Key words: Hypothesis testing
e.g., Ha (alternative hypothesis) and H0 (null hypothesis)
PARAMETRIC TESTS
Parametric tests are statistical tests that make certain assumptions about the data they are analyzing.
• they assume that the data follows a certain distribution (usually a normal distribution),
• they often require other conditions, such as similar variances across groups (variance tells us how much the values differ
from the group's average, or mean),
• the data is measured on an interval or ratio scale.
• These tests are used to compare groups or examine relationships between variables, and they're powerful if
the data meets the necessary assumptions (parameters).
Examples of parametric tests include the t-test (used to compare two groups) and ANOVA (analysis of variance,
used to compare more than two groups).
In short, parametric tests are useful when you can assume your data fits specific patterns, which allows for
more accurate and meaningful results.
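A minimal Python sketch, assuming SciPy is available and using made-up sample values, of how the two key assumptions (normality and similar variances) can be checked before choosing a parametric test:

```python
# Checking parametric-test assumptions: normality and equal variances.
# group_a and group_b are hypothetical example samples.
from scipy import stats

group_a = [72, 75, 78, 71, 74, 77, 73, 76]
group_b = [68, 70, 73, 69, 71, 74, 70, 72]

# Shapiro-Wilk test: null hypothesis is that the data are normally distributed.
_, p_norm_a = stats.shapiro(group_a)
_, p_norm_b = stats.shapiro(group_b)

# Levene's test: null hypothesis is that the two groups have equal variances.
_, p_var = stats.levene(group_a, group_b)

if p_norm_a > 0.05 and p_norm_b > 0.05 and p_var > 0.05:
    print("Assumptions look reasonable: a parametric test (e.g., t-test) may be used.")
else:
    print("Assumptions look violated: consider a non-parametric test instead.")
```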
NON-PARAMETRIC TESTS
Non-parametric tests are statistical tests that do not require the data to follow specific assumptions, like a
normal distribution.
• They are useful when the data doesn't meet the requirements for parametric tests, such as when data is not
normally distributed or when dealing with small sample sizes.
• They can be used with data that is measured in categories or ranks, which makes them more flexible.
• They are not as powerful as parametric tests when those assumptions are met, but they work well with ordinal,
categorical, or otherwise non-normally distributed data.
Examples of non-parametric tests include the Mann-Whitney U test (used to compare two groups), the
Kruskal-Wallis test (used to compare more than two groups), and the Chi-square test (used for categorical data).
In simple terms, non-parametric tests are good for analyzing data that doesn't fit neatly into specific patterns or
requirements.
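A short sketch, again with SciPy and invented data, showing how the three non-parametric tests named above are typically called:

```python
# Non-parametric tests in SciPy; the sample values below are illustrative only.
from scipy import stats

group_a = [3, 5, 4, 6, 7, 5]   # e.g., ranked satisfaction scores
group_b = [2, 4, 3, 5, 4, 3]
group_c = [6, 7, 8, 6, 7, 8]

# Mann-Whitney U test: compares two independent groups.
u_stat, p_u = stats.mannwhitneyu(group_a, group_b)

# Kruskal-Wallis test: compares more than two independent groups.
h_stat, p_h = stats.kruskal(group_a, group_b, group_c)

# Chi-square test of independence: works on counts in categories.
observed = [[30, 10],   # e.g., pass/fail counts for method 1
            [20, 20]]   # e.g., pass/fail counts for method 2
chi2, p_chi, dof, expected = stats.chi2_contingency(observed)

print(p_u, p_h, p_chi)
```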
EXAMPLES
PARAMETRIC TESTS
The t-test was first developed by William Sealy Gosset.
A t-test is a statistical test used to compare the averages (means) of two
groups to see if they are significantly different from each other.
For example, if you want to see if two classes of students scored
differently on a math test, you can use a t-test to check if the difference
in their average scores is likely real or just due to random chance.
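A quick sketch of that math-test scenario in Python (the scores are invented and SciPy is assumed to be available):

```python
# Comparing the average math scores of two classes with a t-test.
from scipy import stats

class_a = [78, 85, 69, 92, 74, 88, 81, 77]   # hypothetical scores
class_b = [72, 80, 65, 85, 70, 79, 74, 73]

t_stat, p_value = stats.ttest_ind(class_a, class_b)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
# A large p-value means the difference could easily be due to chance.
```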
SINGLE SAMPLE t-test
A single-sample t-test compares the mean of one sample against a known or hypothesized population mean.
How to solve (YouTube link)
FORMULA
\( t = \frac{\bar{x} - \mu}{s / \sqrt{n}} \), where \( \bar{x} \) is the sample mean, \( \mu \) is the hypothesized population mean, \( s \) is the sample standard deviation, and \( n \) is the sample size.
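A minimal sketch of the same calculation in Python (the scores list and the hypothesized mean of 70 are made up for illustration); SciPy's ttest_1samp applies the formula above:

```python
# One-sample t-test: does the sample mean differ from a hypothesized population mean?
from scipy import stats

scores = [72, 75, 78, 71, 74, 77, 73, 76]   # hypothetical sample
mu_0 = 70                                   # hypothesized population mean

t_stat, p_value = stats.ttest_1samp(scores, popmean=mu_0)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
# If p < 0.05, reject H0: the sample mean differs significantly from 70.
```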
TWO SAMPLE t-test
How to solve (YouTube link)
A two-sample t-test is used when you want to compare the means (averages) of two different groups to
determine if they are significantly different from each other. It is helpful when you have two independent
sets of data and you want to see if there is a real difference between them.
When to use a two-sample t-test:
1. Comparing Two Groups: You have two separate groups you want to compare, such as comparing test
scores of two classes or comparing the heights of boys and girls.
2. Independent Groups: The groups should be independent, meaning that the individuals in one group are
not related to the individuals in the other group.
3. Data Should Be Normally Distributed: The data should roughly follow a normal distribution, and the test
works best if the sample sizes are similar and the variances are not too different.
For example, if you want to know if a new teaching method leads to better test scores compared to a
traditional method, you can use a two-sample t-test to compare the average scores of students taught with
each method.
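A sketch of the teaching-method example with SciPy (scores are invented); passing equal_var=False uses Welch's version of the test, which is safer when the variances differ:

```python
# Two-sample (independent) t-test: new teaching method vs. traditional method.
from scipy import stats

new_method = [85, 88, 90, 84, 87, 89, 86]   # hypothetical test scores
traditional = [80, 82, 79, 83, 81, 78, 82]

# equal_var=False uses Welch's t-test, which does not assume equal variances.
t_stat, p_value = stats.ttest_ind(new_method, traditional, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
# A small p-value suggests the average scores of the two groups really differ.
```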
PAIRED t-test
The paired t-test is used when you want to compare the
means of two related groups, or the same group at two
different times (e.g., before and after treatment). It helps
determine if there is a significant difference between paired
observations.
Example: measurements taken before and after an
intervention, or measurements taken on the same subjects
under two different conditions. It is helpful when you want
to determine if a change or difference is statistically
significant.
How to solve (YouTube link)
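A small sketch with invented before/after measurements; SciPy's ttest_rel pairs each "before" value with its "after" value from the same subject:

```python
# Paired t-test: same subjects measured before and after an intervention.
from scipy import stats

before = [120, 130, 125, 140, 135, 128]   # hypothetical pre-treatment values
after  = [115, 126, 122, 133, 130, 124]   # post-treatment values, same subjects

t_stat, p_value = stats.ttest_rel(before, after)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
# If p < 0.05, the before/after difference is statistically significant.
```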
Z-TEST
When to Use a Z-Test:
One-Sample Z-Test: When you want to compare a
sample mean to a known population mean and the population
standard deviation is known.
Two-Sample Z-Test: When comparing the means of two
independent samples, and the population standard deviations
for both groups are known.
In practice, the z-test is often used when the sample size is
large (usually \(n > 30\)), which helps satisfy the assumption
of normality.
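A minimal sketch of a one-sample z-test computed directly from its definition (the sample, population mean, and population standard deviation below are all made up for illustration):

```python
# One-sample z-test, computed from its definition:
# z = (sample mean - population mean) / (population std / sqrt(n))
import math
from scipy.stats import norm

sample = [52, 49, 51, 53, 50, 48, 54, 51, 50, 52]   # hypothetical data
mu_0 = 50      # known population mean
sigma = 2.5    # known population standard deviation

n = len(sample)
x_bar = sum(sample) / n
z = (x_bar - mu_0) / (sigma / math.sqrt(n))

# Two-tailed p-value from the standard normal distribution.
p_value = 2 * (1 - norm.cdf(abs(z)))
print(f"z = {z:.3f}, p = {p_value:.3f}")
```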
PEARSON'S CORRELATION
Pearson's correlation coefficient (r) measures the strength and direction of the linear relationship between two continuous variables.
Range: The value of r can range from -1 to +1:
• r=+1: Perfect positive correlation (as one variable
increases, the other increases perfectly).
• r=−1: Perfect negative correlation (as one variable
increases, the other decreases perfectly).
• r=0: No correlation (no predictable relationship).
FORMULA
\( r = \frac{\sum_{i}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i}(x_i - \bar{x})^2}\,\sqrt{\sum_{i}(y_i - \bar{y})^2}} \), where \( \bar{x} \) and \( \bar{y} \) are the means of the two variables.
How to solve (YouTube tutorial)
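A short sketch with SciPy (the x and y values are invented) showing how r and its p-value are computed, consistent with the interpretation above:

```python
# Pearson's correlation coefficient between two continuous variables.
from scipy import stats

hours_studied = [1, 2, 3, 4, 5, 6, 7, 8]          # hypothetical predictor
exam_score    = [52, 55, 61, 64, 70, 72, 78, 83]  # hypothetical outcome

r, p_value = stats.pearsonr(hours_studied, exam_score)
print(f"r = {r:.3f}, p = {p_value:.3f}")
# r close to +1: strong positive linear relationship;
# r close to -1: strong negative one; r near 0: no linear relationship.
```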