
INTRODUCTION TO RESEARCH METHODOLOGY

Research is a careful, systematic study to discover new facts or understand a problem better. It involves collecting and
analyzing information to increase knowledge or find solutions. Simply put, research is a way of asking questions and
finding answers through investigation.

Hypothesis
A hypothesis is a simple idea or guess that tries to explain something based on what you already know. It’s like an
educated guess that you can test to see if it’s true or false. For example, “If I sleep more, then I will do better in school” is a
hypothesis you can check by observing or experimenting.

Research methodology
Research methodology is the organized way a researcher plans and carries out a study to answer questions or test ideas. It
explains what data will be collected, how it will be gathered, and how it will be analyzed to ensure reliable and valid
results. In short, it’s the overall approach and reasons behind the research methods chosen to solve a problem or explore a
topic.

Sources of Knowledge
The major ways we learn can be classified into experience, expert opinion, and reasoning.

Experience
The idea here is that knowledge comes from experience. Historically, this view was called empiricism (i.e., original
knowledge comes from experience). The term empirical means "based on observation, experiment, or
experience."

Expert Opinion
Because we don’t want to and don’t have time to conduct research on everything, people frequently rely on expert opinion
as they learn about the world. Note, however, that if we rely on an expert’s opinion, it is important to make sure that the
expert is an expert in the specific area under discussion, and to check whether the expert has a vested interest in the
issue.

Reasoning
Historically, this idea was called rationalism (i.e., original knowledge comes from thought and reasoning). There are two
main forms of reasoning:
• Deductive reasoning (i.e., the process of drawing a specific conclusion from a set of premises).
• Inductive reasoning (i.e., reasoning from the particular to the general).

The Scientific Approach to Knowledge Generation


Science is also an approach for the generation of knowledge. It relies on a mixture of empiricism (i.e., the collection of
data) and rationalism (i.e., the use of reasoning and theory construction and testing).

Dynamics of science.
Science has many distinguishing characteristics:
• Science is progressive.
• Science is rational.
• Science is creative.
• Science is dynamic.
• Science is open.
• Science is "critical."
• Science is never-ending.

Scientific Methods
There are many scientific methods. The two major methods are the inductive method and the deductive method.
• The deductive method involves the following three steps:
1. State the hypothesis (based on theory or research literature).
2. Collect data to test the hypothesis.
3. Make a decision to accept or reject the hypothesis.

• The inductive method. This approach also involves three steps:


1. Observe the world.
2. Search for a pattern in what is observed.
3. Make a generalization about what is occurring.

Virtually any application of science includes both the deductive and the inductive approaches to the scientific
method, either in a single study or over time. The inductive method is a “bottom-up” method that is especially
useful for generating theories and hypotheses; the deductive method is a “top-down” method that is especially
useful for testing theories and hypotheses.
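As a toy illustration, the two methods can be sketched in Python. The observations and the "evenness" rule below are purely hypothetical stand-ins for real data and a real theory:

```python
# Hypothetical observations standing in for data collected from the world.
observations = [2, 4, 6, 8, 10]

# Inductive ("bottom-up"): observe, search for a pattern, then generalize.
pattern_holds = all(x % 2 == 0 for x in observations)
generalization = "all observed values are even" if pattern_holds else "no simple pattern"

# Deductive ("top-down"): start from a hypothesis, derive a prediction, test it.
hypothesis = lambda x: x % 2 == 0   # hypothetical rule: "every value is even"
new_datum = 12                       # newly collected (hypothetical) data point
prediction_confirmed = hypothesis(new_datum)

print(generalization)        # the inductive generalization
print(prediction_confirmed)  # True: the deduction survives this test
```

The inductive half produces the rule; the deductive half tests it against new data, mirroring the three steps of each method above.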

Theory
The word "theory" most simply means "explanation." Theories explain "How" and "Why" something operates as it does.
Some theories are highly developed and encompass a large terrain (i.e., "big" theories or "grand" theories).

A theory is a well-organized set of ideas or principles that explains why or how something happens. It helps us understand,
predict, and make sense of a phenomenon based on evidence and logical reasoning. So, a theory is like a strong, tested explanation
that connects facts and guides more research.
Example of some famous theories
Here are some famous theory examples:

1. Theory of Gravity – explains why things fall to the ground.
2. Theory of Evolution – explains how species change over time.
3. Germ Theory – shows that tiny germs cause diseases.
4. Theory of Relativity – explains how space and time work, by Einstein.

The Principle of Evidence


According to the principle of evidence, what is gained in empirical research is evidence, NOT proof. This means that
knowledge based on research is ultimately tentative. Empirical research provides evidence; it does not provide proof. Note
also that evidence increases when a finding has been replicated. Hence, we should NOT draw firm conclusions from a
single research study.

Objectives of Research
There are five major objectives of research.
1. Exploration. This is done when we are trying to generate ideas about something.
2. Description. This is done when we want to describe the characteristics of something or some phenomenon.
3. Explanation. This is done when we want to show how and why a phenomenon operates as it does. If we are interested in
causality, we are usually interested in explanation.
4. Prediction. This is our objective when our primary interest is in making accurate predictions. Note that the advanced
sciences make much more accurate predictions than the newer social and behavioral sciences.
5. Influence. This objective is a little different. It involves the application of research results to impact the world. A
demonstration program is an example of this.

One convenient and useful way to classify research is into exploratory research,
descriptive research, explanatory research, predictive research, and demonstration
research.

GENERAL METHODOLOGY OF RESEARCH

QUICK OVERVIEW OF THE THREE APPROACHES

1. Qualitative Research

Explores ideas, opinions, and experiences

Uses interviews, observations, or open-ended surveys

Example: Studying how students feel about online learning through interviews

2. Quantitative Research

Collects and analyzes numerical data


Uses surveys with fixed answers, experiments, statistics

Example: Measuring test scores to see if a new teaching method works

3. Mixed Methods

Combines both qualitative and quantitative approaches

Uses both interviews and statistical data for a fuller picture

Example: Surveying student performance (numbers) and interviewing them on their experience

Qualitative research, broadly defined, means "any kind of research that produces findings not arrived at by
means of statistical procedures or other means of quantification". Quantitative researchers seek causal
determination, prediction, and generalization of findings. This means quantitative researchers want to
find out what causes something to happen (causal determination), predict future outcomes
based on data (prediction), and apply their findings to broader groups beyond just their
study sample (generalization). Basically, they look for clear patterns and rules in numbers
that help explain and forecast real-world events.

Qualitative researchers seek instead illumination, understanding, and extrapolation to similar situations. Qualitative
analysis results in a different type of knowledge than does quantitative inquiry: qualitative
researchers focus on deep understanding and insight rather than just numbers.
They want to illuminate the meaning behind behaviours and experiences and explain them
clearly.

There is a kind of continuum that moves from the fictional that is "true"—the novel for example—to the highly controlled and
quantitatively described scientific experiment. Work at either end of this continuum has the capacity to inform
significantly. Qualitative research and evaluation are located toward the fictive end of the continuum without being
fictional in the narrow sense of the term.

Cronbach (1980) claims that statistical research is not able to take full account of the many interaction effects that take
place in social settings. He gives examples of several empirical "laws" that do not hold true in actual settings to illustrate
this point. Cronbach (1980) states that "the time has come to exorcise the null hypothesis," because it ignores effects that
may be important, but that are not statistically significant. Qualitative inquiry accepts the complex and dynamic quality of
the social world.

However, it is not necessary to pit these two paradigms against one another in a competing stance. Patton (1990)
advocates a "paradigm of choices" that seeks "methodological appropriateness as the primary criterion for judging
methodological quality." This allows for a "situational responsiveness" that strict adherence to one paradigm or another
will not. Furthermore, some researchers believe that qualitative and quantitative research can be effectively combined in
the same research project. For example, by using both quantitative and qualitative data, a study of technology-based
materials for the elementary classroom gave insights that neither type of analysis could provide alone. Some differences
between the two can be summarized as follows:

Qualitative Research                          Quantitative Research

 Phenomenological                             Positivistic
 Inductive                                    Deducto-hypothetico verificative
 Holistic                                     Particularistic
 Subjective/insider centered (emic)           Objective/outsider centered (etic)
 Process oriented                             Outcome oriented
 Anthropological worldview                    Natural science worldview
 Relative lack of control                     Attempt to control variables
 Goal: understanding actor’s view             Goal: finding facts & causes
 Dynamic reality assumed; slice of life       Static reality assumed; relative constancy in life
 Discovery oriented                           Verification oriented
 Explanatory                                  Confirmatory

Strengths and Weaknesses of each Approach

The Quantitative Approach

Strengths

Precision and control. Control is achieved through the sampling and design, and precision through quantitative and
reliable measurement.

Experimentation. This leads to statements about causation, since systematic manipulation of a variable can be shown to
have a direct causal effect on another when other variables have been eliminated or controlled.

Hypotheses. Hypotheses are tested through a deductive approach.

Statistical analyses. Since the data are quantitative, they permit statistical analyses.

In total, this approach provides answers which have a much firmer basis than the layperson’s common sense or intuition
or opinion.

Weaknesses

Inability to cope with the complexity of human beings. Human beings are far more complex than the inert matter that
is studied in the physical sciences. This arises because the human is not only acted on by a plethora of environmental forces,
but can also interpret and respond to these forces in an active way.

Inability to predict multiple responses. A quantitative researcher cannot predict, for example, how a particular child
will respond as the varying perceptions and interpretations of the environment subtly alter the responses.

Ignorance of human individuality. The scientific quantitative approach denigrates human individuality and ability to
think. Its mechanistic ethos tends to exclude notions of freedom, choice and moral responsibility.

Assumption that facts are true and the same for all people all the time. This approach fails to take account of people’s
unique ability to interpret their experiences, construct their own meanings and act on these.

Production of trivial findings. The restriction and controlling of variables produces an artificial
situation, the results of which may have no bearing on real life.

The Qualitative Approach

Strengths

Close association with both participants and activities within the settings. This allows the researcher to see and
document the qualities of interaction too often missed by the scientific, more positivistic inquiries.

Insider’s view of the field. This can reveal subtleties and complexities that can go undetected through the use of more
standardized measures.
Important role of suggesting possible relationships, causes, effects, and dynamic processes. This is due to the
qualitative descriptions that can highlight subtleties in people’s behaviour and responses.

In-depth information. By showing the determinate limits of accepted procedures, qualitative methods allow people to
see that things could be other than they are.

Weaknesses

Problem of validity and reliability. Because of the subjective nature of qualitative data and its origin in single contexts, it
is difficult to apply conventional standards of reliability and validity.

Much time required for data collection, analysis, and interpretation. There is a critical need for the researcher to
spend a considerable amount of time in order to examine, holistically and aggregately, the interactions, reactions, and
activities.

Reactive effects on subjects. These are due to the intimacy of participant-observer relationships within the setting.

Possible bias. This is also due to the intimacy of participant-observer relationships within the setting.

No generalization. Contexts, situations, events, conditions, and interactions cannot be replicated to any extent nor can
generalizations be made to a wider context than the one studied with any confidence.

Hypotheses

Experimental research methods revolve around hypotheses (educated guesses). We typically start with a hypothesis about
how the results will turn out, i.e., that there is an effect and that it is due to the independent variable. This first hypothesis is the
research hypothesis.

Then we hold the possibility that there is no effect of the independent variable on the dependent variable or that the
differences observed are due to chance only. This second hypothesis is the null hypothesis. The first step in experimental
research, then, is ruling out chance. Put another way, we set up an experimental design that will allow us to reject the null
hypothesis. If we can confidently reject the null hypothesis, then we gain confidence in the research hypothesis.

Types of Hypotheses

1. Research hypothesis states that results are due to the independent variable (IV).

2. Null hypothesis states that differences are due to chance or that there are no differences between treatments (used in
statistical analysis).
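As a hedged sketch, ruling out chance can be illustrated with an exact binomial test. The coin-flip experiment, the counts, and the 0.05 significance level below are hypothetical choices, not taken from the text:

```python
from math import comb

# Hypothetical experiment: 20 coin flips.
# Research hypothesis: the coin is biased toward heads (an effect exists).
# Null hypothesis: heads and tails are equally likely (differences are chance).
n, heads = 20, 16
p_null = 0.5

# Probability, under the null, of a result at least this extreme (16+ heads).
p_value = sum(comb(n, k) * p_null**k * (1 - p_null)**(n - k)
              for k in range(heads, n + 1))

alpha = 0.05                 # conventional (hypothetical) significance level
print(round(p_value, 4))     # 0.0059
print(p_value < alpha)       # True: we reject the null hypothesis
```

Because the p-value is well below alpha, chance is an implausible explanation, so rejecting the null hypothesis increases our confidence in the research hypothesis, exactly the logic described above.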

Differences between Experimental and Quasi-Experimental Research

Thus far, we have explained that for experimental research we need:

 a hypothesis for a causal relationship;


 a control group and a treatment group;
 to eliminate confounding variables that might mess up the experiment and prevent displaying the causal
relationship; and
 to have larger groups with a carefully sorted constituency, preferably randomized, in order to keep accidental
differences from fouling things up.

But what if we don't have all of those? Do we still have an experiment? Not a true experiment in the strictest scientific
sense of the term, but we can have a quasi-experiment, an attempt to uncover a causal relationship, even though the
researcher cannot control all the factors that might affect the outcome.

A quasi-experimenter treats a given situation as an experiment even though it is not wholly by design. The independent
variable may not be manipulated by the researcher, treatment and control groups may not be randomized or matched, or
there may be no control group. The researcher is limited in what he or she can say conclusively.

The significant element of both experiments and quasi-experiments is the measure of the dependent variable, which
allows for comparison. Some data are quite straightforward, but other measures, such as level of self-confidence in writing
ability or increase in creativity or reading comprehension, are inescapably subjective. In such cases, quasi-experimentation
often involves a number of strategies to compare subjectivity, such as rating data, testing, and surveying.

Rating essentially is developing a rating scale to evaluate data. In testing, experimenters and quasi-experimenters use
ANOVA (Analysis of Variance) and ANCOVA (Analysis of Co-Variance) tests to measure differences between control and
experimental groups, as well as different correlations between groups.

Since we're mentioning the subject of statistics, note that experimental or quasi-experimental research cannot state
beyond a shadow of a doubt that a single cause will always produce any one effect. It can do no more than show a
probability that one thing causes another. The probability that a result is due to random chance is an important
measure of statistical analysis in experimental research.

SURVEY RESEARCH

The aim of survey research is to measure certain attitudes and/or behaviours of a population or a sample. The attitudes
might be opinions about a political candidate or feelings about certain issues or practices.

Survey research focuses on naturally occurring phenomena. Rather than manipulating phenomena, survey research
attempts to influence the attitudes and behaviours it measures as little as possible. Most often, respondents are asked for
information.

Survey research is primarily quantitative, but qualitative methods are sometimes used, too.

Once in a while, a researcher may be able to gather data from all members of a population. For example, if we want to
know what a neighbourhood thinks about a local land use issue, we may be able to measure all residents of the
neighbourhood if it is not too big. However, most of the time, the population is so large that researchers must sample only
a part of the population and make conclusions about the population based on the sample. Because of this, gaining a
representative sample is crucial in survey research.

Possible sources of bias in survey research

1. Demand characteristics. Respondents tend to say what they think the researcher wants to hear.
2. Acquiescence. Respondents tend to say "yes" more easily than "no."
3. Reactivity. Thinking about the questions tends to change respondents' opinions. For example, we may not have
thought much about environmental damage until a survey asks for our opinions on rainforest depletion.
4. Response Bias. Some people tend to answer more positively or in more extreme terms. If there is a consistent
tendency for one group to give more extreme responses and a consistent tendency for another group to give more
middle-of-the-road responses, we might mistakenly conclude they have different opinions. In fact, we may only be
observing a bias in their response tendencies.

EX POST FACTO RESEARCH

Descriptive designs aim to gain more information about a particular characteristic within a particular field of
study. A descriptive study may be used to develop theory, identify problems with current practice, justify current practice,
make judgements, or identify what others in similar situations may be doing. There is no manipulation of variables and no
attempt to establish causality.

Descriptive designs are also known as ex post facto studies. This literally means "from after the fact". The term is used to
indicate that the research in question has been conducted after the variations in the independent variable have occurred
naturally.

Two kinds of design may be identified in ex post facto research – the correlational study and the criterion group
(comparative) study. The basic purpose of the correlational study is to determine the relationship between variables.
However, the significant difference from experimental and quasi-experimental designs is that causality cannot be
established, due to the lack of manipulation of independent variables. Correlation does not prove causation. Examples
include many studies of lung cancer. The researcher begins with a sample of those who have already developed the disease
and a sample of those who have not. The researcher then looks for differences between the two groups in antecedents,
behaviours or conditions such as smoking habits.

In the criterion group (comparative) study, the researcher sets out to discover possible causes for a phenomenon being
studied by comparing the subjects in which the variable is present with similar subjects in whom it was absent. If, for
example, a researcher chooses a design to investigate factors contributing to teacher effectiveness, the criterion group,
the effective teachers, and its counterpart, a group not showing the characteristics of the criterion group, are identified by
measuring the differential effects of the groups on classes of children. The researcher may then examine some variable or
event, i.e. the training, skills and personality of the groups to discover what might ‘cause’ only some teachers to be
effective.

Types of correlational research designs

Correlational research designs are founded on the assumption that reality is best described as a network of interacting and
mutually-causal relationships. Everything affects--and is affected by--everything else. This web of relationships is not
linear, as it is assumed to be in experimental research.

Thus, the dynamics of a system--how each part of the whole system affects each other part--are more important than
causality. As a rule, correlational designs do not indicate causality. However, some correlational designs, such as path
analysis and cross-lagged panel designs, do permit causal statements. Correlational research is quantitative.

Bivariate correlation

The relationship between two variables is measured. The relationship has a degree and a direction.

The degree of relationship (how closely they are related) is usually expressed as a number between -1 and +1, the so-
called correlation coefficient. A zero correlation indicates no relationship. As the correlation coefficient moves toward
either -1 or +1, the relationship gets stronger until there is a "perfect correlation" at either extreme.

The direction of the relationship is indicated by the "-" and "+" signs. A negative correlation means that as scores on one
variable rise, scores on the other decrease. A positive correlation indicates that the scores move together, both increasing
or both decreasing.

A student's grade and the amount of studying done, for example, are generally positively correlated. Stress and health, on
the other hand, are generally negatively correlated.
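A minimal sketch of computing a correlation coefficient from scratch; the hours-studied and grade figures below are hypothetical illustrations of the positive relationship just described:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))   # co-movement
    sx = sqrt(sum((x - mx) ** 2 for x in xs))                # spread of x
    sy = sqrt(sum((y - my) ** 2 for y in ys))                # spread of y
    return cov / (sx * sy)   # always lands between -1 and +1

# Hypothetical data: hours studied vs. grade.
hours  = [1, 2, 3, 4, 5]
grades = [52, 60, 65, 72, 80]
print(round(pearson_r(hours, grades), 3))   # 0.997: strong positive correlation
```

The sign gives the direction (here positive: the scores move together) and the magnitude gives the degree, approaching +1 as the relationship nears a "perfect correlation".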

Regression and prediction

If there is a correlation between two variables, and we know the score on one, the second score can be predicted.
Regression refers to how well we can make this prediction. As the correlation coefficient approaches either -1 or +1, our
predictions get better. For example, there is a relationship between stress and health: if we know my stress score, we can
predict my future health status score.
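The stress-and-health prediction can be sketched with a least-squares regression line; the stress and health scores below are hypothetical numbers chosen to show the negative relationship:

```python
def fit_line(xs, ys):
    """Least-squares slope and intercept for predicting y from x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical data: stress score vs. health status score.
stress = [1, 3, 5, 7, 9]
health = [90, 80, 72, 60, 53]
slope, intercept = fit_line(stress, health)
print(round(slope, 2), round(intercept, 2))   # -4.7 94.5 (negative slope)

# Prediction: expected health status for a new stress score of 4.
print(round(intercept + slope * 4, 1))        # 75.7
```

The negative slope mirrors the negative stress-health correlation above, and the quality of such predictions improves as the correlation strengthens.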

Multiple regression

This extends regression and prediction by adding several more variables. The combination gives us more power to make
accurate predictions. What we are trying to predict is called the criterion variable. What we use to make the prediction,
the known variables, are called predictor variables.

If we know not only our stress score, but also a health behaviour score (how well we take care of ourselves) and how our
health has been in the past (whether we are generally healthy or ill), we can more closely predict our health status. Thus,
there are three predictors--stress, health behaviour, and previous health status--and one criterion--future health.
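A minimal sketch of multiple regression via the normal equations in plain Python, with no statistics library. Two hypothetical predictors (stress and health behaviour) stand in for the three named above, the synthetic data are invented, and a real study would use dedicated statistical tooling:

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):   # back-substitution
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def multiple_regression(rows, y):
    """Least-squares coefficients [b0, b1, ...] via the normal equations."""
    X = [[1.0] + list(r) for r in rows]   # prepend an intercept column
    p = len(X[0])
    XtX = [[sum(X[i][a] * X[i][b] for i in range(len(X))) for b in range(p)]
           for a in range(p)]
    Xty = [sum(X[i][a] * y[i] for i in range(len(X))) for a in range(p)]
    return solve(XtX, Xty)

# Hypothetical (stress, health-behaviour) predictor pairs; criterion: health.
predictors = [(1, 2), (2, 1), (3, 4), (4, 3), (5, 6), (6, 5)]
health = [1 - 2 * s + 3 * h for s, h in predictors]   # exact synthetic relation
b0, b1, b2 = multiple_regression(predictors, health)
print(round(b0, 3), round(b1, 3), round(b2, 3))   # recovers 1, -2, 3
```

Adding the second predictor variable is what gives the extra predictive power described above: the fitted coefficients weight each predictor's contribution to the criterion.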

Types of comparative research designs

The basic design of comparative studies is similar to an experimentally designed study. The chief difference resides in the
nature of the independent variable, X. In a truly experimental situation, this will be under the control of the investigator
and may therefore be described as manipulable. In the comparative model, however, the independent variable is beyond
the researcher’s control, having already occurred.

SAMPLING
IN QUANTITATIVE AND QUALITATIVE RESEARCH

Sampling refers to drawing a sample (a subset) from a population (the full set). The usual goal in sampling is to produce
a representative sample (i.e., a sample that is similar to the population on all characteristics, except that it includes fewer
people because it is a sample rather than the complete population). Metaphorically, a perfect representative sample would
be a "mirror image" of the population from which it was selected (again, except that it would include fewer people).

Terminology Used in Sampling

Here are some important terms used in sampling:


 A sample is a set of elements taken from a larger population.
 The sample is a subset of the population, which is the full set of elements or people or whatever we are sampling.
 A statistic is a numerical characteristic of a sample, but a parameter is a numerical characteristic of a population.
 Sampling error refers to the difference between the value of a sample statistic, such as the sample mean, and the
true value of the population parameter, such as the population mean. Note: some error is always present in
sampling. With random sampling methods, the error is random rather than systematic.
 The response rate is the percentage of people in the sample selected for the study who actually participate in the
study.
 A sampling frame is just a list of all the people that are in the population. Here is an example of a sampling frame
(a list of all the names in my population, and they are numbered). Note that the following sampling frame also has
information on age and gender included in case we want to draw some samples and do some calculations.
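The statistic/parameter/sampling-error terms can be illustrated on a small hypothetical population of ages (the size, age range, and seed are arbitrary choices):

```python
import random

random.seed(7)   # fixed seed so the illustration is reproducible

# Hypothetical population of 1,000 ages: the full set of elements.
population = [random.randint(18, 65) for _ in range(1000)]
parameter = sum(population) / len(population)   # population mean (a parameter)

sample = random.sample(population, 100)         # a sample: subset of the population
statistic = sum(sample) / len(sample)           # sample mean (a statistic)

# Sampling error: the statistic rarely equals the parameter exactly.
sampling_error = statistic - parameter
print(round(abs(sampling_error), 2))
```

Rerunning with different seeds gives different (random, not systematic) errors, which is the point of the note above: some error is always present in sampling.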

Random Sampling Techniques

The two major types of sampling in quantitative research are random sampling and nonrandom sampling.
 The former produces representative samples.
 The latter does not produce representative samples.

Simple Random Sampling

The first type of random sampling is called simple random sampling.


 It's the most basic type of random sampling.
 It is an equal probability sampling method (abbreviated EPSEM).
 Remember that EPSEM means "everyone in the sampling frame has an equal chance of being in the final sample."
 We should understand that using an EPSEM is important because that is what produces "representative" samples
(i.e., samples that represent the populations from which they were selected).

Simple random sampling is not the only equal probability sampling method (EPSEM), but it is the most basic and best
known. How do we draw a simple random sample? One way is to put all the names from the population into a hat and
then select a subset (e.g., pull out 100 names from the hat). Sampling experts recommend random sampling "without
replacement" rather than random sampling "with replacement" because the former is a little more efficient in producing
representative samples (i.e., it requires slightly fewer people and is therefore a little cheaper).
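The names-in-a-hat procedure is what `random.sample` does. The sampling frame below is a hypothetical numbered list of 500 people:

```python
import random

# Hypothetical sampling frame: a numbered list of everyone in the population.
frame = [f"person_{i}" for i in range(1, 501)]

# random.sample draws WITHOUT replacement, so no one can be picked twice,
# and every element in the frame has an equal chance of selection (EPSEM).
simple_random_sample = random.sample(frame, 100)

print(len(simple_random_sample))        # 100
print(len(set(simple_random_sample)))   # 100: no duplicates, i.e. no replacement
```

Sampling "with replacement" would instead use repeated `random.choice` calls, which can pick the same person more than once.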

Systematic Sampling

Systematic sampling is the second type of random sampling.


 It is an equal probability sampling method (EPSEM).
 Remember simple random sampling was also an EPSEM.

Systematic sampling involves three steps:


 First, determine the sampling interval, which is symbolized by "k," (it is the population size divided by the desired
sample size).
 Second, randomly select a number between 1 and k, and include that person in our sample.
 Third, also include each kth element in our sample. For example, if k is 10 and the randomly selected number
between 1 and 10 was 5, then we will select persons 5, 15, 25, 35, 45, etc.

When we get to the end of our sampling frame we will have all the people to be included in our sample. One potential (but
rarely occurring) problem is called periodicity (i.e., there is a cyclical pattern in the sampling frame). It could occur when
we attach several ordered lists to one another (e.g., if we take lists from multiple teachers who have all ordered their lists
on some variable such as IQ). On the other hand, stratification within one overall list is not a problem at all (e.g., if we have
one list and have it ordered by gender or by IQ). Basically, if we are attaching multiple lists to one another, there could be a
problem. It would be better to reorganize the lists into one overall list (i.e., sampling frame).
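The three systematic-sampling steps can be sketched as a small function; the frame of 100 people and the desired sample size of 10 are hypothetical:

```python
import random

def systematic_sample(frame, sample_size):
    """Every kth element after a random start between 1 and k (1-based)."""
    k = len(frame) // sample_size        # step 1: the sampling interval k
    start = random.randint(1, k)         # step 2: random start in 1..k
    # Step 3: positions start, start+k, start+2k, ... through the frame.
    return [frame[i - 1] for i in range(start, len(frame) + 1, k)][:sample_size]

frame = list(range(1, 101))              # hypothetical frame of 100 people
sample = systematic_sample(frame, 10)
print(sample)   # e.g. persons 5, 15, 25, ... if the random start is 5
```

Because every position in the frame ends up in exactly one of the k possible samples, each person has an equal 1-in-k chance of selection, which is what makes this an EPSEM.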
Stratified Random Sampling

The third type of random sampling is called stratified random sampling. It involves two steps:


 First, stratify our sampling frame (e.g., divide it into the males and the females if we are using gender as our
stratification variable).
 Second, take a random sample from each group (i.e., take a random sample of males and a random sample of
females). Put these two sets of people together and we now have our final sample. (Note that we could also take a
systematic sample from the joined lists if that’s easier.)

There are actually two different types of stratified sampling.
 The first type of stratified sampling, and most common, is called proportional stratified sampling. In proportional
stratified sampling we must make sure the subsamples (e.g., the samples of males and females) are proportional
to their sizes in the population. Note that proportional stratified sampling is an equal probability sampling method
(i.e., it is EPSEM).
 The second type of stratified sampling is called disproportional stratified sampling. In disproportional stratified
sampling, the subsamples are not proportional to their sizes in the population.

Here is an example showing the difference between proportional and disproportional stratified sampling:
 Assume that our population is 75% female and 25% male. Assume also that we want a sample of size 100 and we
want to stratify on the variable called gender.
 For proportional stratified sampling, we would randomly select 75 females and 25 males from the population.
 For disproportional stratified sampling, we might randomly select 50 females and 50 males from the population.
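The 75/25 example above can be sketched directly; the population lists and seed below are hypothetical:

```python
import random

random.seed(11)   # fixed seed for a reproducible illustration

# Hypothetical population: 75% female, 25% male, as in the example above.
females = [f"F{i}" for i in range(750)]
males = [f"M{i}" for i in range(250)]

# Proportional stratified sampling: subsample sizes match population shares,
# so the overall design remains an EPSEM.
prop_sample = random.sample(females, 75) + random.sample(males, 25)

# Disproportional stratified sampling: shares deliberately do not match.
disprop_sample = random.sample(females, 50) + random.sample(males, 50)

print(len(prop_sample), len(disprop_sample))   # 100 100
```

Both designs guarantee each stratum is represented; only the proportional version preserves each person's equal chance of selection.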

Cluster Random Sampling

In this type of sampling we randomly select clusters rather than individual units in the first stage of sampling. A
cluster has more than one unit in it (e.g., a school, a classroom, a team). We discuss two types of cluster sampling in the
chapter, one-stage and two-stage (note that more stages are possible in multistage sampling).

The first type of cluster sampling is called one-stage cluster sampling.


 To select a one-stage cluster sample, we first select a random sample of clusters.
 Then we include in our final sample all of the individual units that are in the selected clusters.

The second type of cluster sampling is called two-stage cluster sampling.


 In the first stage we take a random sample of clusters (i.e., just like we did in one-stage cluster sampling).
 In the second stage, we take a random sample of elements from each of the clusters we selected in stage one (e.g.,
in stage two we might randomly select 10 students from each of the 15 classrooms we selected in stage one).
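
The two stages can be sketched in a few lines of Python. The classrooms-of-students frame and the function name are hypothetical illustrations based on the example above, not the chapter's own code:

```python
import random

def two_stage_cluster_sample(clusters, n_clusters, n_per_cluster):
    """Two-stage cluster sampling sketch.

    Stage 1: randomly select n_clusters clusters (e.g., classrooms).
    Stage 2: randomly select n_per_cluster elements from each selected
    cluster (e.g., students within those classrooms).
    """
    stage_one = random.sample(clusters, n_clusters)           # stage 1: clusters
    sample = []
    for cluster in stage_one:
        sample.extend(random.sample(cluster, n_per_cluster))  # stage 2: elements
    return sample

# Hypothetical frame: 15 classrooms of 30 students each.
classrooms = [[f"class{c}-student{s}" for s in range(30)] for c in range(15)]
students = two_stage_cluster_sample(classrooms, n_clusters=5, n_per_cluster=10)
# Final sample: 10 students from each of 5 randomly chosen classrooms (n = 50).
```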

Important points about cluster sampling:


 Cluster sampling is an equal probability sampling method (EPSEM) ONLY if the clusters are approximately the
same size. (Remember that EPSEM is very important because that is what produces representative samples.)
 When clusters are not the same size, we must fix the problem by using the technique called "probability
proportional to size" (PPS) for selecting our clusters in stage one. This will make our cluster sampling an equal
probability sampling method (EPSEM), and it will, therefore, produce representative samples.

Nonrandom Sampling Techniques

The other major type of sampling used in quantitative research is nonrandom sampling (i.e., when we do not use one of the
random sampling techniques). There are four main types of nonrandom sampling:
 The first type of nonrandom sampling is called convenience sampling (i.e., it simply involves using the people who
are the most available or the most easily selected to be in our research study).
 The second type of nonrandom sampling is called quota sampling (i.e., it involves setting quotas and then using
convenience sampling to obtain those quotas). A set of quotas might be given to us as follows: find 25 African
American males, 25 European American males, 25 African American females, and 25 European American females.
We use convenience sampling to actually find the people, but we must make sure we have the right number of
people for each quota.
 The third type of nonrandom sampling is called purposive sampling (i.e., the researcher specifies the
characteristics of the population of interest and then locates individuals who match those characteristics). For
example, we might decide that we want to only include "boys who are in the 7th grade and have been diagnosed
with ADHD" in our research study. We would then try to find 50 students who meet our "inclusion criteria" and
include them in our research study.
 The fourth type of nonrandom sampling is called snowball sampling (i.e., each research participant is asked to
identify other potential research participants who have a certain characteristic). We start with one or a few
participants, ask them to identify others, locate those people, ask them in turn, and continue until we have a sufficient
sample size. This technique might be used for a hard-to-find population (e.g., where no sampling frame exists). For example, we
might want to use snowball sampling if we wanted to do a study of people in our city who have a lot of power in
the area of educational policy making (in addition to the already known positions of power, such as the school
board and the school system superintendent).

Determining the Sample Size When Random Sampling is Used

Here are some guidelines for answering the question "How big should my sample be?"
 Try to get as big a sample as we can for our study (the bigger the sample, the better).
 If our population is size 100 or less, then include the whole population rather than taking a sample.
 Look at other studies in the research literature and see what sample sizes they used.
 For an exact number, look at the table at the end of this handout, which shows recommended sample sizes.

In particular, note that we will need larger samples under these circumstances:
 When the population is very heterogeneous.
 When we want to break down the data into multiple categories.
 When we want a relatively narrow confidence interval (e.g., note that the estimate that 75% of teachers support a
policy plus or minus 4% is narrower than the estimate of 75% plus or minus 5%).
 When we expect a weak relationship or a small effect.
 When we use a less efficient technique of random sampling (e.g., cluster sampling is less efficient than
proportional stratified sampling).
 When we expect to have a low response rate. The response rate is the percentage of people in our sample who
agree to be in our study.

Sample sizes for various populations of size 10 to 500 million.

N stands for the size of the population. n stands for the size of the recommended sample. The sample sizes are based on the
95 percent confidence level.

N n N n N n N n N n

10 10 110 86 300 169 950 274 4,500 354


15 14 120 92 320 175 1,000 278 5,000 357
20 19 130 97 340 181 1,100 285 6,000 361
25 24 140 103 360 186 1,200 291 7,000 364
30 28 150 108 380 191 1,300 297 8,000 367
35 32 160 113 400 196 1,400 302 9,000 368
40 36 170 118 420 201 1,500 306 10,000 370
45 40 180 123 440 205 1,600 310 15,000 375
50 44 190 127 460 210 1,700 313 20,000 377
55 48 200 132 480 214 1,800 317 30,000 379
60 52 210 136 500 217 1,900 320 40,000 380
65 56 220 140 550 226 2,000 322 50,000 381
70 59 230 144 600 234 2,200 327 75,000 382
75 63 240 148 650 242 2,400 331 100,000 384
80 66 250 152 700 248 2,600 335 250,000 384
85 70 260 155 750 254 2,800 338 500,000 384
90 73 270 159 800 260 3,000 341 1,000,000 384
95 76 280 162 850 265 3,500 346 10,000,000 384
100 80 290 165 900 269 4,000 351 500,000,000 384

Krejcie, R. V., & Morgan, D. W. (1970). Determining sample size for research activities. Educational and Psychological
Measurement, 30, 607-610.
(In Isaac, S., & Michael, W. B. (1981). Handbook in Research and Evaluation. San Diego: EdITS Publishers.)
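
The tabled values follow Krejcie and Morgan's formula, s = X²NP(1−P) / (d²(N−1) + X²P(1−P)), with X² = 3.841 (chi-square for 1 df at 95% confidence), P = 0.5 (maximum variability), and d = 0.05 (5% margin of error). A sketch of that calculation, which reproduces the tabled values for, e.g., N = 100 and N = 1,000:

```python
def krejcie_morgan(N, chi2=3.841, P=0.5, d=0.05):
    """Recommended sample size for a population of size N
    (95% confidence, 5% margin of error, P = 0.5 for maximum variability)."""
    s = (chi2 * N * P * (1 - P)) / (d ** 2 * (N - 1) + chi2 * P * (1 - P))
    return round(s)

print(krejcie_morgan(100))   # 80, matching the table
print(krejcie_morgan(1000))  # 278, matching the table
```

Note how the recommended size levels off: beyond roughly a million, the formula returns 384 no matter how large N grows, which is why the table ends there.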

Sampling in Qualitative Research

Sampling in qualitative research is usually purposive (see the above discussion of purposive sampling). The primary goal
in qualitative research is to select information rich cases.
There are several specific purposive sampling techniques that are used in qualitative research:
 Maximum variation sampling (i.e., we select a wide range of cases).
 Homogeneous sample selection (i.e., we select a small and homogeneous case or set of cases for intensive study).
 Extreme case sampling (i.e., we select cases that represent the extremes on some dimension).
 Typical-case sampling (i.e., we select typical or average cases).
 Critical-case sampling (i.e., we select cases that are known to be very important).
 Negative-case sampling (i.e., we purposively select cases that disconfirm our generalizations, so that we can make
sure that we are not just selectively finding cases to support our personal theory).
 Opportunistic sampling (i.e., we select useful cases as the opportunity arises).
 Mixed purposeful sampling (i.e., we can mix the sampling strategies we have discussed into more complex designs
tailored to our specific needs).

DATA COLLECTION INSTRUMENTS

There are six major methods of data collection.


• Tests (i.e., includes standardized tests that usually include information on reliability, validity, and norms as well as
tests constructed by researchers for specific purposes, skills tests, etc.).
• Questionnaires (i.e., self-report instruments).
• Interviews (i.e., situations where the researcher interviews the participants).
• Focus groups (i.e., a small group discussion with a group moderator present to keep the discussion focused).
• Observation (i.e., looking at what people actually do).
• Existing or secondary data (i.e., using data that are originally collected and then archived or any other kind of “data”
that was simply left behind at an earlier time for some other purpose).

Tests
Tests are commonly used in research to measure achievement and performance. Sometimes a researcher must develop a
new test to measure the specific knowledge, skills, behavior, or cognitive activity being studied.

Strengths and Weaknesses of Tests

Strengths of tests (especially standardized tests)

• Can provide measures of many characteristics of people.


• Often standardized (i.e., the same stimulus is provided to all participants).
• Allows comparability of common measures across research populations.
• Strong psychometric properties (high measurement validity).
• Availability of reference group data.
• Many tests can be administered to groups which saves time.
• Can provide “hard,” quantitative data.
• Tests are usually already developed.
• A wide range of tests is available (most content can be tapped).
• Response rate is high for group administered tests.
• Ease of data analysis because of quantitative nature of data.

Weaknesses of tests (especially standardized tests)

• Can be expensive if a test must be purchased for each research participant.


• Reactive effects such as social desirability can occur.
• Test may not be appropriate for a local or unique population.
• Open-ended questions and probing not available.
• Tests are sometimes biased against certain groups of people.
• Nonresponse to selected items on the test.
• Some tests lack psychometric data.

Questionnaires

A questionnaire is a self-report data collection instrument that is filled out by research participants. Questionnaires are
usually paper-and-pencil instruments, but they can also be placed on the web for participants to go to and “fill out.”
Questionnaires are sometimes called survey instruments, but the actual questionnaire should not be called “the survey.”
The word “survey” refers to the process of using a questionnaire or interview protocol to collect data. For example, we
might do a survey of teacher attitudes about inclusion; the instrument of data collection should be called the questionnaire
or the survey instrument.

• A questionnaire is composed of questions and/or statements.


• One way to learn to write questionnaires is to look at other questionnaires.

One well-known question format is the Likert scale, a summated rating scale developed by the famous social
psychologist Rensis Likert.
Strengths and Weaknesses of Questionnaires

Strengths of questionnaires

• Good for measuring attitudes and eliciting other content from research participants.
• Inexpensive (especially mail questionnaires and group administered questionnaires).
• Can provide information about participants’ internal meanings and ways of thinking.
• Can administer to probability samples.
• Quick turnaround.
• Can be administered to groups.
• Perceived anonymity by respondent may be high.
• Moderately high measurement validity (i.e., high reliability and validity) for well constructed and validated
questionnaires.
• Closed-ended items can provide exact information needed by researcher.
• Open-ended items can provide detailed information in respondents’ own words.
• Ease of data analysis for closed-ended items.
• Useful for exploration as well as confirmation.

Weaknesses of questionnaires
• Usually must be kept short.
• Reactive effects may occur (e.g., respondents may try to show only what is socially desirable).
• Nonresponse to selected items.
• People filling out questionnaires may not recall important information and may lack self-awareness.
• Response rate may be low for mail and email questionnaires.
• Open-ended items may reflect differences in verbal ability, obscuring the issues of interest.
• Data analysis can be time consuming for open-ended items.
• Measures need validation.

Interviews

In an interview, the interviewer asks the interviewee questions (in-person or over the telephone).

• Trust and rapport are important.


• Probing is available (unlike in paper-and-pencil questionnaires) and is used to reach clarity or gain additional
information
• Here are some examples of standard probes:

- Anything else?
- Any other reason?
- What do you mean?

Interviews may be quantitative or qualitative.

Quantitative interviews:

• Are standardized (i.e., the same information is provided to everyone).


• Use closed-ended questions.
• It looks very much like a questionnaire. The key difference between an interview protocol and a questionnaire is that
the interview protocol is read by the interviewer who also records the answers.

Qualitative interviews

• They are based on open-ended questions.


• There are three types of qualitative interviews.

1) Informal Conversational Interview


- It is spontaneous.
- It is loosely structured (i.e., no interview protocol is used).

2) Interview Guide Approach


• It is more structured than the informal conversational interview.
• It includes an interview protocol listing the open-ended questions.
• The questions can be asked in any order by the interviewer.
• Question wording can be changed by the interviewer if it is deemed appropriate.

3) Standardized Open-Ended Interview

• Open-ended questions are written on an interview protocol, and they are asked in the exact order given on the
protocol.
• The wording of the questions cannot be changed.

Strengths and Weaknesses of Interviews

Strengths of interviews
• Good for measuring attitudes and most other contents of interest.
• Allows probing and posing of follow-up questions by the interviewer.
• Can provide in-depth information.
• Can provide information about participants’ internal meanings and ways of thinking.
• Closed-ended interviews provide exact information needed by researcher.
• Telephone and e-mail interviews provide very quick turnaround.
• Moderately high measurement validity (i.e., high reliability and validity) for well constructed and tested interview
protocols.
• Can use with probability samples.
• Relatively high response rates are often attainable.
• Useful for exploration as well as confirmation.

Weaknesses of interviews
• In-person interviews usually are expensive and time consuming.
• Reactive effects (e.g., interviewees may try to show only what is socially desirable).
• Investigator effects may occur (e.g., untrained interviewers may distort data because of personal biases and poor
interviewing skills).
• Interviewees may not recall important information and may lack self-awareness.
• Perceived anonymity by respondents may be low.
• Data analysis can be time consuming for open-ended items.
• Measures need validation.

Focus Groups

A focus group is a situation where a focus group moderator keeps a small and homogeneous group (of 6-12 people)
focused on the discussion of a research topic or issue.
• Focus group sessions generally last between one and three hours and they are recorded using audio and/or
videotapes.
• Focus groups are useful for exploring ideas and obtaining in-depth information about how people think about an
issue.

Strengths and Weaknesses of Focus Groups

Strengths of focus groups


• Useful for exploring ideas and concepts.
• Provides window into participants’ internal thinking.
• Can obtain in-depth information.
• Can examine how participants react to each other.
• Allows probing.
• Most content can be tapped.
• Allows quick turnaround.

Weaknesses of focus groups


• Sometimes expensive.
• May be difficult to find a focus group moderator with good facilitative and rapport building skills.
• Reactive and investigator effects may occur if participants feel they are being watched or studied.
• May be dominated by one or two participants.
• Difficult to generalize results if small, unrepresentative samples of participants are used.
• May include large amount of extra or unnecessary information.
• Measurement validity may be low.
• Usually should not be the only data collection methods used in a study.
• Data analysis can be time consuming because of the open-ended nature of the data.

Observation

In the method of data collection called observation, the researcher observes participants in natural and/or structured
environments. It is important to collect observational data (in addition to attitudinal data) because what people say is not
always what they do.

Observation can be carried out in two types of environments:


• Laboratory observation (which is done in a lab set up by the researcher).
• Naturalistic observation (which is done in real-world settings).

There are two important forms of observation: quantitative observation and qualitative observation.
1) Quantitative observation involves standardized procedures, and it produces quantitative data.

The following can be standardized:


- Who is observed.
- What is observed.
- When the observations are to take place.
- Where the observations are to take place.
- How the observations are to take place.

Standardized instruments (e.g., checklists) are often used in quantitative observation.


Sampling procedures are also often used in quantitative observation:
--Time-interval sampling (i.e., observing during time intervals, e.g., during the first minute of each 10 minute interval).
--Event sampling (i.e., observing after an event has taken place, e.g., observing after teacher asks a question).

2) Qualitative observation is exploratory and open-ended, and the researcher takes extensive field notes.
The qualitative observer may take on four different roles that make up a continuum:

• Complete participant (i.e., becoming a full member of the group and not informing the participants that we are
studying them).
• Participant-as-Observer (i.e., spending extensive time "inside" and informing the participants that we are studying
them).
• Observer-as-Participant (i.e., spending a limited amount of time "inside" and informing them that we are studying
them).
• Complete Observer (i.e., observing from the "outside" and not informing the participants that we are studying
them).

Strengths and Weaknesses of Observational Data

Strengths of observational data


• Allows one to directly see what people do without having to rely on what they say they do.
• Provides firsthand experience, especially if the observer participates in activities.
• Can provide relatively objective measurement of behavior (especially for standardized observations).
• Observer can determine what does not occur.
• Observer may see things that escape the awareness of people in the setting.
• Excellent way to discover what is occurring in a setting.
• Helps in understanding importance of contextual factors.
• Can be used with participants with weak verbal skills.
• May provide information on things people would otherwise be unwilling to talk about.
• Observer may move beyond selective perceptions of people in the setting.
• Good for description.
• Provides moderate degree of realism (when done outside of the laboratory).

Weaknesses of observational data


• Reasons for observed behavior may be unclear.
• Reactive effects may occur when respondents know they are being observed (e.g., people being observed may behave
in atypical ways).
• Investigator effects (e.g., personal biases and selective perception of observers)
• Observer may “go native” (i.e., over-identifying with the group being studied).
• Sampling of observed people and settings may be limited.
• Cannot observe large or dispersed populations.
• Some settings and content of interest cannot be observed.
• Collection of unimportant material may be moderately high.
• More expensive to conduct than questionnaires and tests.
• Data analysis can be time consuming.

Secondary/Existing Data
Secondary data (i.e., data originally used for a different purpose) are contrasted with primary data (i.e., original data
collected for the new research study).
The most commonly used secondary data are documents, physical data, and archived research data.

1. Documents. There are two main kinds of documents.


• Personal documents (i.e., things written or recorded for private purposes), e.g., letters, diaries, and family pictures.
• Official documents (i.e., things written or recorded for public or private organizations), e.g., newspapers, annual
reports, yearbooks, and minutes.

2. Physical data (are any material thing created or left by humans that might provide information about a phenomenon of
interest to a researcher).

3. Archived research data (i.e., research data collected by other researchers for other purposes; these data are often
saved and archived so that others might later use them).

Strengths and Weaknesses of Secondary Data

Strengths of documents and physical data:


• Can provide insight into what people think and what they do.
• Unobtrusive, making reactive and investigator effects very unlikely.
• Can be collected for time periods occurring in the past (e.g., historical data).
• Provides useful background and historical data on people, groups, and organizations.
• Useful for corroboration.
• Grounded in local setting.
• Useful for exploration.

Weaknesses of documents and physical data:


• May be incomplete.
• May be representative only of one perspective.
• Access to some types of content is limited.
• May not provide insight into participants’ personal thinking for physical data.
• May not apply to general populations.

Strengths of archived research data:


• Archived research data are available on a wide variety of topics.
• Inexpensive.
• Often are reliable and valid (high measurement validity).
• Can study trends.
• Ease of data analysis.
• Often based on high quality or large probability samples.

Weaknesses of archived research data:


• May not be available for the population of interest to you.
• May not be available for the research questions of interest to you.
• Data may be dated.
• Open-ended or qualitative data usually not available.
• Many of the most important findings have already been mined from the data.

CONCEPTS OF MEASUREMENT

Nominal scale
• Nominal measurement consists of assigning items to groups or categories.
• No quantitative information is conveyed and no ordering of the items is implied.
• Examples: religious preference, race, and gender

Ordinal Scale
• Measurements with ordinal scales are ordered.
• However, the intervals between the numbers are not necessarily equal.
• For example, on a five-point Likert scale, the difference between 2 and 3 may not represent the same difference as
the difference between 4 and 5.

Interval Scale
• There is no "true" zero point for interval scales.
• On interval measurement scales, one unit on the scale represents the same magnitude on the trait or
characteristic being measured across the whole range of the scale.
• For example, on an interval scale of temperature, the difference between 10 and 11 degrees centigrade is the same
as the difference between 11 and 12 degrees centigrade.

Ratio scale
• There is a "true" zero point for ratio scales.
• The difference between points on the scale is precise (as in the measurement of height and weight).
• The scale starts at zero.
• For example, height and weight start at zero (people cannot weigh less than 0.00 kg and cannot be less than
0.00 mm in length/height).

DESCRIPTIVE STATISTICS

Mean (arithmetic mean)


 The mean is the value each item would have if all the values were shared out equally among all the items. E.g., the
mean of 2 + 2 + 2 + 2 + 4 + 4 + 5 + 6 + 6 + 6 + 8 = 47/11 = 4.27
 The mean takes consideration of all the values.
 The mean can be distorted by outliers. Thus, if the values above referred to the ages of children on a children's
ward, the mean age would be 4.27, but if we added a 15-year-old to the sequence, the mean age would become 5.17.

Median

 The median is the value of the middle item of a distribution. (In the sequence above there are eleven values, and
the value occurring in the middle position is "4"; thus 4 is the median for this sequence.)
 The median value shows nothing about the other values in the sequence e.g., if the eight year old was replaced by
a 15 year old, and one of the 2 year olds was replaced by a baby of just 2 days old, the median would not have
changed at all.

Mode
 The mode (modal value) is the point of maximum frequency (In the sequence of numbers above, the value "2"
occurs most frequently and is thus the modal value).
 Sometimes there is more than one peak (If the values above referred to the age of children, and the eight year old
child was replaced by a six year old child, there would be two modal values, one of the value "2" and the other of
the value "6". The distribution would then be considered "bi-modal").
 The modal value does not reflect the extremes of the sequence.

Standard Deviation
 Standard Deviation: a measure of spread
 A measure of variation that indicates the typical distance between the scores of a distribution and the mean.
 It is determined by taking the square root of the average of the squared deviations in a given distribution.
 It can be used to indicate the proportion of data within certain ranges of scale values when the distribution
conforms closely to the normal curve.
 The SD says how far away numbers on a list are from their average.
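
The measures above can be computed with Python's standard `statistics` module, using the sequence from the mean example. This is an illustrative sketch; the variable names are ours:

```python
import statistics

values = [2, 2, 2, 2, 4, 4, 5, 6, 6, 6, 8]   # the sequence from the text

mean   = statistics.mean(values)    # 47/11, about 4.27
median = statistics.median(values)  # middle value of the sorted list: 4
mode   = statistics.mode(values)    # most frequent value: 2

# Population SD: square root of the average squared deviation from the mean.
sd = statistics.pstdev(values)      # about 2.00

# Adding an outlier (a 15-year-old) pulls the mean up, but not the median.
mean_with_outlier = statistics.mean(values + [15])  # 62/12, about 5.17
```

Note that `pstdev` matches the handout's definition (dividing the squared deviations by N); `statistics.stdev` instead divides by N − 1, the sample standard deviation commonly used for inference.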

Properties of the Normal Probability Curve


• The graph is symmetric about the mean (the part to the right is a mirror image of the part to the left).
• The total area under the curve equals 100%.
• The curve is always above the horizontal axis.
• It appears to stop after a certain point (the curve gets very low but never touches the axis).
• About 68% of the area lies within 1 SD of the mean.
• About 95% of the area lies within 2 SD of the mean.
• About 99.7% of the area lies within 3 SD of the mean.
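
The 68/95/99.7 percentages can be verified with the error function in Python's standard `math` module; the proportion of a normal distribution within k standard deviations of the mean is erf(k/√2). A quick sketch (the function name is ours):

```python
import math

def area_within(k):
    """Proportion of a normal distribution within k SDs of the mean."""
    return math.erf(k / math.sqrt(2))

print(round(area_within(1), 3))  # 0.683  (~68%)
print(round(area_within(2), 3))  # 0.954  (~95%)
print(round(area_within(3), 4))  # 0.9973 (~99.7%)
```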

INFERENTIAL STATISTICS

Correlation

 Usually abbreviated as r, correlation is a common statistical analysis which measures the degree of relationship
between pairs of interval variables in a sample.
 The range of correlation is from -1.00 through zero to +1.00.
 Correlation shows a relationship between two variables but does not, by itself, establish cause and effect.
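
A Pearson correlation can be computed from its definition (covariance divided by the product of the deviations' magnitudes) in a few lines of pure Python. The study-hours and exam-score data here are hypothetical, chosen only to illustrate a strong positive relationship:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two paired lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

hours = [1, 2, 3, 4, 5]       # hypothetical hours of study
score = [52, 58, 63, 70, 77]  # hypothetical exam scores
print(round(pearson_r(hours, score), 3))  # 0.998: a strong positive correlation
```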

t-test

 A t-test is used to determine whether the scores of two groups differ on a single variable. For instance, to determine
whether writing ability differs between students in two classrooms, a t-test could be used.
 The independent variable is nominal, and the dependent variable is interval.
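
The pooled-variance independent samples t statistic can be sketched in pure Python. The classroom writing scores below are hypothetical, echoing the two-classroom example above:

```python
import math

def independent_t(group1, group2):
    """Pooled-variance independent samples t statistic."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    ss1 = sum((x - m1) ** 2 for x in group1)
    ss2 = sum((x - m2) ** 2 for x in group2)
    pooled_var = (ss1 + ss2) / (n1 + n2 - 2)        # pooled variance estimate
    se = math.sqrt(pooled_var * (1 / n1 + 1 / n2))  # SE of the mean difference
    return (m1 - m2) / se

# Hypothetical writing scores from two classrooms.
class_a = [85, 90, 88, 92, 87]
class_b = [78, 82, 80, 79, 81]
print(round(independent_t(class_a, class_b), 2))  # 6.0, with n1 + n2 - 2 = 8 df
```

The resulting t value would then be compared against a t distribution with n1 + n2 − 2 degrees of freedom to judge statistical significance.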

ANOVA

 A method of statistical analysis broadly applicable to a number of research designs, used to determine differences
among the means of three or more groups on a variable.
 The independent variables are nominal, and the dependent variable is interval.
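
The one-way ANOVA F statistic compares variability between the group means with variability within the groups. A pure-Python sketch with hypothetical scores for three groups:

```python
def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA across three or more groups."""
    all_scores = [x for g in groups for x in g]
    grand_mean = sum(all_scores) / len(all_scores)
    # Between-groups sum of squares: group means versus the grand mean.
    ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-groups sum of squares: scores versus their own group mean.
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between = len(groups) - 1
    df_within = len(all_scores) - len(groups)
    return (ssb / df_between) / (ssw / df_within)

# Hypothetical scores under three teaching methods.
groups = [[1, 2, 3], [2, 3, 4], [5, 6, 7]]
print(round(one_way_anova_F(groups), 2))  # 13.0
```

A large F (as here) indicates that the group means differ by more than within-group variability alone would suggest.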

Choosing a statistical test (summary of the decision tree):

Interval data
  Analysis of relationships
    One predictor: Pearson correlation
    Multiple predictors: Multiple regression
  Analysis of differences
    Between two groups
      Independent groups: Independent samples t-test
      Dependent groups: Repeated measures t-test
    Between multiple groups
      Independent groups: Independent samples ANOVA
      Dependent groups: Repeated measures ANOVA
Ordinal data
  Correlation: Spearman
Nominal data
  Frequencies: Chi-square