CAPSTONE
POPULATION AND SAMPLE

POPULATION
-the totality of all the objects, elements, persons, and characteristics under consideration
-it is understood that this population possesses common characteristics that the research aims to explore

TWO TYPES OF POPULATION
1. ACTUAL POPULATION - the target population
2. ACCESSIBLE POPULATION - the portion of the population to which the researcher has reasonable access

SAMPLING - pertains to the systematic process of selecting the group to be analyzed
-done to get information from a group that represents the target population

SAMPLE - a representative subset of the population

APPROACHES IN IDENTIFYING THE SAMPLE SIZE

HEURISTICS - refers to a general rule or rule of thumb for the sample size

LITERATURE REVIEW - reading literature and studies similar or related to your current research study
-to recall how those studies determined their sample sizes
-this approach increases the validity of your sampling procedure

POWER ANALYSIS - two principles are used in this approach:
1. STATISTICAL POWER - the probability of rejecting the null hypothesis when there is a relationship between the independent and dependent variables of the research study
-the ideal statistical power of a research study is 80%
-used to identify a sample size sufficient for measuring the effect size of a certain treatment
2. EFFECT SIZE - the level of difference between the experimental group and the control group; the extent of the relationship between these two variables
-the higher the effect size, the greater the level of difference between the experimental and control groups

PROBABILITY SAMPLING IN QUANTITATIVE RESEARCH

SIMPLE RANDOM SAMPLING - a way of choosing individuals in which all members of the accessible population are given an equal chance to be selected
-done through the fishbowl technique, a roulette wheel, or a table of random numbers

STRATIFIED RANDOM SAMPLING - divides the population into subgroups (strata) while still giving all members of the population an equal chance to be chosen

CLUSTER SAMPLING - applied in large-scale studies where the population is geographically spread out; involves grouping the population into subgroups or clusters
-a method where multiple clusters of people are sampled from the chosen population

SYSTEMATIC SAMPLING - selecting samples at every nth member (for example, every 2nd or 5th) of the chosen population until arriving at the desired sample size
-the selection is based on a predetermined interval
-the interval is obtained by dividing the population size by the sample size
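The random and systematic selection procedures above can be sketched in a few lines of Python. The population of 600 and sample size of 240 are illustrative values only (they match the Slovin example used elsewhere in these notes):

```python
import random

population = list(range(1, 601))  # hypothetical accessible population of 600 members
n = 240                           # desired sample size

# Simple random sampling: every member has an equal chance of selection,
# like drawing names from a fishbowl.
simple_sample = random.sample(population, n)

# Systematic sampling: divide the population size by the sample size to get
# the interval, then take every nth member until n members are selected.
interval = len(population) // n   # 600 // 240 -> interval of 2
systematic_sample = population[::interval][:n]

print(interval, len(simple_sample), len(systematic_sample))
```

Stratified and cluster sampling follow the same idea, but the selection is applied within each stratum or to whole clusters rather than to the population as one pool.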
FORMULAS - established for the computation of an acceptable sample size.

SLOVIN'S FORMULA: n = N / (1 + Ne^2)
N - population size
n - sample size
e - desired margin of error

Example, with N = 600 and e = 0.05:
n = 600 / (1 + 600(0.05)^2)
  = 600 / (1 + 600(0.0025))
  = 600 / (1 + 1.5)
  = 240

CHARACTERISTICS OF A GOOD RESEARCH INSTRUMENT
CONCISE - a good research instrument is concise in length yet can elicit the needed data.
SEQUENTIAL - questions or items must be arranged well.
VALID AND RELIABLE - gets more appropriate and accurate information.
EASILY TABULATED - in crafting the instrument, the researcher makes sure that the variables and research questions are established; these are an important basis for making the items in the research instrument.

WAYS IN DEVELOPING A RESEARCH INSTRUMENT
-adopting an existing instrument
-modifying an existing instrument when the available instruments do not yield the exact data that will answer the research problem
-making your own instrument that corresponds to the variables and scope of your current study

There are three ways to measure internal consistency reliability: through the split-half coefficient, Cronbach's alpha, and the Kuder-Richardson formula.
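The Slovin computation above can be checked with a short script; `slovin` is just a helper name chosen here, not part of any library:

```python
# Slovin's formula: n = N / (1 + N * e^2)
def slovin(N, e):
    """Return the sample size for a population of N at margin of error e."""
    return N / (1 + N * e ** 2)

# Worked example from the notes: N = 600, e = 0.05
n = slovin(600, 0.05)
print(round(n))  # 240
```

Rounding up to the next whole respondent is the usual convention when the result is not an integer.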
COMMON SCALES USED IN QUANTITATIVE RESEARCH
LIKERT SCALE - respondents are asked to rate or rank statements according to the scale provided.
SEMANTIC DIFFERENTIAL - a series of bipolar adjectives is rated by the respondents.

VALIDITY
-a research instrument is considered valid if it measures what it is supposed to measure
-for example, when measuring the oral communication proficiency level of students, a speech performance rated with a rubric or rating scale is more valid than giving the students a multiple-choice test

TYPES OF VALIDITY OF INSTRUMENT
FACE VALIDITY - also called "logical validity"
-calls for an intuitive judgment of the instrument as it "appears"
-by looking at the instrument, the researcher decides if it is valid

CONTENT VALIDITY - an instrument judged to have content validity meets the objectives of the study
-done by checking whether the statements or questions elicit the needed information
-provides the specific elements that should be measured by the instrument

CONSTRUCT VALIDITY - corresponds to the theoretical construct of the study
-concerns whether a specific measure relates to other measures

CONCURRENT VALIDITY - the instrument can predict results like those of similar tests already validated

PREDICTIVE VALIDITY - the instrument can produce results similar to those of similar tests that will be employed in the future
-useful for aptitude tests

DATA ANALYSIS
DATA ANALYSIS - a process in which the gathered information is summarized
-in qualitative research, the gathered information is broken down and ordered into categories in order to draw trends or patterns in a certain condition
-in quantitative research, the numerical data collected are not taken as a whole; they are usually subjected to statistical treatment depending on the nature of the data and the type of research problem presented
-the statistical treatment makes explicit the different statistical methods and formulas needed to analyze the research data

PLANNING YOUR DATA ANALYSIS
DESCRIPTIVE STATISTICAL TECHNIQUE - provides a summary of the ordered or sequenced data from your research sample. Examples of these tools are the frequency distribution, measures of central tendency (mean, median, mode), and the standard deviation.
INFERENTIAL STATISTICS - used when the research study focuses on finding predictions; testing hypotheses; and finding interpretations, generalizations, and conclusions.

TYPES OF STATISTICAL ANALYSIS OF VARIABLES
1. UNIVARIATE ANALYSIS - analysis of one variable
2. BIVARIATE ANALYSIS - analysis of two variables, such as the independent and dependent variables
3. MULTIVARIATE ANALYSIS - analysis of the multiple relations between multiple variables
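The descriptive tools named under data analysis (frequency distribution, mean, median, mode, standard deviation) can all be computed with Python's standard library; the scores below are made-up sample data:

```python
import statistics
from collections import Counter

scores = [85, 90, 85, 78, 92, 85, 90, 78]  # hypothetical survey scores

freq = Counter(scores)            # frequency distribution
mean = statistics.mean(scores)    # 85.375
median = statistics.median(scores)
mode = statistics.mode(scores)    # most frequent score
stdev = statistics.stdev(scores)  # sample standard deviation

print(freq.most_common(1))        # [(85, 3)]
print(mean, median, mode)
```

These summaries describe the sample itself; drawing conclusions beyond the sample is the job of the inferential techniques listed afterward.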
RELIABILITY
-refers to the consistency of the measures or results of the instrument

TEST-RETEST RELIABILITY - achieved by giving the same test to the same group of respondents twice; the consistency of the two scores is then checked.

EQUIVALENT FORMS RELIABILITY - established by administering two tests, identical except for the wording, to the same group of respondents.

INTERNAL CONSISTENCY RELIABILITY - determines how well the items measure the same construct
-it is reasonable that when a respondent gets a high score on one item, he will also get a high score on similar items

Test of Relationship between Two Variables
-Pearson's r (parametric)
-Phi coefficient (non-parametric, for nominal and dichotomous variables)
-Spearman's rho (non-parametric, for ordinal variables)

Test of Difference between Two Data Sets from One Group
-T-test for dependent samples (parametric)
-McNemar change test (non-parametric, for nominal and dichotomous variables)
-Wilcoxon signed-rank test (non-parametric, for ordinal variables)

Test of Difference between Two Data Sets from Two Different Groups
-T-test for independent samples (parametric)
-Two-way chi-square (non-parametric, for nominal variables)
-Mann-Whitney U test (non-parametric, for ordinal variables)

Test of More than Two Population Means
-Analysis of Variance or ANOVA (parametric)

Test of the Strength of Relation or Effect or Impact
-Regression (parametric)
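As a concrete sketch of one test of relationship, Pearson's r can be computed directly from its definition; the paired scores below are hypothetical, not taken from these notes:

```python
import math

# Hypothetical paired scores, e.g. hours of study (x) and test results (y)
x = [2, 4, 6, 8, 10]
y = [65, 70, 78, 85, 92]

def pearson_r(x, y):
    """Pearson's r: the sum of co-deviations of x and y divided by the
    product of the square roots of their squared deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(round(pearson_r(x, y), 3))  # close to 1: a strong positive relationship
```

A value near +1 or -1 indicates a strong linear relationship, and a value near 0 indicates little linear relationship; whether the result is statistically significant still depends on a p-value from the corresponding test.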
TECHNIQUES IN COLLECTING QUANTITATIVE DATA
Observation. Gathering information about a certain condition by using the senses.
Survey. Data gathering done through an interview or a questionnaire.
Experiment. Uses a treatment or intervention. After the chosen subjects, participants, or respondents have undergone the intervention, the effects of the treatment are measured.
THREE PHASES IN DATA COLLECTION
(1) what to do before gathering the data
(2) what to do during the actual gathering of data
(3) the things to consider after the data have been gathered