Quantitative Research Designs and Methods

This document outlines quantitative research methods taught by Dr. Patrick C. De Leon. It discusses quantitative research definitions and attributes, as well as research designs including experiments, quasi-experiments, and surveys. It also covers statistical analysis techniques for summarizing and making inferences from data, such as statistical inference, modeling, and analyzing large datasets. The goal is to teach students how to generate conclusions about processes and phenomena using standardized, quantifiable measurements and statistical analysis.


QUANTITATIVE RESEARCH DESIGNS AND METHODS


DR. PATRICK C. DE LEON

ASSOCIATE PROFESSOR OF ECONOMICS AND PUBLIC ADMINISTRATION

EXTENSION PROGRAMS IN PAMPANGA AND OLONGAPO

UNIVERSITY OF THE PHILIPPINES DILIMAN


OUTLINE OF PRESENTATION
1. QUANTITATIVE RESEARCH DEFINED
2. ATTRIBUTES OF QUANTITATIVE RESEARCH
3. EXPERIMENTS
4. QUASI-EXPERIMENTS
5. SURVEYS
6. STATISTICAL ANALYSIS
7. STATISTICAL INFERENCE
8. STATISTICAL MODELING
9. BIG DATA AND DATA MINING
QUANTITATIVE RESEARCH DEFINED
▪ Quantitative research is a method of generating inferences about
a process or a phenomenon using quantifiable measurements of
the characteristics of various stakeholders involved.
▪ Data in quantitative research are numbers and can easily be
summarized and used to draw inferences through an appropriate
statistical method.
▪ The data generating process and the measurement process are
standardized and can be verified for replicability.
▪ A disadvantage of quantitative research is the lack of
understanding of the feelings of respondents.
ATTRIBUTES OF QUANTITATIVE RESEARCH
ATTRIBUTE                   | EXPLANATION
Instrument                  | Structured questionnaire or a validated scale
Size of the sample          | Large (standardized instruments allow data collection by several interviewers)
Type of data                | Data converted into number measures (scores, frequencies, quantitative characteristics)
Hypotheses or questions     | Very specific and focused
Inference                   | Data are summarized through statistical methods and used to infer about the population
Generalizability of results | Dependent on the representativeness of the sample for the population it ought to represent
EXPERIMENTS

▪ Role of the researcher:


➢ The researcher manipulates the events that will take place and
ensures the comparability of the experimental and control groups
through random assignment of subjects to conditions.
▪ Number of groups studied:
➢ There are at least two groups: (1) experimental group – exposed
to the experimental variable; and (2) control group – not exposed
to the experimental variable
▪ Timing of observation:
➢ Observations can be made at various points during the
experiment – at the end of the experiment, or before and after
▪ Ultimately, the choice of options will depend on the willingness of
the participants to be involved and the kind of topic being studied.
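The random-assignment step described above can be sketched in a few lines of Python. The subject IDs and seed below are hypothetical, purely for illustration:

```python
import random

def randomly_assign(subjects, seed=None):
    """Randomly split subjects into an experimental and a control group.

    Random assignment is what makes the two groups comparable in
    expectation, so post-treatment differences can be attributed to
    the experimental variable.
    """
    pool = list(subjects)
    rng = random.Random(seed)
    rng.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]  # (experimental, control)

# Hypothetical: 20 subjects identified by the numbers 1..20.
experimental, control = randomly_assign(range(1, 21), seed=42)
print(len(experimental), len(control))  # 10 subjects in each group
```

The seed is fixed only so the split is reproducible; in a real experiment the assignment would be generated once and documented.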
QUASI-EXPERIMENTS

▪ Quasi-experiments are like experiments in that the researcher
manipulates the experimental variable.
▪ However, they are not identical to experiments because of the
absence of random assignment of subjects to conditions.
▪ Hence, the designs under this mode require pretests to determine the
comparability between the experimental and control groups.
▪ In instances where no control group is available to compare the
performance of the experimental group, several measurements are
undertaken before the start of the experiment.
▪ The lack of pretests or comparison groups makes some designs very
weak, because it becomes difficult to distinguish how much of the
change in the dependent variable can be attributed to the
experimental variable.
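Because assignment is not random here, a pretest comparison is what establishes comparability. A minimal Python sketch, using invented pretest scores (not data from the lecture):

```python
from statistics import mean, stdev

# Hypothetical pretest scores for two nonrandomized groups.
experimental_pre = [72, 68, 75, 70, 74, 69, 71, 73]
control_pre      = [70, 67, 74, 71, 72, 68, 70, 72]

# A small standardized difference on the pretest suggests the groups
# were comparable before the experimental variable was introduced.
diff = mean(experimental_pre) - mean(control_pre)
pooled_sd = stdev(experimental_pre + control_pre)
standardized_diff = diff / pooled_sd
print(round(standardized_diff, 2))
```

A standardized difference well below common rules of thumb (e.g., 0.5) would support treating the groups as comparable at baseline.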
SURVEYS

▪ Observations by researchers of events occurring in the
natural setting. No manipulation takes place to make events
occur.
▪ Observations are conducted not only while events are taking
place. Past events may be reconstructed, and future ones
anticipated from the points of view of groups of people.
▪ The number of groups to be studied or contrasted depends
on the type of research problem raised. Contrasting groups is
very important in causative propositions, but not imperative in
relational and descriptive studies.
▪ The number of participants who will represent the
population is an important basis for generalizing.
SURVEYS

▪ For group report: Portus et al. (2020), chapter 4,
pp. 46-65
➢ Measurement and data
➢ Sample selection
➢ Questionnaire development
➢ Survey operations
➢ Data processing
STATISTICAL ANALYSIS

▪ Statistical science aims to transform the data into
information that can be used in decision-making.
▪ You start from the population and measure
characteristics of interest (variables).
▪ Four avenues to collect measurements:
1. Survey sampling
2. Experimentation
3. Time series compilation
4. Huge data compilation
STATISTICAL ANALYSIS

▪ In survey sampling, you plan the selection of a representative
sample of the target population.
▪ You then access the sample units to measure/collect data in
a standardized fashion.
▪ The data are then summarized/aggregated into information
that is generalized to the population.
▪ The sample selection procedure and the corresponding
selection probabilities of units to be part of the sample
define the sampling distribution.
▪ The sampling distribution is used as the basis of statistical
inference.
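The idea that repeated samples produce a distribution of estimates, the sampling distribution, can be illustrated by simulation. A sketch in Python with a synthetic population (all values hypothetical):

```python
import random
from statistics import mean, stdev

rng = random.Random(7)

# A synthetic "population" of 10,000 income-like values.
population = [rng.gauss(20000, 5000) for _ in range(10_000)]

# Draw many simple random samples of size 100 and record each
# sample mean. The spread of these means approximates the
# sampling distribution that statistical inference is built on.
sample_means = [mean(rng.sample(population, 100)) for _ in range(500)]

# The sample means cluster around the population mean, with a
# spread close to the sigma / sqrt(n) predicted by theory.
print(round(mean(sample_means)), round(stdev(sample_means)))
```

In practice the sampling distribution is derived from the selection procedure and selection probabilities, as the slide notes; simulation is only a way to visualize the concept.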
STATISTICAL ANALYSIS

▪ Experimentation is the process of inducing the
manifestation of an effect by introducing treatment to
experimental units under a controlled setting.
▪ All other conditions are controlled, induced to be uniform
across all experimental units so that the manifestation of
treatment effects can be attributed solely to the treatment.
▪ At the end of the experimental period, responses are
measured, and data are generated.
▪ Data resulting from experiments are analyzed by measuring
correlations, estimating treatment effects, testing for
differences, or comparison of responses between two or
more levels of treatment.
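A difference-in-means estimate with a Welch-style t statistic is one common way to test for differences between two treatment levels; the responses below are invented for illustration:

```python
from math import sqrt
from statistics import mean, variance

# Hypothetical responses measured at the end of an experiment.
treated = [14.1, 15.3, 13.8, 16.0, 14.7, 15.5, 14.9, 15.8]
control = [12.9, 13.4, 12.5, 13.8, 13.1, 12.7, 13.6, 13.0]

# Estimated treatment effect: the difference in mean response.
effect = mean(treated) - mean(control)

# Welch t statistic for testing whether the difference reflects a
# real treatment effect rather than sampling noise (compare the
# value against a t table).
se = sqrt(variance(treated) / len(treated) + variance(control) / len(control))
t_stat = effect / se
print(round(effect, 2), round(t_stat, 1))
```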
STATISTICAL ANALYSIS

▪ Sometimes, a phenomenon is best understood or characterized
if it is observed over a prolonged period.
▪ Inflation and its effect on sales of real estate, for instance, can
be best understood when you are able to observe the
fluctuations in inflation and sales level every month for three
years.
▪ You can also generate data by observing variables indicative of
the behavior of the phenomenon over time.
▪ These are called time series data. Time series data are analyzed
via time-domain or frequency-domain to understand the
dependence structure over time and between variables.
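A lag-k autocorrelation is one basic time-domain summary of this dependence structure. A sketch with hypothetical monthly inflation figures (36 months, i.e., three years):

```python
from statistics import mean

# Hypothetical monthly inflation rates over three years.
inflation = [3.0, 3.2, 3.1, 3.4, 3.6, 3.5, 3.8, 4.0, 4.1, 4.3, 4.2, 4.5,
             4.6, 4.4, 4.7, 4.9, 5.0, 4.8, 5.1, 5.3, 5.2, 5.5, 5.4, 5.6,
             5.5, 5.7, 5.9, 5.8, 6.0, 6.1, 6.0, 6.2, 6.4, 6.3, 6.5, 6.6]

def autocorr(series, lag):
    """Lag-k autocorrelation: a time-domain measure of how strongly
    each observation depends on the one k periods earlier."""
    m = mean(series)
    num = sum((series[t] - m) * (series[t - lag] - m)
              for t in range(lag, len(series)))
    den = sum((x - m) ** 2 for x in series)
    return num / den

# A trending series like this one is highly autocorrelated at lag 1.
print(round(autocorr(inflation, 1), 2))
```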
STATISTICAL ANALYSIS
▪ Recently, many researchers have benefited
from the emergence of huge compilations of
data.
▪ Due to the increased capacity of data
storage at lower cost, data from
customer/client engagement have been
compiled and made easily available.
▪ Consider telecommunication customers
who send many SMS messages to different
service providers, or bank clients making
withdrawals from various ATMs.
▪ Techniques in data mining, computational
statistics, and nonparametric methods are
often used to translate these data into
meaningful and useful information.
STATISTICAL ANALYSIS
▪ Statistical analyses often follow the
theme of descriptive analysis or
statistical inference to convert these
data into information.
▪ In the process of evaluating the usability
of this information, you must
inquire about statistical optimality
and/or statistical robustness.
▪ The framework for statistical analysis
is summarized in Figure 3.
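On the descriptive-analysis side, the summaries are computed on the sample itself, before any generalization is attempted. A small sketch with hypothetical incomes:

```python
from statistics import mean, median, stdev

# Hypothetical monthly household incomes from a sample.
incomes = [18000, 22000, 15000, 30000, 25000, 21000, 19000,
           45000, 23000, 20000]

# Descriptive analysis: summarize the observed data as-is.
summary = {
    "mean": mean(incomes),
    "median": median(incomes),
    "sd": round(stdev(incomes), 1),
}
print(summary)
```

Note how the single large value pulls the mean above the median; comparing the two is itself a simple descriptive check.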
STATISTICAL INFERENCE
▪ Statistical inference is the method of
generalizing about some characteristics of
the population based on the information
provided by the data collected from a
sample.
▪ The generalization can be in terms of
estimating the value of a parameter of
interest (estimation), e.g., average income
in the population, or in terms of
verification of a certain hypothesis about
the characteristics of the population
(hypothesis testing), e.g., whether average
income in the population exceeds the
poverty threshold.
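The income-versus-threshold example can be sketched as a one-sample t test; the sample values and the threshold below are assumptions for illustration, not figures from the lecture:

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical sample of monthly incomes; an assumed poverty
# threshold of 12,000 for illustration only.
incomes = [13500, 12800, 14200, 11900, 13100, 12600, 13800, 12400,
           13000, 13300, 12700, 13600]
threshold = 12000

# One-sample t statistic for H0: population mean income = threshold,
# against the alternative that it exceeds the threshold.
n = len(incomes)
t_stat = (mean(incomes) - threshold) / (stdev(incomes) / sqrt(n))
print(round(t_stat, 1))  # compare against a t table with n - 1 df
```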
STATISTICAL MODELING
▪ A model is a set of assumptions that summarizes the
structure of a system.
▪ Statistical modeling is an important analytical tool as it
enables researchers to consider, in a coherent and unified
procedure, the complex inter-relationships between
phenomena, and to isolate and make judgements about the
separate effects of each.
▪ Purposes of modeling:
➢ Understand causality and develop a theory
➢ Predict
➢ Assess the effects of different characteristics
➢ Reduce the dimensionality of the data
STATISTICAL MODELING

▪ Three phases of the model building process: (1)
planning; (2) development of the model; and (3)
verification and maintenance.
▪ Planning:
➢ Define the problem clearly and identify measurable
dependent and independent variables.
▪ Development of the model:
➢ Collect data and use scatter plots to visualize and
explore possible relationships between the
dependent and independent variables.
➢ This leads to the initial postulated model that will
then be estimated, assumptions validated, and
remedies to regression problems implemented (if
any).
➢ This leads to the best model, given the present
data.
➢ The resulting model is further verified by checking
its adequacy vis-à-vis the research problem.
➢ The model is further evaluated on whether it is
consistent with the existing theory, supports a
certain theory, or provides an alternative to existing
theory.
STATISTICAL MODELING
▪ Finally, once new data are available, models are
updated and verified if causal relationships are
consistent, before new theories are presented
and sufficiently supported by empirical
evidence.
▪ The process of modeling is summarized in a
package of statistical tools called regression
analysis.
▪ Regression analysis involves the identification
and estimation of functions that establish
causality between and among variables.
▪ For group report:
➢ Linear regression model
➢ Logistic regression
➢ Regression on count data
➢ Panel data and multilevel models
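Simple linear regression, the most basic member of this toolkit, can be estimated by ordinary least squares in closed form. A sketch with invented (x, y) data, where x might stand for an independent variable such as spending and y for a dependent variable such as sales:

```python
from statistics import mean

# Hypothetical paired observations of an independent variable x
# and a dependent variable y.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 4.3, 6.2, 7.9, 10.1, 12.0]

# Ordinary least squares for the simple linear model y = a + b*x:
#   b = cov(x, y) / var(x),   a = mean(y) - b * mean(x).
mx, my = mean(x), mean(y)
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
    / sum((xi - mx) ** 2 for xi in x)
a = my - b * mx

print(round(a, 2), round(b, 2))
```

The fitted slope b estimates the change in y per unit change in x; diagnostics (residual plots, assumption checks) would follow in the model-validation step described above.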
BIG DATA AND DATA MINING
▪ Big data is best characterized by 5 Vs:
1. Volume (size)
2. Velocity (rate of production)
3. Variety (format and representation)
4. Veracity (trust and uncertainty)
5. Variability (complexity)
▪ Three main classes of big data:
1. Social networks – human-sourced information,
e.g., Facebook, Twitter, blogs, mobile data
content, and internet searches
2. Traditional business systems – process-mediated
data, which are usually generated by public
agencies, e.g., medical records, or by private
businesses, e.g., commercial transactions,
e-commerce, credit cards, loyalty cards, etc.
3. Machine-generated data – coming from fixed
sensors (e.g., weather, traffic, scientific sensors)
or mobile sensors (e.g., call and data records from
mobile phones)
BIG DATA AND DATA MINING
▪ Data mining is a hybrid of statistical
theory or methods and various
computing algorithms for filtering
information from big data.
▪ The common themes of data mining
algorithms revolve around predictive
modeling, pattern recognition,
segmentation, and identification and
characterization of extreme values or
events.
▪ Big data have been used extensively in
analyzing consumer behavior and in the
development of strategic corporate plans.
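One of the data mining themes above, identification of extreme values, can be sketched as a simple z-score screen. The transaction counts below are hypothetical:

```python
from statistics import mean, stdev

# Hypothetical daily ATM withdrawal counts for one bank client.
withdrawals = [2, 3, 1, 2, 4, 2, 3, 2, 1, 3, 2, 25, 3, 2, 4, 2]

# A basic screen: flag observations whose z-score exceeds a
# threshold as extreme events worth investigating further.
m, s = mean(withdrawals), stdev(withdrawals)
extremes = [x for x in withdrawals if abs(x - m) / s > 3]
print(extremes)
```

Real data mining pipelines use far more robust methods (the z-score itself is distorted by the outlier it is hunting), but the sketch conveys the idea of characterizing extreme events in large behavioral datasets.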
THAT IS ALL FOR NOW. THANK YOU!
