
Jimma University

CHAPTER SEVEN

DATA PROCESSING AND ANALYSIS

7.1. Definition
Data processing

Data processing refers to operations such as editing, coding, computing scores, preparing master charts, etc.

7.2. Coding, editing and cleaning the data


After data collection, converting raw data into meaningful statements involves data processing, data analysis, and data interpretation and presentation. Data reduction, or processing, mainly involves the manipulations necessary to prepare the data for analysis. This manipulation may be manual or electronic. It involves editing, categorizing open-ended questions, coding, computerization, and the preparation of tables and diagrams. Data processing is concerned with editing, coding, classifying, tabulating, charting, and diagramming research data. The essence of data processing in research is data reduction.


Data reduction involves winnowing the irrelevant from the relevant data, establishing order from chaos, and giving shape to a mass of data. Data processing in research consists of five important steps:

1. Editing of data
2. Coding of data

3. Classification of data

4. Tabulation of data

5. Data diagrams

Data Collection, Processing and Analysis

Acquiring data: Acquisition involves collecting or adding to the data holdings. There are
several methods of acquiring data:
1. Collecting new data

2. Using your own previously collected data

3. Reusing/Reprocessing someone else's data

4. Purchasing/Getting data

5. Acquiring from Internet (texts, social media, photos)


Data processing: A series of actions or steps performed on data to verify, organize, transform, integrate,
and extract data in an appropriate output form for subsequent use. Methods of processing must be
rigorously documented to ensure the utility and integrity of the data.
Data Analysis involves actions and methods performed on data that help describe facts, detect patterns,
develop explanations and test hypotheses. This includes data quality assurance, statistical data analysis,
modeling, and interpretation of results.
Results: The results of the above actions are published as a research paper. If the research data is to be made accessible, the data set must be prepared for sharing.


DATA PROCESSING
Data processing occurs when data is collected and translated into usable information. Data processing
starts with data in its raw form and converts it into a more readable format (graphs, documents, etc.),
giving it the form and context necessary to be interpreted by computers and utilized by employees
throughout an organization.
Six stages of data processing
1. Data collection
Collecting data is the first step in data processing. Data is pulled from available sources, including data
lakes and data warehouses. It is important that the data sources available are trustworthy and well-built so
the data collected (and later used as information) is of the highest possible quality.

2. Data preparation
Once the data is collected, it then enters the data preparation stage. Data preparation, often referred to as
“pre-processing,” is the stage at which raw data is cleaned up and organized for the following stage of
data processing. During preparation, raw data is diligently checked for any errors.
3. Data input
The clean data is then entered into its destination and translated into a language that the destination system can understand. Data input is the first stage in which raw data begins to take the form of usable information.
4. Processing
During this stage, the data inputted to the computer in the previous stage is actually processed for
interpretation. Processing is done using machine learning algorithms, though the process itself may
vary slightly depending on the source of data being processed (data lakes, social networks, connected
devices etc.) and its intended use (examining advertising patterns, medical diagnosis from connected
devices, determining customer needs, etc.).
5. Data output/interpretation
The output/interpretation stage is the stage at which data is finally usable by non-data scientists. It is translated, readable, and often presented in the form of graphs, videos, images, or plain text.
6. Data storage and Report Writing
The final stage of data processing is storage. After all of the data is processed, it is then stored for future
use. While some information may be put to use immediately, much of it will serve a purpose later on.
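
A minimal Python sketch of stages 2-6 above, assuming pandas is available; the file names and the column names (survey_raw.csv, age, income) are hypothetical and only for illustration.

    # A minimal sketch of stages 2-6 using pandas (hypothetical file and columns).
    import pandas as pd

    # 1-2. Data collection and preparation: load raw data and clean obvious errors.
    raw = pd.read_csv("survey_raw.csv")                      # hypothetical source file
    prepared = raw.drop_duplicates().dropna(subset=["age", "income"])

    # 3. Data input: cast fields into the types the analysis expects.
    prepared["age"] = prepared["age"].astype(int)

    # 4. Processing: derive usable information (here, average income by age group).
    prepared["age_group"] = pd.cut(prepared["age"], bins=[0, 30, 50, 100],
                                   labels=["<30", "30-50", "50+"])
    summary = prepared.groupby("age_group", observed=True)["income"].mean()

    # 5. Output/interpretation: present the result in readable form.
    print(summary)

    # 6. Storage: keep the processed data for future use.
    prepared.to_csv("survey_processed.csv", index=False)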


DATA ANALYSIS TOOLS


Data analysis tools make it easier for users to process and manipulate data, analyze the relationships and correlations between data sets, and identify patterns and trends for interpretation.
Types of Data Analysis: Techniques and Methods
There are several types of data analysis techniques; the major ones are:

· Text Analysis
· Statistical Analysis
· Diagnostic Analysis
· Predictive Analysis
· Prescriptive Analysis

Text Analysis - Text Analysis is also referred to as Data Mining. It is a method of discovering patterns in large data sets using databases or data mining tools. It is used to transform raw data into business information. Business Intelligence tools available in the market are used to take strategic business decisions. Overall, it offers a way to extract and examine data, derive patterns, and finally interpret the data.
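
A minimal sketch of one simple form of text analysis, counting word frequencies across a handful of hypothetical survey responses using only Python's standard library.

    # A minimal word-frequency sketch; the sample responses are hypothetical.
    from collections import Counter
    import re

    responses = [
        "The service was fast and friendly",
        "Friendly staff but slow service",
        "Fast delivery, great service",
    ]

    # Tokenize, lowercase, and count word occurrences across all responses.
    words = []
    for text in responses:
        words.extend(re.findall(r"[a-z]+", text.lower()))

    print(Counter(words).most_common(5))   # e.g. [('service', 3), ('fast', 2), ...]
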
Statistical Analysis - Statistical Analysis shows "What happened?" by using past data in the form of dashboards. Statistical Analysis includes the collection, analysis, interpretation, presentation, and modelling of data. It analyses a complete set of data or a sample of data. There are two categories of this type of analysis: Descriptive Analysis and Inferential Analysis.
Descriptive Analysis - analyses complete data or a sample of summarized numerical data. It shows the mean and deviation for continuous data, and the percentage and frequency for categorical data.
Inferential Analysis - analyses a sample drawn from the complete data. In this type of analysis, you can reach different conclusions from the same data by selecting different samples.
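
A minimal Python sketch contrasting descriptive and inferential analysis, assuming SciPy is available; the income figures are hypothetical.

    # Descriptive versus inferential analysis on a hypothetical sample of incomes.
    import statistics
    from scipy import stats

    incomes = [520, 610, 580, 700, 640, 590, 630, 560, 605, 575]

    # Descriptive analysis: summarize the sample itself.
    print("mean:", statistics.mean(incomes))
    print("standard deviation:", round(statistics.stdev(incomes), 2))

    # Inferential analysis: use the sample to say something about the population,
    # here a 95% confidence interval for the population mean.
    ci = stats.t.interval(0.95, df=len(incomes) - 1,
                          loc=statistics.mean(incomes), scale=stats.sem(incomes))
    print("95% CI for the mean:", ci)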

Diagnostic Analysis - Diagnostic Analysis shows "Why did it happen?" by finding the cause from the insight found in Statistical Analysis. This analysis is useful for identifying behaviour patterns in data. If a new problem arises in your business process, you can look into this analysis to find similar patterns of that problem, and there is a chance that similar prescriptions can be applied to the new problem.


Predictive Analysis - Predictive Analysis shows "what is likely to happen" by using previous data. Forecasting is just an estimate; its accuracy depends on how much detailed information you have and how deeply you dig into it.

Prescriptive Analysis - Prescriptive Analysis combines the insights from all previous analyses to determine which action to take on a current problem or decision. Based on the current situation and problems, it analyses the data so that decisions can be made.

Quantitative Data Analysis:


Some of the methods that fall under quantitative analysis are:

Mean: Also known as the average, the mean is the most basic method of analysing data, where the sum of a list of numbers is divided by the number of items on that list. It is useful in determining the overall trend of something.

Hypothesis Testing: Mainly used in business research; it assesses whether a certain theory or hypothesis about a population or data set is true.

Sample Size Determination: When researching a large population, such as the workforce of your company, a small sample is taken and analysed, and the results are treated as approximately representative of every member of the population.
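
A minimal Python sketch of the three methods above, assuming SciPy is available; the satisfaction scores, the hypothesized mean of 4.0, and the 95% confidence / ±5% margin settings are hypothetical.

    # Mean, hypothesis test, and sample size determination (hypothetical data).
    import math
    from scipy import stats

    scores = [3.8, 4.1, 3.5, 4.4, 3.9, 4.0, 3.7, 4.2]

    # Mean: sum of the values divided by the number of values.
    mean = sum(scores) / len(scores)
    print("mean:", round(mean, 2))

    # Hypothesis testing: is the population mean different from 4.0?
    t_stat, p_value = stats.ttest_1samp(scores, popmean=4.0)
    print("p-value:", round(p_value, 3))        # reject H0 if p < 0.05

    # Sample size determination for estimating a proportion
    # (95% confidence, +/- 5% margin of error, worst-case p = 0.5).
    z, margin, p = 1.96, 0.05, 0.5
    n = math.ceil((z ** 2) * p * (1 - p) / margin ** 2)
    print("required sample size:", n)            # about 385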

DATA ANALYSIS PROCESS


The data analysis process consists of gathering information using an appropriate application or tool that allows you to explore the data and find patterns in it. Based on that information, you can make decisions or reach final conclusions.
Data Analysis consists of the following phases:
1. Data Requirement Gathering

2. Data Collection

3. Data Cleaning

4. Data Analysis

5. Data Interpretation

6. Data Visualization
Data Requirement Gathering - First of all, think about why you want to do this data analysis: you need to identify the purpose or aim of the analysis and decide which type of data analysis you want to carry out. In this phase, you have to decide what to analyze and how to measure it; you have to understand why you are investigating and what measures you will use to do this analysis.
Data Collection - After requirement gathering, you will have a clear idea about what you have to measure and what your findings should be. Now it's time to collect your data based on the requirements. Once you collect your data, remember that it must be processed or organized for analysis. Since you collect data from various sources, you must keep a log of the collection date and source of the data.

Data Cleaning - Not all of the collected data may be useful or relevant to the aim of the analysis, so it should be cleaned. The collected data may contain duplicate records, white space, or errors. The data should be cleaned and made error-free. This phase must be completed before the analysis, because the quality of the cleaning determines how close the output of the analysis will be to the expected outcome.
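
A minimal data-cleaning sketch with pandas; the file name and the city and age columns are hypothetical.

    # Remove duplicates, white space, and errors from a hypothetical data set.
    import pandas as pd

    df = pd.read_csv("responses_raw.csv")                   # hypothetical collected data

    df = df.drop_duplicates()                               # remove duplicate records
    df["city"] = df["city"].str.strip()                     # remove stray white space
    df["age"] = pd.to_numeric(df["age"], errors="coerce")   # flag typing errors as NaN
    df = df.dropna(subset=["age"])                          # drop records that cannot be repaired
    df = df[df["age"].between(18, 99)]                      # drop implausible values

    df.to_csv("responses_clean.csv", index=False)
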
Data Analysis: Once the data is collected, cleaned, and processed, it is ready for Analysis. As you
manipulate data, you may find you have the exact information you need, or you might need to collect more
data. During this phase, you can use data analysis tools and software which will help you to understand,
interpret, and derive conclusions based on the requirements.
Data Interpretation - After analyzing your data, it's finally time to interpret your results. You can choose how to express or communicate your data analysis: simply in words, or with a table or chart. Then use the results of your data analysis process to decide your best course of action.

Data Visualization
Data visualizations are very common in day-to-day life; they often appear in the form of charts and graphs. In other words, data is shown graphically so that it is easier for the human brain to understand and process. Data visualization is often used to discover unknown facts and trends. By observing relationships and comparing datasets, you can find meaningful information.
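
A minimal visualization sketch using matplotlib; the yearly sales figures are hypothetical.

    # A simple bar chart of hypothetical annual sales.
    import matplotlib.pyplot as plt

    years = [2019, 2020, 2021, 2022, 2023]
    sales = [120, 95, 140, 160, 185]

    plt.bar(years, sales, color="steelblue")     # one bar per year
    plt.title("Annual Sales")
    plt.xlabel("Year")
    plt.ylabel("Sales (in thousands)")
    plt.savefig("annual_sales.png")              # or plt.show() in an interactive session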

METHODS OF DATA PROCESSING IN RESEARCH


Data processing is the procedure in which researchers frame the collected data through editing, coding, classifying, tabulating, charting, and diagramming. The purpose of data processing in research is data reduction or minimization.


This processing separates the relevant data from the irrelevant. It basically proceeds in the five steps given below.

Validation - Covers five areas:


1. Fraud

2. Screening

3. Procedure

4. Completeness

5. Courtesy
EDITING OF DATA - Editing is the first step of data processing. Editing is the process of examining the data collected through a questionnaire or any other method. It starts after all data collection is complete, in order to check the data and reform it into useful data.
1. Raw data is checked for mistakes made by either the interviewer or the respondent

2. By reviewing completed interviews from primary research, the researcher can check several
areas of concern:

3. Asking the proper questions

4. Accurate recording of answers

5. Correct screening of respondents

6. Complete and accurate recording of open-ended questions


Mildred B. Parten points out that the editor is responsible for seeing that the data are:
1. Accurate as possible,

2. Consistent with other facts secured,

3. Uniformly entered,

4. As complete as possible,

5. Acceptable for tabulation and arranged to facilitate coding and tabulation.


There are different types of editing. They are:


1. Editing for quality asks the following questions: are the data forms complete, are the data free of bias, are the recordings free of errors, are the inconsistencies in responses within limits, is there evidence of dishonesty by enumerators or interviewers, and is there any wanton manipulation of data?

2. Editing for tabulation makes certain accepted modifications to the data, or even rejects certain pieces of data, in order to facilitate tabulation. For instance, an extremely high or low data value may be ignored or bracketed within a suitable class interval.
3. Field Editing is done by the enumerator. The schedule filled up by the enumerator or the
respondent might have some abbreviated writings, illegible writings and the like. These are
rectified by the enumerator. This should be done soon after the enumeration or interview before
the loss of memory. The field editing should not extend to giving some guess data to fill up
omissions.

4. Central Editing is done by the researcher after all schedules, questionnaires or forms have been received from the enumerators or respondents. Obvious errors can be corrected. For missing data or information, the editor may substitute data or information by reviewing the information provided by other, similarly placed respondents. A definitely inappropriate answer is removed and “no answer” is entered when reasonable attempts to get the appropriate answer fail to produce results.
Editors must keep in view the following points while performing their work:
1. They should be familiar with instructions given to the interviewers and coders as well as with
the editing instructions supplied to them for the purpose,

2. While crossing out an original entry for one reason or another, they should just draw a single
line on it so that the same may remain legible,

3. They must make entries (if any) on the form in some distinctive color and that too in a
standardized form,

4. They should initial all answers which they change or supply,

5. Editor’s initials and the date of editing should be placed on each completed form or schedule.
CODING OF DATA - Coding is the process of categorizing data according to the research subject or topic and the design of the research. In the coding process, the researcher sets a code for particular items, for example male = M and female = F to indicate gender in a questionnaire without writing the full word; similarly, the researcher can use colours to highlight something, or numbers such as 1+ and 1-. This type of coding makes it easier to calculate or evaluate results during tabulation (a short sketch follows the lists below).
1. Grouping and assigning values to various responses from the survey instrument

2. Codes are numerical

3. Can be tedious if certain issues are not addressed prior to collecting the data
Four-step process to develop codes for responses:
1. Generate a list of as many potential responses as possible

2. Consolidate responses

3. Assign a numerical value as a code

4. Assign a coded value to each response
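
A minimal Python sketch of such coding with pandas; the responses and the numerical code values are hypothetical.

    # Assign numerical codes to survey responses (steps 3-4 above).
    import pandas as pd

    responses = pd.DataFrame({
        "gender": ["M", "F", "F", "M", "F"],
        "satisfaction": ["agree", "strongly agree", "disagree", "agree", "agree"],
    })

    gender_codes = {"M": 1, "F": 2}
    satisfaction_codes = {"strongly agree": 5, "agree": 4, "neutral": 3,
                          "disagree": 2, "strongly disagree": 1}

    responses["gender_code"] = responses["gender"].map(gender_codes)
    responses["satisfaction_code"] = responses["satisfaction"].map(satisfaction_codes)
    print(responses)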

CLASSIFICATION OF DATA - Classification or categorization is the process of grouping the statistical data under various understandable, homogeneous groups for the purpose of convenient interpretation. Uniformity of attributes is the basic criterion for classification, and the grouping of data is made according to similarity. Classification becomes necessary when there is diversity in the collected data, in order to allow meaningful presentation and analysis; it is meaningless for data that are already homogeneous. A good classification should have the characteristics of clarity, homogeneity, equality of scale, purposefulness and accuracy (a short sketch follows the objectives below).

The objectives of classification are given below:

1. Complex, scattered and haphazard data are organized into a concise, logical and intelligible form.

2. It becomes possible to make the characteristics of similarities and dissimilarities clear.

3. Comparative studies are possible.

4. Understanding of the significance is made easier, and thereby a good deal of human energy is saved.

5. The underlying unity amongst different items is made clear and expressed.

6. The data are so arranged that analysis and generalization become possible.
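
A minimal Python sketch of classifying raw values into homogeneous groups using pandas; the ages and the class intervals are hypothetical.

    # Group hypothetical ages into class intervals (homogeneous groups).
    import pandas as pd

    ages = pd.Series([19, 23, 31, 36, 44, 52, 58, 61, 67, 72])

    classes = pd.cut(ages, bins=[18, 30, 45, 60, 75],
                     labels=["18-30", "31-45", "46-60", "61-75"])

    print(classes.value_counts().sort_index())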


TABULATION OF DATA - Tabulation is the process of summarizing raw data and displaying it in compact form for further analysis. Therefore, preparing tables is a very important step. The researcher can tabulate by hand or digitally. The choice is made largely on the basis of the size and type of study, alternative costs, time pressures, and the availability of computers and computer programmes. If the number of questionnaires is small and their length short, hand tabulation is quite satisfactory. Tabulation is the counting of the number of observations (cases) that are classified into certain categories (a short sketch follows this list):

1. One-way tabulation: Categorization of single variables existing in a study

2. Cross-tabulation: Simultaneously treating two or more variables in the study

3. Categorizing the number of respondents who have answered two or more questions
consecutively
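
A minimal sketch of one-way tabulation and cross-tabulation with pandas; the coded survey data are hypothetical.

    # One-way tabulation and cross-tabulation of a hypothetical coded data set.
    import pandas as pd

    data = pd.DataFrame({
        "gender": ["M", "F", "F", "M", "F", "M"],
        "response": ["yes", "yes", "no", "no", "yes", "yes"],
    })

    # One-way tabulation: categorization of a single variable.
    print(data["response"].value_counts())

    # Cross-tabulation: two variables treated simultaneously.
    print(pd.crosstab(data["gender"], data["response"], margins=True))
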
Tables may be divided into: (i) frequency tables, (ii) response tables, (iii) contingency tables, (iv) univariate tables, (v) bivariate tables, (vi) statistical tables and (vii) time series tables.
Generally a research table has the following parts:
(a) Table number
(b) Title of the table
(c) Caption
(d) Stub (row heading)
(e) Body
(f) Head note
(g) Foot note

As a general rule, the following steps are necessary in the preparation of a table:
Title of table: The table should be first given a brief, simple and clear title which may express the
basis of classification.
Columns and rows: Each table should be prepared in just adequate number of columns and rows.
Captions and stubs: The columns and rows should be given simple and clear captions and stubs.
Ruling: Columns and rows should be divided by means of thin or thick rulings.
Arrangement of items: Comparable figures should be arranged side by side.
Deviations: These should be arranged in the column near the original data so that their presence
may easily be noted.
Size of columns: This should be according to the requirement.

Arrangements of items: This should be according to the problem.


Special emphasis: This can be done by writing important data in bold or special letters.
Unit of measurement: The unit should be noted below the lines.
Approximation: This should also be noted below the title.
Foot notes: These may be given below the table.
Total: Totals of each column and grand total should be in one line.
Source: Source of data must be given. For primary data, write primary data.

DATA DIAGRAMS - Diagrams are charts and graphs used to present data. They help to capture the reader's attention and to present data more effectively, and they make creative presentation of data possible. Data diagrams are classified into:
1. Charts: A chart is a diagrammatic form of data presentation. Bar charts, rectangles, squares and circles can be used to present data. Bar charts are one-dimensional, while rectangles, squares and circles are two-dimensional.

2. Graphs: The method of presenting numerical data in visual form is called a graph. A graph gives the relationship between two variables by means of either a curve or a straight line. Graphs may be divided into two categories: (1) graphs of time series and (2) graphs of frequency distribution. In graphs of time series, one of the factors is time and the other or others are the study factors. Graphs of frequency distribution show the distribution of, for example, executives by income, age, and so on.
Problems in Processing of data:
The problem concerning “Don’t know” (or DK) responses: While processing the data, the
researcher often comes across some responses that are difficult to handle. One category of such
responses may be ‘Don’t Know Response’ or simply DK response. When the DK response group
is small, it is of little significance. But when it is relatively big, it becomes a matter of major
concern in which case the question arises: Is the question which elicited DK response useless? The
answer depends on two points, viz., the respondent may actually not know the answer, or the researcher may have failed to obtain the appropriate information. In the first case the question concerned is said to be all right and the DK response is taken as a legitimate DK response. But in the second case, the DK response is more likely to reflect a failure of the questioning process.

APPLICATIONS IN THE RESEARCH PROCESS


The following order concerning various steps provides a useful procedural guideline regarding the
research process:


(1) Formulating the research problem;


(2) Extensive literature survey;


(3) Developing the hypothesis;
(4) Preparing the research design;
(5) Determining sample design;
(6) Collecting the data;
(7) Execution of the project;
(8) Analysis of data;
(9) Hypothesis testing;
(10) Generalisations and interpretation, and
(11) Preparation of the report or presentation of the results

There are five major phases of the research process. They are:
1. Conceptual phase

2. Design and planning phase

3. Data collection phase


4. Data Analysis phase and
5. Research Publication phase

ANALYSIS OF DATA
Definition
According to LeCompte and Schensul, research data analysis is a process used by researchers to reduce data to a story and interpret it to derive insights. The data analysis process helps reduce a large chunk of data into smaller fragments that make sense.
Types of data in research
Data can be in different forms; here are the primary data types.
Qualitative data: When the data presented consists of words and descriptions, we call it qualitative data. Although you can observe this data, it is subjective and harder to analyse in research, especially for comparison. Example: anything describing taste, experience, texture, or an opinion is considered qualitative data. This type of data is usually collected through focus groups, personal interviews, or open-ended questions in surveys.

Quantitative data: Any data expressed in numbers or numerical figures is called quantitative data. This type of data can be distinguished into categories, grouped, measured, calculated, or ranked. Example: responses to questions about age, rank, cost, length, weight, scores, etc. all come under this type of data. You can present such data in graphical format or charts, or apply statistical analysis methods to it. The OMS (Outcomes Measurement Systems) questionnaires in surveys are a significant source of numeric data.
Categorical data: This is data presented in groups. However, an item included in the categorical data cannot belong to more than one group. Example: a person responding to a survey by stating his or her living style, marital status, smoking habit, or drinking habit provides categorical data. A chi-square test is a standard method used to analyze this data.
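
A minimal sketch of a chi-square test with SciPy; the contingency table counts (smoking habit versus drinking habit) are hypothetical.

    # Chi-square test of independence for two categorical variables.
    from scipy.stats import chi2_contingency

    # Contingency table of observed counts.
    #                drinks  does not drink
    observed = [[30, 10],    # smokers
                [20, 40]]    # non-smokers

    chi2, p_value, dof, expected = chi2_contingency(observed)
    print("chi2 =", round(chi2, 2), "p =", round(p_value, 4))
    # A small p-value (e.g. < 0.05) suggests the two habits are not independent.
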
Data Analysis Techniques
There are different techniques for data analysis depending upon the question at hand, the type of data, and the amount of data gathered. Each focuses on strategies for taking in new data, mining insights, and drilling down into the information to transform facts and figures into decision-making parameters.
INTERPRETATION OF RESULTS
Data interpretation refers to the implementation of processes through which data is reviewed for the purpose of arriving at an informed conclusion. The interpretation of data assigns a meaning to the information analyzed and determines its significance and implications.
The task of interpretation has two major aspects:
1. The effort to establish continuity in research through linking the results of a given study with those of another, and

2. The establishment of some explanatory concepts.

In one sense, interpretation is concerned with relationships within the collected data, partially overlapping analysis. Interpretation also extends beyond the data of the study to include the results of other research, theory and hypotheses.

Yet, before any serious data interpretation inquiry can begin, it should be understood that visual presentations of data findings are irrelevant unless a sound decision is made regarding the scales of measurement. Before any serious data analysis can begin, the scale of measurement must be decided for the data, as this will have a long-term impact on data interpretation. The varying scales include:
Nominal Scale: non-numeric categories that cannot be ranked or compared quantitatively.
Variables are exclusive and exhaustive.


Ordinal Scale: categories that are exclusive and exhaustive but with a logical order.
Quality ratings and agreement ratings are examples of ordinal scales (i.e., good, very good, fair,
etc., OR agree, strongly agree, disagree, etc.).
Interval: a measurement scale where data is grouped into categories with orderly and equal
distances between the categories. There is always an arbitrary zero point.
Ratio: contains features of all three.
Once scales of measurement have been selected, it is time to select which of the two broad
interpretation processes will best suit your data needs. Let’s take a closer look at those specific
data interpretation methods and possible data interpretation problems.

How to Interpret Data?


When interpreting data, an analyst must try to discern the differences between correlation, causation and coincidence, as well as many other biases, but must also consider all the factors involved that may have led to a result. There are various data interpretation methods one can use.

The interpretation of data is designed to help people make sense of numerical data that has been collected, analyzed and presented. Having a baseline method (or methods) for interpreting data will provide your analyst teams with structure and a consistent foundation. Indeed, if several departments have different approaches to interpreting the same data while sharing the same goals, mismatched objectives can result; disparate methods lead to duplicated efforts, inconsistent solutions, and wasted energy, and inevitably waste time and money. In this part, we will look at the two main methods of interpretation of data: qualitative and quantitative analysis.
Qualitative Data Interpretation
Qualitative data analysis can be summed up in one word – categorical. With qualitative analysis,
data is not described through numerical values or patterns, but through the use of descriptive
context (i.e., text). Typically, narrative data is gathered by employing a wide variety of person-to-
person techniques. These techniques include:

Observations: detailing behavioral patterns that occur within an observation group. These
patterns could be the amount of time spent in an activity, the type of activity and the method of
communication employed.
Documents: much like how patterns of behavior can be observed, different types of
documentation resources can be coded and divided based on the type of material they contain.

Interviews: one of the best collection methods for narrative data. Enquiry responses can be
grouped by theme, topic or category. The interview approach allows for highly-focused data
segmentation.
A key difference between qualitative and quantitative analysis is clearly noticeable in the
interpretation stage. Qualitative data, as it is widely open to interpretation, must be “coded” so as
to facilitate the grouping and labelling of data into identifiable themes. As person-to-person data
collection techniques can often result in disputes pertaining to proper analysis, qualitative data
analysis is often summarized through three basic principles: notice things, collect things, think
about things.

Quantitative Data Interpretation


If quantitative data interpretation could be summed up in one word (and it really can’t) that word
would be “numerical.” There are few certainties when it comes to data analysis, but you can be
sure that if the research you are engaging in has no numbers involved, it is not quantitative
research.

Quantitative analysis refers to a set of processes by which numerical data is analyzed. More often
than not, it involves the use of statistical modeling such as standard deviation, mean and median.
Let’s quickly review the most common statistical terms:
Mean: a mean represents a numerical average for a set of responses. When dealing with a data set
(or multiple data sets), a mean will represent a central value of a specific set of numbers. It is the
sum of the values divided by the number of values within the data set. Other terms that can be
used to describe the concept are arithmetic mean, average and mathematical expectation.
Standard deviation: this is another statistical term commonly appearing in quantitative analysis.
Standard deviation reveals the distribution of the responses around the mean. It describes the
degree of consistency within the responses; together with the mean, it provides insight into data
sets.
Frequency distribution: this is a measurement gauging the rate of a response appearance within a
data set. When using a survey, for example, frequency distribution has the capability of
determining the number of times a specific ordinal scale response appears (i.e., agree, strongly
agree, disagree, etc.). Frequency distribution is extremely useful in determining the degree of consensus among data points.
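
A minimal Python sketch of the three statistical terms above; the scores and responses are hypothetical.

    # Mean, standard deviation, and frequency distribution on hypothetical data.
    import statistics
    from collections import Counter

    scores = [4, 5, 3, 4, 4, 2, 5, 4, 3, 4]

    print("mean:", statistics.mean(scores))
    print("standard deviation:", round(statistics.stdev(scores), 2))

    # Frequency distribution: how often each ordinal response appears.
    responses = ["agree", "agree", "strongly agree", "disagree", "agree", "neutral"]
    print(Counter(responses))
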
Typically, quantitative data is measured by visually presenting correlation tests between two or more variables of significance. Different processes can be used together or separately, and comparisons can be made to ultimately arrive at a conclusion. Other signature interpretation processes of quantitative data include:

Regression analysis
Cohort analysis
Predictive and prescriptive analysis
Now that we have seen how to interpret data, let's move on and ask ourselves some questions:
what are some data interpretation benefits? Why do all industries engage in data research and
analysis? These are basic questions, but they often do not receive adequate attention.

7.3. Concept of Economic Modelling


An economic model is a simplified description of reality, designed to yield hypotheses about
economic behaviour that can be tested. An important feature of an economic model is that it is
necessarily subjective in design because there are no objective measures of economic outcomes.

Economic modelling is at the heart of economic theory. Modelling provides a logical, abstract
template to help organize the analyst's thoughts. The model helps the economist logically isolate
and sort out complicated chains of cause and effect and influence between the numerous
interacting elements in an economy. Through the use of a model, the economist can experiment, at
least logically, producing different scenarios, attempting to evaluate the effect of alternative policy
options, or weighing the logical integrity of arguments presented in prose. Certain types of models
are extremely useful for presenting visually the essence of economic arguments.

Types of Models

There are various types of models used in economic analysis. Some of these models are: visual
models, mathematical models, empirical models, and simulation models. Their primary features
and differences are discussed below.

Visual Models

Visual models are simply pictures of an abstract economy; graphs with lines and curves that tell an
economic story. They are primarily used in textbooks and teaching, and the reader who has had
any exposure to economics at all has probably seen dozens, if not hundreds of them. Some visual
models are merely diagrammatic, such as those which show the flow of income through the
economy from one sector to another.

Mathematical Models

The most formal and abstract of the economic models are the purely mathematical models. These
are systems of simultaneous equations with an equal or greater number of economic variables.
Some of these models can be quite large. Even the smallest will have five or six equations and as
many unknown variables. The manipulation and use of these models require a good knowledge of
algebra or calculus.
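
A minimal sketch of such a system of simultaneous equations, solved with NumPy; this is a deliberately tiny two-equation example, and the supply and demand equations are hypothetical.

    # Hypothetical linear model:
    #   demand:  Qd = 100 - 2P
    #   supply:  Qs = -20 + 4P
    # Equilibrium requires Qd = Qs = Q, i.e. solve the linear system
    #   Q + 2P = 100
    #   Q - 4P = -20
    import numpy as np

    A = np.array([[1.0, 2.0],
                  [1.0, -4.0]])
    b = np.array([100.0, -20.0])

    Q, P = np.linalg.solve(A, b)
    print(f"equilibrium quantity = {Q:.1f}, price = {P:.1f}")  # Q = 60, P = 20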

Empirical Models

Empirical models are mathematical models designed to be used with data. The fundamental model
is mathematical, exactly as described above. With an empirical model, however, data is gathered
for the variables, and using accepted statistical techniques, the data are used to provide estimates
of the model's values.

Simulation Models

Simulation models, which must be used with computers, embody the very best features of
mathematical models without requiring that the user be proficient in mathematics. The models are
fundamentally mathematical (the equations of the model are programmed in a programming
language like Pascal or C++) but the mathematical complexity is transparent to the user. The
simulation model usually starts with initial or "default" values assigned by the program or the user,
then certain variables are changed or initialized, then a computer simulation is done. The
simulation, of course, is a solution of the model's equations.

Static and Dynamic Models

Most of the models used in economics are comparative statics models. Some of the more
sophisticated models in macroeconomics and business cycle analysis are dynamic models. There
are some fundamental differences between these models and how they are used.

In a comparative statics economic model, each equilibrium solution is like a snapshot of the
economy at one point in time.

Dynamic Models

Dynamic models, in contrast, directly incorporate time into their structure. This is usually done in
economic modelling by using mathematical systems of difference or differential equations.

Dynamic models, when they can be used, sometimes better represent the subtleties of business cycles, because lags in behavioural response and timing strongly shape the character of a cycle. For example, if there is a delay between the time income is received and when it is spent, a model that can capture the delay is likely to have higher integrity than a model that cannot, as in the sketch below.
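
A minimal sketch of such a dynamic model as a one-period-lag difference equation in Python; all parameter values are hypothetical.

    # Hypothetical dynamic model with a one-period consumption lag:
    #   C[t] = 20 + 0.8 * Y[t-1]
    #   Y[t] = C[t] + I          (investment I held constant)
    I = 30.0          # autonomous investment
    Y = [100.0]       # initial income

    for t in range(1, 11):
        C = 20 + 0.8 * Y[t - 1]   # consumption lags income by one period
        Y.append(C + I)

    for t, y in enumerate(Y):
        print(f"period {t}: income = {y:.1f}")
    # Income converges toward the steady state Y* = (20 + 30) / (1 - 0.8) = 250.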

Expectations-Enhanced Models

Economic models often incorporate economic expectations, such as inflationary expectations.


Such models are called expectations-enhanced models. Generally, expectations-enhanced models
include one or more variables based upon economic expectations about future values. Expectations
matter because they have such a profound impact upon economic behaviour.

7.4. Testing hypothesis: Falsification as a scientific approach


Testing hypothesis

A hypothesis is a proposition (a working assumption) about the relationship between two or more
variables. It is a statement of specific expectations or intelligent guesses about the population
involved. Hypotheses are derived from the observations and relationships accepted as facts in the
statement of the problem. The main purpose is to give an indication of relationships between
variables. Hypotheses are important bridges between empirical inquiry and theory.

The basis for correct formulation of hypotheses is the knowledge of the researcher. The broader
the experience of the researcher in relating theory to applied problems, the more efficient he will
be in formulating appropriate hypotheses. Such assumptions or propositions, which may or may
not be true, are called statistical hypotheses. In many instances, we formulate a statistical
hypothesis for the sole purpose of rejecting or nullifying it; such hypotheses are called null hypotheses and are denoted by H0.

Following from the above, two kinds of hypotheses are available:

The Null Hypothesis: a non-directional statement of the condition between variables. It states that
there is no significant difference between two parameters. The null hypothesis asserts that
observed differences or relationships merely result from chance errors inherent in the sampling
process.

The Alternative/Research Hypothesis: a directional statement of a relationship between variables. It states that there is a significant difference between two parameters.


Hypotheses are usually stated in the null form or in the alternative form. The logic behind this approach is very appealing: the verification of one consequence of a positive hypothesis does not prove it to be true, and observed consequences that are consistent with a positive hypothesis may also be compatible with equally plausible but competing hypotheses. The experimenter uses a statistical test to discount chance or sampling error as an explanation for the difference. If the difference between means is not great enough to reject the null hypothesis, the researcher accepts it and concludes that there is no significant difference, i.e., that chance or sampling error probably accounted for the apparent difference. Procedures that enable us to decide whether to accept or reject hypotheses, or to determine whether observed samples differ significantly from expected results, are called tests of hypotheses or tests of significance.

Type I and Type II errors

If we reject a hypothesis when it should be accepted, we say that a Type I error has been made. If, on the other hand, we accept a hypothesis when it should be rejected, we say that a Type II error has been made. In either case a wrong decision or error in judgment has occurred.

Level of significance:

In testing a given hypothesis, the maximum probability with which we would be willing to risk a Type I error is called the level of significance. In practice, a level of 0.05 or 0.01 is customary. If we choose, say, the 0.05 or 5% level of significance, it means that there are 5 chances in 100 that we would reject the hypothesis when it should be accepted, i.e., we are about 95% confident that we have made the right decision. In such a case we say that the hypothesis has been rejected at the 0.05 level of significance.
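
A minimal Python sketch of testing a null hypothesis at the 0.05 level of significance with SciPy; the two groups of scores are hypothetical.

    # H0: the two groups have the same mean; H1: the means differ.
    from scipy import stats

    group_a = [72, 75, 70, 78, 74, 69, 77, 73]
    group_b = [80, 79, 83, 76, 81, 84, 78, 82]

    alpha = 0.05                                   # maximum acceptable Type I error risk
    t_stat, p_value = stats.ttest_ind(group_a, group_b)

    if p_value < alpha:
        print(f"p = {p_value:.4f} < {alpha}: reject H0 at the 0.05 level")
    else:
        print(f"p = {p_value:.4f} >= {alpha}: fail to reject H0")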

Characteristics of Hypotheses

Hypotheses must:

· Be specific.
· Be conceptually clear in terms of common definitions.
· Be testable by available techniques.
· Be related to a body of theory.
· Be capable of verification or rejection within the limits of the research resources.
· Be stated to provide direction for the research.
· Be formulated as causal relationships with If-then implication.


· As a group be adequate and efficient in suggesting means to one or more meaningful


solutions to the problem.

Theory Vs hypothesis

A hypothesis is a proposition derived from a theory by deductive inference and which permits a
test of empirical confirmation. Thus:

· A theory is a generalized statement or proposition which explains or interrelates a set of other, more specific propositions consisting of defined and interrelated constructs.

· A theory sets out the interrelations among a set of variables and in so doing presents a systematic
view of the phenomenon described by the variables.

· A theory explains phenomena. A theoretical explanation implies prediction, i.e., a theory establishes cause-effect relationships between variables with the purpose of explaining and predicting phenomena.

Falsification as a scientific approach

Testing of the null hypothesis is a fundamental aspect of the scientific method and has its basis in
the falsification theory of Karl Popper. Null hypothesis testing makes use of deductive reasoning
to ensure that the truth of conclusions is irrefutable. In contrast, attempting to demonstrate the new
facts on the basis of testing the experimental or research hypothesis makes use of inductive
reasoning and is prone to the problem of the Uniformity of Nature assumption described by David
Hume in the eighteenth century.

Despite this issue and the well documented solution provided by Popper's falsification theory, the
majority of publications are still written such that they suggest the research hypothesis is being
tested. This is contrary to accepted scientific convention and possibly highlights a poor
understanding of the application of conventional significance-based data analysis approaches. Our
work should remain driven by conjecture and attempted falsification such that it is always the null
hypothesis that is tested.

A Type I error occurs when the null hypothesis is rejected although it is true.

A Type II error occurs when H0 is not rejected although H0 is false.

This is very similar in spirit to diagnostic test examples: a false positive test corresponds to a Type I error, and a false negative test corresponds to a Type II error.

By: Aleka J.
