
Quantitative Methods for Economic Analysis I

Short Answer
Theory Questions and Answers
Module I
• Equation: An equation is a mathematical statement asserting
that two expressions are equal. It typically contains variables,
constants, and mathematical operations.
• Quadratic Equation: A quadratic equation is a second-degree
polynomial equation of the form ax² + bx + c = 0, where x is
the variable and a, b, c are constants with a ≠ 0. The solutions to the
quadratic equation are given by the quadratic formula
x = (−b ± √(b² − 4ac)) / 2a.
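As a quick numerical check of the quadratic formula, here is a minimal Python sketch (the coefficients chosen are purely illustrative):

```python
import math

def solve_quadratic(a, b, c):
    """Return the real roots of ax^2 + bx + c = 0 via the quadratic formula."""
    disc = b**2 - 4*a*c          # the discriminant decides how many real roots exist
    if disc < 0:
        return []                # no real roots
    root = math.sqrt(disc)
    return sorted({(-b + root) / (2*a), (-b - root) / (2*a)})

# x^2 - 3x + 2 = 0 factors as (x - 1)(x - 2), so the roots are 1 and 2
print(solve_quadratic(1, -3, 2))   # -> [1.0, 2.0]
```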
• Function: A function is a mathematical relationship between
two sets of values, where each input value (independent
variable) corresponds to exactly one output value (dependent
variable). It is often denoted as f(x), where x is the input
variable. Example: f(x)=2x+1
• Polynomial Function: A polynomial function is a function that
is defined by a polynomial expression. A polynomial is the sum
of one or more terms, each term being the product of a constant
coefficient and an independent variable raised to a non-negative
integer exponent.
Example: 3x² + 2x + 10
• Variable and Constant:
Variable: A variable is a symbol (such as x) that represents an
unknown or arbitrary number. It is used in mathematical
expressions and equations.
Constant: A constant is a fixed value or number that does not
change. It is a term without a variable component.
• Linear and Non-Linear Equation
Linear Equation: A linear equation is an equation of the first
degree, meaning the highest power of the variable is 1. It has the
form ax+b=0.
Non-Linear Equation: A non-linear equation is an equation with
a degree higher than 1. For example, x² + 5x + 6 = 0 is a non-linear equation.
• Linear and Non-Linear Function: Similar to equations, a
linear function is a function of the first degree, and a non-linear
function has a degree higher than 1.
Example: f(x) = 2x + 1 is linear; f(x) = 3x² + 2x is non-linear.
• Quadratic Function: A quadratic function is a type of
polynomial function of degree 2. Example: f(x) = 3x² + 2x + 3.
• Explicit and Implicit Function:
An explicit function is one where the dependent variable is
expressed solely in terms of the independent variable.
An implicit function is one where the relationship between
variables is not directly expressed.
Example:
y=2x+1 is explicit;
2x+2y =1 is implicit
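The "exactly one output per input" idea translates directly into code; a small Python sketch using the example f(x) = 2x + 1 from above:

```python
def f(x):
    """Explicit function: y is expressed directly in terms of x as y = 2x + 1."""
    return 2*x + 1

# each input value maps to exactly one output value
assert f(0) == 1
assert f(3) == 7
print(f(3))   # -> 7
```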
Module II
MATRICES
• A matrix is defined as a rectangular array of
numbers, parameters or variables.
• Each of which has a carefully ordered place within
the matrix. The members of the array are referred
to as “elements” of the matrix and are usually
enclosed in brackets,

A = [ a11  a12  a13 ]
    [ a21  a22  a23 ]
    [ a31  a32  a33 ]
Order of the matrix
• The members in the horizontal line are called rows and
members in the vertical line are called columns.
• The number of rows and the number of columns together
define the dimension or order of the matrix.
• If a matrix contains 'm' rows and 'n' columns, it is said to
be of dimension m × n (read as 'm by n').
• The row number precedes the column number. In that
sense the above matrix is of dimension 3 × 3.
Square Matrix
• A matrix with equal number of rows and columns is
called a square matrix. Thus, it is a special case where
m=n.
Row matrix or Row Vector
• A matrix having only one row is called a row vector or
row matrix. The row vector will have a dimension of
1 × n.

A = [ 1  3  4 ]
Column matrix or Column Vector
• A matrix having only one column is called a column
vector or column matrix. The column vector will
have a dimension of m × 1.

A = [ 1 ]
    [ 5 ]
    [ 2 ]
Diagonal Matrix
• A square matrix in which all elements except those on
the main diagonal are zero is called a diagonal matrix.
Identity matrix or Unit Matrix
• A diagonal matrix in which each of the diagonal elements
is unity is said to be unit matrix and denoted by I.
• The identity matrix is similar to the number one in algebra
since multiplication of a matrix by an identity matrix leaves
the original matrix unchanged. That is, AI = I A =A
Null Matrix or Zero Matrix
• A matrix in which every element is zero is called null
matrix or zero matrix.
• It is not necessarily square. Addition or subtraction of the
null matrix leaves the original matrix unchanged and
multiplication by a null matrix produces a null matrix.
Triangular Matrix
• If every element above or below the leading diagonal is
zero, the matrix is called a triangular matrix.
Triangular matrix may be upper triangular or lower
triangular.
• In the upper triangular matrix, all elements below the
leading diagonal are zero, e.g.

[ 1  2  3 ]
[ 0  4  5 ]
[ 0  0  6 ]

• In the lower triangular matrix, all elements above the
leading diagonal are zero, e.g.

[ 1  0  0 ]
[ 2  3  0 ]
[ 4  5  6 ]
Idempotent Matrix
• A square matrix A is said to be idempotent if A² = A.
Transpose of a Matrix
• Transpose of a matrix is obtained by interchanging
rows into columns or columns to rows.
• The transpose of the matrix is denoted by using the
letter “T” in the superscript of the given matrix.
• For example, if “A” is the given matrix, then the
transpose of the matrix is represented by A’ or AT.
Properties of Transpose of a Matrix

• (A + B)ᵀ = Aᵀ + Bᵀ
• (Aᵀ)ᵀ = A
• (kA)ᵀ = kAᵀ
• (AB)ᵀ = BᵀAᵀ (note the reversal of order)
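The reversal rule for the transpose of a product can be verified numerically; a small pure-Python sketch (the matrices A and B are arbitrary illustrative values):

```python
def transpose(M):
    """Swap rows and columns of a matrix given as a list of rows."""
    return [list(col) for col in zip(*M)]

def matmul(A, B):
    """Plain triple-loop matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

# The reversal rule: (AB)^T equals B^T A^T, not A^T B^T
assert transpose(matmul(A, B)) == matmul(transpose(B), transpose(A))
print(transpose(A))   # -> [[1, 3], [2, 4]]
```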
Symmetric Matrix & Skew-Symmetric Matrix

• A symmetric matrix and a skew-symmetric matrix are both square matrices.
• The difference between them is that a symmetric
matrix is equal to its transpose, whereas a skew-
symmetric matrix is one whose transpose is
equal to its negative.
• If A is a symmetric matrix, then A = Aᵀ, and if A is a
skew-symmetric matrix, then Aᵀ = −A.
Matrix Addition
Properties
• Matrix addition is commutative A+B=B+A
• Matrix addition is associative (A+B)+C=A+(B+C)
• Additive identity property. A+O = A
• Additive inverse property. When we add a unique
matrix –A to A, A+ (-A) = O.
• Closure Property of addition A + B = C, where C
is a matrix of the same dimensions as A and B.
Matrix Subtraction
Properties
• The number of rows and columns should be the same
for the subtraction of matrices.
• The subtraction of matrices is not commutative, that
is, A - B ≠ B - A
• The subtraction of matrices is not associative, that is,
(A - B) - C ≠ A - (B - C)
• The matrix subtraction from itself results in a null
matrix, that is, A - A = O.
• Subtraction of matrices is the addition of the negative
of a matrix to another matrix, that is, A - B = A + (-B).
Matrix Multiplication
Properties
(a) Matrix multiplication is not commutative in general
AB ≠ BA.
(b) Matrix multiplication is distributive over matrix addition
(A + B)C = AC + BC
(c) Matrix multiplication is always associative
(AB)C = A(BC)
Determinants
• The determinant is a single number or scalar associated
with a square matrix.
• Determinants are defined only for square matrices.
• The determinant, denoted |A|, is a uniquely defined
number or scalar associated with that matrix.
• If A = [a11] is a 1 × 1 matrix, then the determinant of A,
i.e. |A|, is the number a11.
Singular and Non-singular Matrix
• If the determinant is equal to zero, the
determinant is said to vanish and the matrix is
termed as singular matrix. That is, a singular
matrix is one in which there exists linear
dependence between at least two rows or
columns.
• If ∣A∣≠0, matrix A is non-singular and all its rows
and columns are linearly independent.
MINOR

• Every element of a square matrix has a minor.
• It is the value of the determinant formed with the
elements obtained when the row and the column in
which the element lies are deleted.
• Thus, a minor, denoted M_ij, is the determinant of the sub-
matrix formed by deleting the ith row and jth column
of the matrix.
COFACTOR

• A cofactor (C_ij) is a minor with a prescribed sign.
• The cofactor of an element is obtained by multiplying
the minor of the element by (−1)^(i+j), where i is the row
number and j is the column number.
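Minors and cofactors can be computed directly from the definitions; a pure-Python sketch (the matrix A and the 0-based indices are illustrative choices):

```python
def minor(M, i, j):
    """Determinant of the 2x2 submatrix left after deleting row i and column j (0-based)."""
    sub = [row[:j] + row[j+1:] for k, row in enumerate(M) if k != i]
    (a, b), (c, d) = sub
    return a*d - b*c

def cofactor(M, i, j):
    """Cofactor = (-1)^(i+j) times the minor."""
    return (-1)**(i + j) * minor(M, i, j)

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]

# minor of a11: det of [[5, 6], [8, 10]] = 50 - 48 = 2
# cofactor of a12: sign (-1)^(0+1) times det of [[4, 6], [7, 10]] = -(40 - 42) = 2
print(minor(A, 0, 0), cofactor(A, 0, 1))   # -> 2 2
```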
ADJOINT OF A MATRIX
• The adjoint of a matrix is the transpose of its cofactor matrix;
that is, the adjoint of a given square matrix is the transpose
of the matrix formed by the cofactors of the elements of the
given square matrix, taken in order.
Inverse of the Matrix
• If A is a non-singular n × n square matrix, then there exists an
n × n matrix A⁻¹, called the inverse matrix of A.
• AA⁻¹ = A⁻¹A = I, where I is the identity matrix.
• A⁻¹ = (Adj A) / |A|, provided |A| ≠ 0

• The inverse can be found only for a square matrix, and the
square matrix has to be non-singular, i.e., its determinant has to be
non-zero.
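For the 2 × 2 case, the adjoint-over-determinant formula is short enough to sketch directly in Python (the matrices are illustrative):

```python
def inverse_2x2(M):
    """Invert a 2x2 matrix via Adj(A)/|A|; returns None when A is singular."""
    (a, b), (c, d) = M
    det = a*d - b*c               # |A| = ad - bc for a 2x2 matrix
    if det == 0:
        return None               # singular: no inverse exists
    # adjoint of a 2x2: swap the diagonal entries, negate the off-diagonal ones
    return [[d/det, -b/det], [-c/det, a/det]]

A = [[4, 7], [2, 6]]                  # det = 24 - 14 = 10, so A is non-singular
print(inverse_2x2(A))                 # -> [[0.6, -0.7], [-0.2, 0.4]]
print(inverse_2x2([[1, 2], [2, 4]]))  # rows linearly dependent -> None
```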
RANK OF MATRIX
• The maximum number of linearly independent
columns (or rows) of a matrix is called the rank of
the matrix. The rank of a matrix cannot exceed the
number of its rows or columns.
• If we consider a square matrix, the columns (rows)
are linearly independent only if the matrix is
nonsingular.
• In other words, the rank of any nonsingular matrix
of order m is m. The rank of a matrix A is denoted
by ρ(A).
• The rank of a null matrix is zero. A null matrix has no non-
zero rows or columns. So, there are no independent rows or
columns. Hence the rank of a null matrix is zero.
• For a 2 × 2 matrix, the possible ranks are 2, 1 and 0.
• For a 3 × 3 matrix, the possible ranks are 3, 2, 1 and 0.
Trace of a Matrix

• The trace of a matrix is the sum of the elements on its main
diagonal (the diagonal from the top left to the bottom right). It is
denoted by the symbol "Tr".
Module III
• Univariate analysis involves the examination and interpretation of data with
respect to a single variable. It is a fundamental step in statistical analysis,
helping to understand the distribution and characteristics of individual
variables.
• Frequency Tables: A frequency table is a tabular representation of the
distribution of a single variable. It shows the frequency (count) of each
distinct value or group of values in the dataset.
Example:

Value Frequency
10 5
15 8
20 12
Representation of data
Frequency Polygon: A frequency polygon is a graph that displays the distribution
of a dataset. It is created by plotting the midpoints of each interval on the x-axis
against their corresponding frequencies on the y-axis and connecting these points
with straight line segments.
Ogives (Cumulative Frequency Curves):
An ogive is a graph that represents the cumulative frequencies of a dataset. It is
constructed by plotting the cumulative frequency of each class interval against the
upper class boundary on the x-axis.

Pie Diagram: A pie diagram (or pie chart) is a circular statistical graphic that is
divided into slices to illustrate numerical proportions. Each slice represents the
proportion of the whole dataset that corresponds to a specific category or class. The
size of each slice is proportional to the frequency or percentage of that category.

Arithmetic Mean: The arithmetic mean, often simply referred to as the mean, is a
measure of central tendency that represents the average of a set of values. It is
calculated by adding up all the values in a dataset and then dividing the sum by the
number of values.

Individual Series: AM = ΣX / n
Discrete Series: AM = ΣfX / N
Continuous Series: AM = ΣfX / N (where X is the class mid-point)

Mathematical Properties of AM

1. The sum of deviations of the items from their arithmetic mean is always
zero, i.e. Σ(x − x̄) = 0.

2. The sum of the squared deviations of the items from the Arithmetic Mean
(A.M.) is minimum; it is less than the sum of the squared deviations of
the items from any other value.

3. Combined Mean = (N₁x̄₁ + N₂x̄₂) / (N₁ + N₂)
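Both properties are easy to verify numerically with Python's standard statistics module (the data values are illustrative):

```python
from statistics import mean

x = [4, 8, 15, 16, 23, 42]
xbar = mean(x)

# Property 1: deviations from the arithmetic mean sum to zero
assert abs(sum(v - xbar for v in x)) < 1e-9

# Combined mean of two groups: (N1*x1bar + N2*x2bar) / (N1 + N2)
g1, g2 = [10, 20, 30], [40, 50]
combined = (len(g1)*mean(g1) + len(g2)*mean(g2)) / (len(g1) + len(g2))
assert combined == mean(g1 + g2)   # same as the mean of the pooled data
print(combined)   # -> 30.0
```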

MEDIAN

• Median is the central value of the variable that divides the series into two equal
parts, in such a way that half of the items lie above this value and the
remaining half lie below it. Median is defined as the value of the
middle item (or the mean of the values of the two middle items) when the
data are arranged in an ascending or descending order of magnitude.

• Median = size of the (N + 1)/2 th item.

Mode
In statistics, the mode is the value that occurs most often in a given set.
We can also say that the value or number in a data set which has a high
frequency, or appears most frequently, is called the mode or modal value. It is
one of the three measures of central tendency, along with the mean and median.
For example, the mode of the set {3, 7, 8, 8, 9}, is 8. Therefore, for a finite
number of observations, we can easily find the mode. A set of values may
have one mode or more than one mode or no mode at all.

Mode = 3 Median − 2 Mean (empirical relationship for moderately skewed distributions)

HARMONIC MEAN

• The harmonic mean is a measure of central tendency that is calculated as the
reciprocal of the arithmetic mean of the reciprocals of a set of values.
• Individual Series: HM = n / Σ(1/x)

• Discrete and Continuous Series: HM = N / Σ(f/x)

GEOMETRIC MEAN

• The geometric mean is a measure of central tendency that is calculated by
multiplying together a set of values and then taking the nth root of the
product, where n is the number of values in the dataset.
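Python's standard statistics module provides both means directly (the data values are illustrative):

```python
from statistics import harmonic_mean, geometric_mean

speeds = [40, 60]   # e.g. km/h over two equal-length legs of a trip

# HM = n / sum(1/x): the appropriate average for rates over equal distances
print(harmonic_mean(speeds))            # -> 48.0

# GM = nth root of the product of the values: sqrt(2 * 8) = 4
print(round(geometric_mean([2, 8]), 6))  # -> 4.0
```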

MEASURES OF DISPERSION
• Dispersion is the state of being dispersed or spread. Statistical dispersion
means the extent to which numerical data are likely to vary about an average
value. In other words, dispersion helps to understand the distribution of the
data.

• The measures of dispersion help to interpret the variability of data, i.e. to know
how homogeneous or heterogeneous the data are. In simple terms, they
show how squeezed or scattered the variable is.

• Absolute Measure of Dispersion: The absolute measure of dispersion is a
statistical term that quantifies the extent to which individual data points in a
dataset deviate from the central tendency, typically the mean. It provides a
measure of the spread or variability within a set of data. Absolute measures of
dispersion are expressed in the same units as the original data.

a) Range:
b) Standard Deviation
c) Quartile Deviation
d) Mean Deviation

• Relative Measure of Dispersion: Relative measures of dispersion are
statistical measures that express the degree of variability in a dataset relative
to a reference point, often the mean or average. Unlike absolute measures,
relative measures of dispersion are dimensionless, making them useful for
comparing the variability of datasets with different units or scales.

1. Co-efficient of Range
2. Co-efficient of Variation
3. Co-efficient of Quartile Deviation
4. Co-efficient of Mean Deviation

RANGE

• It is the difference between the highest and the lowest value.

• Range = H − L

QUARTILE DEVIATION

• The Quartile Deviation can be defined mathematically as half of the
difference between the upper and lower quartile: QD = (Q3 − Q1) / 2.
Here quartile deviation is represented as QD; Q3 denotes the upper
quartile and Q1 indicates the lower quartile.

• Quartile Deviation is also known as the Semi Interquartile range.

Mean Deviation

▪ MD is a measure of dispersion, also known as the average deviation.
Mean deviation can be computed from the mean or the median.
▪ Mean deviation is the arithmetic average of the absolute deviations of the
items from a measure of central tendency, which may be the mean or
the median.
▪ Symbolically, mean deviation is defined as:

MD = Σ|D| / n, where D = (x − Mean) or (x − Median)

Coefficient of MD = MD (about Mean or Median) / (Mean or Median)
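A sketch of the MD computation about the mean, using the standard statistics module (the data are illustrative):

```python
from statistics import mean

data = [2, 4, 6, 8, 10]
xbar = mean(data)                        # = 6

# MD = sum of absolute deviations from the mean, divided by n
md = mean(abs(v - xbar) for v in data)   # |dev| = 4, 2, 0, 2, 4 -> MD = 12/5
coeff_md = md / xbar                     # coefficient of MD about the mean

print(md, round(coeff_md, 2))            # -> 2.4 0.4
```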

STANDARD DEVIATION

• Introduced by Karl Pearson in 1893

• While calculating SD we take the deviations of individual observations from their
AM and square each of them. The sum of the squares is divided by the total
number of observations, and the square root of this quotient is the standard
deviation.

• It is always calculated from the arithmetic mean; the median and mode are not
used.

SD = σ = √( Σ(x − x̄)² / n )

▪ Variance is the measure of how notably a collection of data is spread out. If
all the data values are identical, the variance is zero. All
non-zero variances are positive.
▪ A small variance indicates that the data points are close to the mean and to
each other, whereas data points that are highly spread out from the mean
and from one another indicate a high variance.

▪ In short, the variance is defined as the average of the squared distance from
each point to the mean.

Variance = σ²

Coefficient of Variation: The coefficient of variation is a widely used
relative measure of dispersion. It is expressed as a percentage and is
calculated as the ratio of the standard deviation to the mean, multiplied by
100:

CV = (SD / Mean) × 100
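The three measures fit together as sketched below with the standard statistics module (pstdev/pvariance are the population versions; the data are illustrative):

```python
from statistics import pstdev, pvariance, mean

data = [2, 4, 4, 4, 5, 5, 7, 9]

sd = pstdev(data)              # population SD: sqrt of the mean squared deviation
var = pvariance(data)          # variance is just the SD squared
cv = 100 * sd / mean(data)     # coefficient of variation, in percent

assert abs(var - sd**2) < 1e-9
print(sd, cv)                  # -> 2.0 40.0
```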

Lorenz Curve

• Developed by Max O. Lorenz. A graphic method of studying variation.
• Mostly used for the study of the distribution of income and wealth.
• Used to study the degree of inequality in the distribution of income and wealth
between countries or between different periods of time.
Gini Index

• The Gini index, or Gini coefficient, is a measure of the distribution of income
across a population, developed by the Italian statistician Corrado Gini in
1912.
• It is often used as a gauge of economic inequality, measuring income
distribution or, less commonly, wealth distribution among a population.
• The coefficient ranges from 0 (or 0%) to 1 (or 100%), with 0 representing
perfect equality and 1 representing perfect inequality. Values over 1 are
theoretically possible due to negative income or wealth.
SKEWNESS

• Skewness means lack of symmetry in frequency distribution.

• It gives us idea about the shape of the frequency curve.

• When a distribution is not symmetrical it is called a skewed distribution.

• Skewness tells us about the asymmetry of the frequency distribution.

• The skewness for a normal distribution is zero, and any symmetric data should
have a skewness near zero.

1. Symmetrical distribution

• In a symmetrical distribution Skewness is not present.

• In this situation mean, median and mode are equal.


Positively Skewed Distribution: A positively skewed distribution, also known as a right-skewed
distribution, is a type of probability distribution where the tail on the right side (higher values) is longer or
fatter than the left side (lower values).

• Mean > Median > Mode

• In this case the distribution is positively skewed.

• The tail is on the right side.

Negatively Skewed Distribution: A negatively skewed distribution, also known as a left-skewed
distribution, is a type of probability distribution where the tail on the left side (lower values) is longer or fatter
than the right side (higher values).

• Mean < Median < Mode


MEASURES OF KURTOSIS

• Like average, dispersion and skewness, kurtosis is a fourth measure used to
describe a frequency distribution.

• Kurtosis gives idea about the shape of a frequency distribution.

• It refers to degree of flatness or peakedness of a frequency curve.

• Kurtosis indicates whether a frequency distribution is flat, normal or peaked
in shape.

1.Lepto- kurtic

• It is a curve having a higher peak than the normal curve.

• There is a high concentration of items near the centre.

• A distribution with positive excess kurtosis is called leptokurtic.

• Examples of leptokurtic distributions include the Student's t-distribution,
exponential distribution, Poisson distribution and the logistic distribution.

• Such distributions are sometimes termed super-Gaussian.


2.Platy-kurtic

• It is a curve having a lower (flatter) peak than the normal curve.

• There is less concentration of items near the centre.

• A distribution with negative excess kurtosis is called platykurtic.

• In terms of shape, a platykurtic distribution has thinner tails.

3.Meso-kurtic

• It is a curve having a normal peak, like the normal curve.

• There is an even distribution of items around the central value.

• Distributions with zero excess kurtosis are called mesokurtic.

• The most prominent example of a mesokurtic distribution is the normal
distribution.
Module IV
• CORRELATION: Correlation is a linear association between two
random variables. Correlation analysis shows us how to determine both the
nature and strength of the relationship between two variables. The correlation
coefficient lies between +1 and −1. Correlation is a statistical technique which
tells us whether two variables are related. For example, consider the variables
family income and family expenditure.

Positive and Negative Correlation

• A positive correlation is a relationship between two variables in which both
variables move in the same direction: one variable increases as the other
increases, or one decreases as the other decreases. An example of positive
correlation would be the Law of Supply.
• A negative correlation is a relationship between two variables in which an
increase in one variable is associated with a decrease in the other. An
example of negative correlation would be the Law of Demand.

Simple, Partial and Multiple Correlation

• Simple Correlation: In simple correlation, we study the relationship between
two variables, for example income and expenditure, or price and demand.
• Partial Correlation: If in a given problem, more than two variables are
involved and of these variables we study the relationship between only two
variables keeping the other variables constant, correlation is said to be
partial. It is so because the effect of other variables is assumed to be constant
• Multiple Correlation: Under multiple correlation, the relationship among
three or more variables is studied jointly. For instance, the relationship between
rainfall, use of fertilizer and manure, and per-hectare productivity of wheat.

Linear and Non Linear Correlation

• Non-Linear (Curvilinear) Correlation: Correlation is said to be non-linear if
the ratio of change is not constant. In other words, when all the points on the
scatter diagram tend to lie near a smooth curve, the correlation is said to be
non-linear (curvilinear).
• Linear Correlation: Correlation is said to be linear if the ratio of change is
constant. In other words, when all the points on the scatter diagram tend to
lie near a line which looks like a straight line, the correlation is said to be
linear.
• COEFFICIENT OF CORRELATION: Correlation is measured by what is
called coefficient of correlation (r). A correlation coefficient is a statistical
measure of the degree to which changes to the value of one variable predict
change to the value of another. Correlation coefficients are expressed as
values between +1 and -1. Its numerical value gives us an indication of the
strength of relationship.

• r > 0: positive relationship
• r < 0: negative relationship
• r = 0: no relationship
• r = +1.0: perfect positive correlation
• r = −1.0: perfect negative correlation
• SCATTER DIAGRAM: Scatter Diagram (also called scatter plot, X–Y
graph) is a graph that shows the relationship between two quantitative
variables measured on the same individual. Each individual in the data set is
represented by a point in the scatter diagram. The predictor variable is plotted
on the horizontal axis and the response variable is plotted on the vertical axis.
Do not connect the points when drawing a scatter diagram.
• CORRELATION GRAPH: Under this method, separate
curves are drawn for the X variable and Y variable on the same graph paper.
The values of the variables are taken as ordinates of the points plotted. From
the direction and closeness of the two curves we can infer whether the
variables are related. If both curves move in the same direction
(upward or downward), correlation is said to be positive; if the curves
move in opposite directions, correlation is said to be negative.
• Karl Pearson’s Coefficient of Correlation (Pearson product-moment
correlation coefficient): Karl Pearson’s Product-Moment Correlation
Coefficient or simply Pearson’s Correlation Coefficient for short, is one of the
important methods used in Statistics to measure Correlation between two
variables. When measured in a population the Pearson Product Moment
correlation is designated by the Greek letter rho. When computed in a sample,
it is designated by the letter "r" and is sometimes called "Pearson's r."
Pearson's correlation reflects the degree of linear relationship between two
variables.

r = Σ(x − x̄)(y − ȳ) / (n σx σy)

The above formula can also be written as:

r = (nΣxy − Σx Σy) / √[(nΣx² − (Σx)²)(nΣy² − (Σy)²)]
• COEFFICIENT OF DETERMINATION: A convenient way of
interpreting the value of the correlation coefficient is to use the square of the
coefficient of correlation, which is called the Coefficient of Determination.

• The Coefficient of Determination = r².

• Suppose r = 0.9; then r² = 0.81, which means that 81% of the variation in the
dependent variable has been explained by the independent variable.
• The maximum value of r² is 1, because it is possible to explain all of the
variation in y but not possible to explain more than all of it.
• SPEARMAN’S RANK CORRELATION COEFFICIENT: The
Spearman’s Rank Correlation Coefficient is the non-parametric statistical
measure used to study the strength of association between the two ranked
variables. This method is applied to the ordinal set of numbers, which can be
arranged in order, i.e. one after the other so that ranks can be given to each.

• In the rank correlation coefficient method, ranks are given to each
individual on the basis of its quality or quantity, with ranking starting from
position 1st and going up to the Nth position for the one ranked last in the group.

• Spearman's coefficient is given by ρ = 1 − 6ΣD² / (n(n² − 1)), where D is the
difference between the two ranks of each observation.

REGRESSION: Regression analysis, in a general sense, means the estimation or
prediction of the unknown value of one variable from the known value of another
variable. Prediction or estimation is one of the major problems in almost all
spheres of human activity.
REGRESSION LINE

• A regression line summarizes the relationship between two variables in the
setting where one of the variables helps explain or predict the other.
• A regression line is a straight line that describes how a response variable y
changes as an explanatory variable x changes. A regression line is used to
predict the value of y for a given value of x. Regression, unlike correlation,
requires that we have an explanatory variable and a response variable.
• Regression line is the line which gives the best estimate of one variable from
the value of any other given variable.
• The regression line gives the average relationship between the two variables
in mathematical form.

REGRESSION EQUATION

• It is an algebraic expression of a regression line.
• Since there are two regression lines, there are two regression equations.
• The regression equation of x on y is used to describe the variations in the
values of x for given changes in y
• The regression equation of y on x is used to describe the variations in the
values of y for given changes in x

Regression equation of y on x:

y − ȳ = byx (x − x̄)

byx = (nΣxy − Σx Σy) / (nΣx² − (Σx)²)

Regression equation of x on y:

x − x̄ = bxy (y − ȳ)

bxy = (nΣxy − Σx Σy) / (nΣy² − (Σy)²)

Correlation and Regression

o Correlation does not imply causation. It only measures the association
between variables.
o Regression allows for the modeling of potential cause-and-effect
relationships.
o Correlation is not used for making predictions.
o Regression is used for making predictions based on the modeled
relationship.
o Correlation is more general and can be used when exploring
relationships between variables.
o Regression is specific to modeling the relationship between a dependent
variable and one or more independent variables.
