Course Code: MAIR 11
UNIT-2: Multivariable Calculus
Contents
► Function of two variables
► Limits and Continuity
► Partial Derivatives
► Total Differential
► Taylor’s expansion for two variables
► Maxima and Minima
► Constrained Maxima and Minima
► Lagrange’s Multiplier Method
► Jacobians
Function of Two Variables
►
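In general, a function of two variables assigns to each ordered pair (x, y) in a domain D ⊆ R² exactly one real number z = f(x, y); here x and y are the independent variables and z is the dependent variable. For instance (an illustrative choice), f(x, y) = x² + y² is such a function, defined for every (x, y) in R².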
Limits of Function of Two Variables
►
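In general, we write \lim_{(x,y)\to(a,b)} f(x,y) = L if the values f(x, y) can be made arbitrarily close to L by taking (x, y) sufficiently close to (but different from) (a, b). Formally,
\lim_{(x,y)\to(a,b)} f(x,y) = L \iff \forall\,\varepsilon>0\ \exists\,\delta>0:\ 0<\sqrt{(x-a)^2+(y-b)^2}<\delta \Rightarrow |f(x,y)-L|<\varepsilon.
For the limit to exist, f(x, y) must approach the same value L along every path of approach to (a, b).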
How to Find Limits
► (1) To find the limit of a polynomial, we simply plug in the point.
(2) To find the limit of a rational function, we plug in the point as long as the denominator is not 0.
(3) As for functions of one variable, these rules do not apply when "plugging in" the point results in an indeterminate form. In that case, we must use techniques similar to the ones used for functions of one variable, such as factoring or multiplying by the conjugate.
(4) Taking the limit along a specific path: make sure that the path you select actually passes through the point in question. If two different paths give different limits, the limit does not exist.
Examples
1) Find the limit of the function
Sol: The Limit of the function according to the rule (1) is given by-
2) Find the limit of the function
Sol: The Limit of the function according to the rule (2) is given by-
Examples
3) Find the limit of the function
Sol: The Limit of the function according to the rule (3) is given by-
4) Find the limit of the function
Sol: The Limit of the function according to the rule (3) is given by-
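As an illustration of rules (1)-(3), with functions chosen here purely for demonstration:
\lim_{(x,y)\to(1,2)} (x^2y + 3y) = 1^2\cdot 2 + 3\cdot 2 = 8 (rule 1, a polynomial);
\lim_{(x,y)\to(2,1)} \frac{x+y}{x-y} = \frac{3}{1} = 3, since the denominator is nonzero at (2, 1) (rule 2);
\lim_{(x,y)\to(1,1)} \frac{x^2-y^2}{x-y} = \lim_{(x,y)\to(1,1)} (x+y) = 2, after cancelling the common factor x − y (rule 3).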
Examples
Example (5): Find the limit of the function
Sol: The limit of the function is found using rule (4). We use the equation of the path y = mx to replace y with mx:
This means that different linear paths lead to different answers. The limit depends on ‘m’, hence it does not exist.
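A typical rule-(4) computation looks like this (with an illustrative function chosen here for demonstration): for f(x, y) = \frac{xy}{x^2+y^2} as (x, y) → (0, 0), substituting y = mx gives
f(x, mx) = \frac{m x^2}{x^2 + m^2 x^2} = \frac{m}{1+m^2},
which depends on the slope m, so the limit does not exist.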
Examples
Example (6): Find the limit of the function
Sol: The limit of the function is found using rule (4). We use the equation of the path y = mx to replace y with mx:
Continue..
Continuity of Function of Two Variables
►
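In general, f(x, y) is said to be continuous at (a, b) if \lim_{(x,y)\to(a,b)} f(x,y) exists and equals f(a, b); f is continuous on a region if it is continuous at every point of that region.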
The following results are true for multivariable functions:
(1) The sum, difference and product of continuous functions is a continuous function.
(2) The quotient of two continuous functions is continuous as long as the denominator is not 0.
(3) Polynomial functions are continuous.
(4) Rational functions are continuous in their domain.
(5) If f(x, y) is continuous and g(x) is defined and continuous on the range of f, then g(f(x, y)) is also continuous.
Examples
1) Find out whether the following function is continuous at the origin or not-
Sol: Away from (0, 0), f is a rational function that is always defined, so it is continuous. We still need to investigate continuity at (0, 0).
On finding the limit of the given function along the path y = mx, we find that the limit depends on ‘m’, hence it does not exist. Therefore, f is continuous everywhere except at (0, 0).
Examples
2) Find out whether the following function is continuous at the origin or not-
Sol: Away from (0, 0), f is a rational function that is always defined, so it is continuous. We still need to investigate continuity at (0, 0).
On finding the limit of the given function along the path y = mx, we find that the limit is equal to 0. Therefore, f is continuous at (0, 0). Hence f is continuous everywhere.
► Remark: If the function of the example we just did had been defined such that f(0, 0) = 1, then it would not have been continuous at (0, 0), since the limit there (which is 0) would not equal the value of the function.
Partial Derivatives
►
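Holding one variable fixed and differentiating with respect to the other gives the partial derivatives; at a point (a, b),
\frac{\partial f}{\partial x}(a,b) = \lim_{h\to 0}\frac{f(a+h,\,b) - f(a,\,b)}{h}, \qquad \frac{\partial f}{\partial y}(a,b) = \lim_{k\to 0}\frac{f(a,\,b+k) - f(a,\,b)}{k}.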
Partial Derivatives
► There are quite a few commonly used notations for partial derivatives.
► First order partial derivatives:
and
► Second order partial derivatives:
or
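The usual notations (subscript conventions vary slightly between texts) are:
First order: \frac{\partial f}{\partial x} = f_x = z_x and \frac{\partial f}{\partial y} = f_y = z_y.
Second order: \frac{\partial^2 f}{\partial x^2} = f_{xx}, \quad \frac{\partial^2 f}{\partial y^2} = f_{yy}, and the mixed partials \frac{\partial^2 f}{\partial y\,\partial x} = f_{xy} and \frac{\partial^2 f}{\partial x\,\partial y} = f_{yx}.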
Partial Derivatives
► Clairaut’s theorem or equality of mixed partials theorem:
If both mixed partial derivatives exist and are continuous at (a, b), then
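f_{xy}(a, b) = f_{yx}(a, b), \quad\text{i.e.}\quad \frac{\partial^2 f}{\partial y\,\partial x}(a,b) = \frac{\partial^2 f}{\partial x\,\partial y}(a,b);
the order of differentiation does not matter.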
► Example (1): Find, if possible, a function f(x, y) obeying
Sol: No such function f(x, y) exists. If it were to exist, it would have to obey
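As an illustration of this kind of argument (with hypothetical conditions chosen purely for demonstration): if f were required to satisfy f_x(x, y) = y^2 and f_y(x, y) = x, then we would have f_{xy} = 2y but f_{yx} = 1. These mixed partials are continuous yet unequal, so by Clairaut's theorem no such f can exist.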
Partial Derivatives
► Example (2): Find all the first order partial derivatives of the function
Sol: The first order partial derivatives are-
Partial Derivatives
► Example (3): Find all the second order partial derivatives of the function
Sol: We differentiate the first order partial derivatives (see Example (2)) a second time, in all possible ways-
Total Differential
► Let z = f(x, y) be a function of two variables x and y; then the total differential of the function is given by-
► If z = f(x, y) is differentiable at (a, b), then the total differential of the function at the point (a, b) is given by-
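In standard form,
dz = \frac{\partial f}{\partial x}\,dx + \frac{\partial f}{\partial y}\,dy,
and at the point (a, b),
dz = f_x(a, b)\,dx + f_y(a, b)\,dy.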
Total Differential
►
Total Differential
► Example(2): Find the total differential of the following function
Sol: The total differential is given by-
Then-
Thus the total differential is given by-
Total Differential
► The Chain Rule:
Suppose that z = f(x, y), f is differentiable, x = g(t), and y = h(t). Assuming that the relevant derivatives exist, then
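\frac{dz}{dt} = \frac{\partial f}{\partial x}\,\frac{dx}{dt} + \frac{\partial f}{\partial y}\,\frac{dy}{dt}.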
► Example(1):
Chain Rule
► Solution:
Chain Rule
► Example(2):
► Solution: By the chain rule-
Implicit Differentiation
►
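For reference, the standard implicit differentiation formulas are: if F(x, y) = 0 defines y implicitly as a function of x and F_y ≠ 0, then
\frac{dy}{dx} = -\,\frac{\partial F/\partial x}{\partial F/\partial y};
similarly, if F(x, y, z) = 0 defines z implicitly as a function of x and y, then
\frac{\partial z}{\partial x} = -\,\frac{F_x}{F_z}, \qquad \frac{\partial z}{\partial y} = -\,\frac{F_y}{F_z}.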
Implicit Differentiation
► Example(1):
Solution: By using the implicit differentiation formula, we have-
► Example(2):
Solution: By using the implicit differentiation formula, we have-
Taylor’s Expansion for Two Variables
►
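In general, the Taylor expansion of f(x, y) about the point (a, b) is
f(a+h,\,b+k) = f(a,b) + \left(h\,\frac{\partial f}{\partial x} + k\,\frac{\partial f}{\partial y}\right)_{(a,b)} + \frac{1}{2!}\left(h^2\,\frac{\partial^2 f}{\partial x^2} + 2hk\,\frac{\partial^2 f}{\partial x\,\partial y} + k^2\,\frac{\partial^2 f}{\partial y^2}\right)_{(a,b)} + \cdots
with h = x − a and k = y − b.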
Continue..
►
Continue..
► The partial derivatives required are as follows:
Neglecting terms of degree higher than two, we have
Taylor’s Expansion for Two Variables
► Example(2): Determine the Taylor series expansion of the following function up to third order
► Sol: In this problem, we have to find the Taylor expansion of the function about (0, 0), i.e.
The function and its value at (0, 0) are given by-
Continue..
The first order partial derivatives required are as follows:
The second order partial derivatives required are as follows:
Continue..
The third order partial derivatives required are as follows:
Neglecting terms of degree higher than three, we have
Maxima and Minima for Two Variables
► When you were learning about derivatives of functions of one variable, you learned some techniques for finding the maximum and minimum values of functions of one variable. We'll now extend those techniques to functions of more than one variable. We'll concentrate on functions of two variables, though many of the techniques work more generally.
► Local Maximum and Local Minimum of a Function:
(1) The point (a, b) is a local maximum of the function f(x, y) if there is an r > 0 such that f(x, y) ≤ f(a, b) for all points (x, y) within a distance r of (a, b).
(2) Similarly, (a, b) is a local minimum of the function f(x, y) if there is an r > 0 such that f(x, y) ≥ f(a, b) for all points (x, y) within a distance r of (a, b).
Maxima and Minima for Two Variables
► Saddle Point (Minimax Point):
At such a point there are nearby points where f is larger than f(a, b) and nearby points where f is smaller than f(a, b). So there cannot be a local extremum there.
Maxima and Minima for Two Variables
►
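A point (a, b) is a stationary point of f(x, y) if both first order partial derivatives vanish there:
f_x(a, b) = 0 \quad\text{and}\quad f_y(a, b) = 0.
Local maxima, local minima and saddle points of a differentiable function all occur at stationary points.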
Maxima and Minima for Two Variables
► Example(2): Find all the stationary points of the function
Sol: The first order partial derivatives are
So the stationary points are the solutions of equations-
i.e.
On solving these equations, there are four stationary points: (5, 0), (-5, 0), (3, 4) and (-3, 4).
Maxima and Minima for Two Variables
►
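The classification can be carried out with the standard second derivative test. At a stationary point (a, b), let
D = f_{xx}(a,b)\,f_{yy}(a,b) - \left[f_{xy}(a,b)\right]^2.
If D > 0 and f_{xx}(a, b) < 0, then (a, b) is a local maximum; if D > 0 and f_{xx}(a, b) > 0, it is a local minimum; if D < 0, it is a saddle point; if D = 0, the test is inconclusive.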
Maxima and Minima for Two Variables
► Example(1): Find the stationary points of the following function and classify them into maxima, minima and saddles.
Sol: The first and second derivatives are-
We get the stationary points by equating the first order derivatives to zero; the only such point is (0, 0).
Continue..
Substituting (x, y) = (0, 0), we have
Hence, it has a local maximum at (0, 0).
Maxima and Minima for Two Variables
► Example(2): Find the stationary points of the following function and classify them into maxima, minima and saddles.
Sol: The first and second derivatives are-
We get the stationary points by equating the first order derivatives to zero; the only such point is (0, 0).
Continue..
► we have-
Hence, it has a local maximum at (0, 0).
Constrained Maxima and Minima
► A problem of the form "Find the maximum and minimum values of the function f(x, y) on the curve g(x, y) = 0" is one type of constrained optimization problem. The function being maximized or minimized, f(x, y), is called the objective function. The function g(x, y), whose zero set is the curve of interest, is called the constraint function.
► In this case, to find the stationary points, we apply the Lagrange Multiplier Method.
Lagrange Multiplier Method
►
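To find the extrema of f(x, y) subject to the constraint g(x, y) = 0, the method of Lagrange multipliers looks for all solutions (x, y, λ) of
\frac{\partial f}{\partial x} = \lambda\,\frac{\partial g}{\partial x}, \qquad \frac{\partial f}{\partial y} = \lambda\,\frac{\partial g}{\partial y}, \qquad g(x, y) = 0,
i.e. \nabla f = \lambda\,\nabla g together with the constraint.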
Lagrange Multiplier Method
► Example(1): Find the maximum and minimum of the function on the ellipse
Sol: The objective function is
and the constraint function is
The first order derivatives of these functions are
So, according to the method of Lagrange multipliers, we need to find all solutions to
Continue..
►
Lagrange Multiplier Method
► Example(2): Find the minimum of the function on the straight line
Sol: The objective function is
and the constraint function is
So, according to the method of Lagrange multipliers, using the first order derivatives of these functions, we need to find all solutions to
Continue..
►
Jacobian
► Jacobian or Jacobian Determinant: It is common to write the
Jacobian as a determinant, but there is also another useful notation.
► The Jacobian is especially useful when changing variables from (x,
y ) to (u, v ) in multiple integrals.
► Jacobians and their inverses:
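For u = u(x, y) and v = v(x, y), the Jacobian is
\frac{\partial(u,v)}{\partial(x,y)} = \begin{vmatrix} \partial u/\partial x & \partial u/\partial y \\ \partial v/\partial x & \partial v/\partial y \end{vmatrix} = \frac{\partial u}{\partial x}\frac{\partial v}{\partial y} - \frac{\partial u}{\partial y}\frac{\partial v}{\partial x},
and, when the transformation is invertible,
\frac{\partial(u,v)}{\partial(x,y)} \cdot \frac{\partial(x,y)}{\partial(u,v)} = 1.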
Jacobian
► Three standard transformations:
There are certain transformations which occur very frequently :
(1) Cartesian to polar coordinates
(2) Cartesian to spherical polar coordinates
(3) Cartesian to cylindrical polar coordinates.
► Cartesian to polar coordinates Transformation:
The transformation is:
The Jacobian is:
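x = r\cos\theta, \quad y = r\sin\theta; \qquad \frac{\partial(x,y)}{\partial(r,\theta)} = \begin{vmatrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{vmatrix} = r.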
Jacobian
► Cartesian to cylindrical polar coordinates Transformation:
The transformation is:
The Jacobian is:
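x = \rho\cos\phi, \quad y = \rho\sin\phi, \quad z = z; \qquad \frac{\partial(x,y,z)}{\partial(\rho,\phi,z)} = \rho.
(The symbols ρ and φ are one common choice; some texts write r and θ instead.)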
► Cartesian to spherical polar coordinates Transformation:
The transformation is:
The Jacobian is:
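x = r\sin\theta\cos\phi, \quad y = r\sin\theta\sin\phi, \quad z = r\cos\theta; \qquad \frac{\partial(x,y,z)}{\partial(r,\theta,\phi)} = r^2\sin\theta,
where θ is measured from the positive z-axis (the convention assumed here).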
Jacobian
► If u(x, y) and v(x, y) are functionally INDEPENDENT then –
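\frac{\partial(u,v)}{\partial(x,y)} \neq 0 \quad\text{(the Jacobian does not vanish identically).}
Equivalently, if the Jacobian is identically zero, then u and v are functionally dependent; this is the criterion applied in the examples below.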
► Example(1): Test whether there is a functional dependence between the functions f(x, y) and g(x, y) if
Sol:
So, the functions are not independent. With a little work you'll spot that the dependence is
Jacobian
► Example(2): Test whether there is a functional dependence between the functions f and g if
Sol:
So, the functions are not independent. With a little work you'll spot that the dependence is
Jacobian
► Partial Derivatives of Implicit Functions:
Suppose we are given two equations –
Then-
A similar result holds for
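One common form of this result (assuming the two given equations are F(x, y, u, v) = 0 and G(x, y, u, v) = 0, defining u and v implicitly as functions of x and y):
\frac{\partial u}{\partial x} = -\,\frac{\partial(F,G)/\partial(x,v)}{\partial(F,G)/\partial(u,v)}, \qquad \frac{\partial v}{\partial x} = -\,\frac{\partial(F,G)/\partial(u,x)}{\partial(F,G)/\partial(u,v)},
and the derivatives with respect to y are obtained by replacing x with y in the numerators.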
Jacobian
►
Continue..
Therefore, we get-
Jacobian
►
Continue..
Therefore, we get-
Jacobian: Chain Rule
► The Chain Rule:
(1) Assuming that u = u(r, s), v = v(r, s), r = r(x, y) and s = s(x, y), then-
(2) Assuming that u = u(x, y, z), v = v(x, y, z), w = w(x, y, z), x = x(r, s, t), y = y(r, s, t) and z = z(r, s, t), then-
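In Jacobian form these read
(1)\quad \frac{\partial(u,v)}{\partial(x,y)} = \frac{\partial(u,v)}{\partial(r,s)} \cdot \frac{\partial(r,s)}{\partial(x,y)}, \qquad
(2)\quad \frac{\partial(u,v,w)}{\partial(r,s,t)} = \frac{\partial(u,v,w)}{\partial(x,y,z)} \cdot \frac{\partial(x,y,z)}{\partial(r,s,t)}.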