
MATRIX ALGEBRA Chapter 2

CHAPTER 2 (10 HOURS)

1. Determinants, properties of determinants


2. Adjugate matrix
3. Cramer’s Rule
4. Application to Engineering
5. Rank of the matrix
6. Inverse matrix
7. Application of matrices
8. Linear Equations in Linear Algebra
2.1 DETERMINANTS, PROPERTIES OF DETERMINANTS

 What is a determinant?
 How can we compute a determinant?
 What is it used for?
DEFINITION

Let A be a square matrix.

The determinant of A is denoted by det(A) or |A|.
DEFINITION

The determinant of a square matrix A is a scalar computed from all entries of A.

Size 1:    A = [a11]                     det(A) = a11

Size 2:    A = | a  b |                  det(A) = ad - bc
               | c  d |

Size 3:    A = | a  b  c |               det(A) = aei + bfg + cdh - gec - hfa - idb
               | d  e  f |
               | g  h  i |

Size >= 3: A = | a11  a12  ...  a1n |    det(A) is computed by cofactor expansion
               | a21  a22  ...  a2n |    (Laplace expansion).
               | ...  ...  ...  ... |
               | an1  an2  ...  ann |
MINOR MATRIX
Let Aij denote the submatrix formed by deleting the ith row and jth column from matrix A.
→ Aij is called a minor matrix of A.

A32 is obtained by crossing out row 3 and column 2.
COFACTOR
The cofactor of the (i, j) entry is Cij = (-1)^(i+j) · det(Aij).

Example.

a) Find the value of C23.

b) Find all cofactors of the matrix A.
SOLUTION
column 3

row 2

C23 : the cofactor of (2,3) entry → delete row 2, col 3.


COFACTOR EXPANSION

det(A) = Σ (entry) × (cofactor)

Cofactor = (sign) × det(minor)
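To make the expansion rule concrete, here is a minimal Python sketch (not part of the original slides); the function name det_cofactor and the sample matrices are our own illustration.

```python
def det_cofactor(A):
    """Determinant by cofactor (Laplace) expansion along the first row."""
    n = len(A)
    if n == 1:                                             # 1x1 matrix: det = the single entry
        return A[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]   # delete row 1 and column j+1
        cofactor = (-1) ** j * det_cofactor(minor)         # cofactor = (sign) x det(minor)
        total += A[0][j] * cofactor                        # det = sum of (entry) x (cofactor)
    return total

print(det_cofactor([[1, 2], [3, 4]]))                      # -2, i.e. ad - bc
print(det_cofactor([[1, -2, 0], [1, 3, -1], [2, 0, 1]]))   # 9
```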
EXAMPLE
Use a cofactor expansion across the third row to compute det A, where


The third row entries?


The third row cofactors?


Cofactor expansion rule?


What about the other row/column? Your conclusion?
EXAMPLE
PROPERTIES
a) Scalar multiple
b) Transposition
c) Multiplicative
d) Power
e) Inverse
Triangular matrices
Elementary operations
Properties
For all n×n matrices A, B:

a. det(cA) = c^n · det(A)
b. det(A^T) = det(A)
c. det(A·B) = det(A) · det(B)
d. det(A^k) = (det(A))^k
e. det(A^(-1)) = 1 / det(A)
f. det(A + B) ≠ det(A) + det(B)   (in general)
Example

Verify the above properties.

Determinant of a triangular matrix.

Strategy?
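A quick numerical spot-check of properties (a)-(f) is sketched below with NumPy (our own addition, not from the slides); the matrices A and B are arbitrary invertible examples.

```python
import numpy as np

# Arbitrary 3x3 matrices, chosen only to illustrate the properties.
A = np.array([[1., 2., 0.], [3., 1., 4.], [0., 2., 5.]])
B = np.array([[2., 0., 1.], [1., 3., 0.], [4., 1., 2.]])
n, c, k = 3, 2.0, 3
det = np.linalg.det

print(np.isclose(det(c * A), c**n * det(A)))                      # (a) det(cA) = c^n det(A)
print(np.isclose(det(A.T), det(A)))                               # (b) det(A^T) = det(A)
print(np.isclose(det(A @ B), det(A) * det(B)))                    # (c) det(AB) = det(A) det(B)
print(np.isclose(det(np.linalg.matrix_power(A, k)), det(A)**k))   # (d) det(A^k) = det(A)^k
print(np.isclose(det(np.linalg.inv(A)), 1 / det(A)))              # (e) det(A^-1) = 1/det(A)
print(np.isclose(det(A + B), det(A) + det(B)))                    # (f) False in general
```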
EXAMPLE
Elementary Row/Col Operations

 Interchanging two rows changes the sign of the determinant.
 Multiplying one row by a scalar k multiplies the determinant by k.
 Adding a multiple of one row to another row does not change the determinant.

The same holds for column operations.

Meaning?
Example – WHY?
(Example matrices A–E shown on slide.)

i) If A has a zero row/zero column then det(A) = 0.

ii) If two distinct rows/columns of A are identical then det(A) = 0.
Determinant with Elementary Operations

Calculate the determinant of the matrix A below, using elementary operations.

Step 1. Select one column (or one row) of the matrix.
Step 2. Choose one nonzero element of the selected column (or row). Use row (or column) operations to eliminate all the other elements of that column (or row) except the selected one.
Step 3. Expand the determinant along the selected column (or row).
Determinant with Elementary Operations

Using elementary operations to calculate:

Strategy: use row/column transformations to reduce the computation and simplify the arithmetic.

PDCA procedure.
Example (self-study reference). Compute

    det A = | 0  2  -1   9 |
            | 2  2  -4   6 |
            | 3  2  -2   1 |
            | 3 -4  -2   0 |

r1 ↔ r2, then factor 2 out of the new row 1:

    det A = -2 | 1  1  -2   3 |
               | 0  2  -1   9 |
               | 3  2  -2   1 |
               | 3 -4  -2   0 |

r3 → r3 - 3r1, r4 → r4 - 3r1:

          = -2 | 1  1  -2   3 |
               | 0  2  -1   9 |
               | 0 -1   4  -8 |
               | 0 -7   4  -9 |

r2 ↔ r3:

          =  2 | 1  1  -2   3 |
               | 0 -1   4  -8 |
               | 0  2  -1   9 |
               | 0 -7   4  -9 |

r3 → r3 + 2r2, r4 → r4 - 7r2, then factor 7 out of row 3:

          = 2·7 | 1  1  -2   3 |
                | 0 -1   4  -8 |
                | 0  0   1  -1 |
                | 0  0 -24  47 |

r4 → r4 + 24r3 (triangular form):

          = 2·7 | 1  1  -2   3 |
                | 0 -1   4  -8 |
                | 0  0   1  -1 |
                | 0  0   0  23 |

          = 2·7·1·(-1)·1·23 = -322
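The arithmetic above can be double-checked numerically; the short NumPy snippet below (ours, using the entries as reconstructed here) confirms the value -322.

```python
import numpy as np

A = np.array([[0,  2, -1,  9],
              [2,  2, -4,  6],
              [3,  2, -2,  1],
              [3, -4, -2,  0]], dtype=float)

print(round(np.linalg.det(A)))   # -322
```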
CLASSWORK.

Use row operations to show that the determinants in


Exercises 2–4 are all zero.
USING CALCULATOR
2.2 ADJUGATE MATRIX (ADJOINT)

The adjugate matrix of A is the matrix

    adj A = | c11  c21  ...  cn1 |
            | c12  c22  ...  cn2 |
            | ...  ...  ...  ... |
            | c1n  c2n  ...  cnn |

where cij is the (i, j) cofactor of A (i.e., adj A is the transpose of the cofactor matrix).
EXAMPLE

    A = | 1 -2  0 |
        | 1  3 -1 |
        | 2  0  1 |

We have
    c11 = +| 3 -1 | = 3,     c12 = -| 1 -1 | = -3,     c13 = -6,
           | 0  1 |                 | 2  1 |
    c21 = 2,    c22 = 1,    c23 = -4,
    c31 = 2,    c32 = 1,    c33 = 5.

Therefore

    adj A = |  3   2   2 |
            | -3   1   1 |
            | -6  -4   5 |
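For checking such computations, here is a small sketch (our own, not from the slides) that builds the adjugate entry by entry and verifies the identity A·(adj A) = det(A)·I.

```python
import numpy as np

def adjugate(A):
    """Adjugate = transpose of the cofactor matrix: (adj A)[j, i] = C_ij."""
    n = A.shape[0]
    adj = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)  # remove row i and column j
            adj[j, i] = (-1) ** (i + j) * np.linalg.det(minor)     # cofactor C_ij, stored transposed
    return adj

A = np.array([[1, -2, 0], [1, 3, -1], [2, 0, 1]], dtype=float)
print(np.round(adjugate(A)))                                        # rows 3 2 2 / -3 1 1 / -6 -4 5
print(np.allclose(A @ adjugate(A), np.linalg.det(A) * np.eye(3)))   # True
```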
EXAMPLE

Given that

A) Compute the adjugate of A


B) Calculate A(adj A) and (adj A)A
2.3 CRAMER’S RULE
2.4 APPLICATION TO ENGINEERING

 A formula for the inverse matrix
 Condition for invertibility
 Volume - Area
 Cramer's Rule
2.4 INVERSE FORMULA

If det A ≠ 0, then

    A^(-1) = (1 / det A) · adj A

Example. Find the inverse of matrix A, given that

    A = | 1 -1  2 |
        | 0  2 -1 |
        | 0  0  1 |
EXAMPLE

    A = | 1 -1  2 |
        | 0  2 -1 |
        | 0  0  1 |

det A = 2  and  adj A = | 2  1 -3 |
                        | 0  1  1 |
                        | 0  0  2 |

    A^(-1) = (1/2) | 2  1 -3 |  =  | 1  1/2  -3/2 |
                   | 0  1  1 |     | 0  1/2   1/2 |
                   | 0  0  2 |     | 0   0     1  |
EXAMPLE

    (A^(-1))_32 = (1 / det A) · (adj A)_32 = c_23(A) / det A

Ans: 13/180
EXERCISES
Compute the adjugate of the given matrix, and then use
Theorem 8 to give the inverse of the matrix
INVERTIBLE

A square matrix A is invertible if and only if det(A)≠0.

Example
PARALLELOGRAM - PARALLELEPIPED
PROPERTIES
EXAMPLE
Calculate the area of the parallelogram determined by the
points (-2, -2); (0, 3); (4, -1) and (6, 4)
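As a numerical check (our own sketch): take two edge vectors from the vertex (-2, -2) and compute the absolute value of the 2×2 determinant they form.

```python
import numpy as np

u = np.array([0, 3]) - np.array([-2, -2])    # edge vector (2, 5)
v = np.array([4, -1]) - np.array([-2, -2])   # edge vector (6, 1)

area = abs(np.linalg.det(np.column_stack([u, v])))   # |2*1 - 5*6| = 28
print(area)                                          # 28.0
```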
EXERCISES
Find the area of the parallelogram whose vertices are listed.
EXERCISES
THEOREM 10
2.8 LINEAR EQUATIONS IN LINEAR ALGEBRA

Equation                          Variables (= unknowns)   Coefficients      Constant term
ax + by = c                       x, y                     a, b              c
a1x1 + a2x2 + ··· + anxn = b      x1, x2, ..., xn          a1, a2, ..., an   b
SYSTEM OF LINEAR EQUATIONS

A system of m linear equations in n unknowns has the form

    a11·x1 + a12·x2 + ... + a1n·xn = b1
    a21·x1 + a22·x2 + ... + a2n·xn = b2
    ...
    am1·x1 + am2·x2 + ... + amn·xn = bm
SOLUTION
A solution of the system is a collection of n numbers s1, s2, …, sn which, when substituted for the unknowns x1, x2, …, xn, turns every equation of the system into an identity.

Inconsistent: no solutions.
Consistent: a unique solution or infinitely many solutions.

The set of all solutions of a linear system is called the solution set of the system.
SOLUTION
Theorem. Any system of linear equations has exactly one of the following outcomes:
(a) No solution.
(b) A unique solution.
(c) Infinitely many solutions.

+ no solution → inconsistent
+ at least one solution → consistent
WHY?
GEOMETRIC INTERPRETATION
NOTES
 A linear equation of two variables represents a straight line in R2.
 A linear equation of three variables represents a plane in R3.
 In general, a linear equation of n variables represents a
hyperplane in the n-dimensional Euclidean space Rn.
EXAMPLE 1

    System 1:  x + 2y = 1          System 2:  x + y - z = 1
               x + 2y = 3                     x + y + z = 3

    No solution                    Solutions: (0, 2, 1), (2, 0, 1), ..., general solution (t, 2 - t, 1)
    → inconsistent                 → consistent (infinitely many solutions)

(t, 2 - t, 1) is called a general solution and is given in parametric form; t is a parameter (t is arbitrary).
MATRIX FORM
A general system of linear equations can be written in matrix equation form:

    A·x = b


CRAMER’S RULE

Cramer's rule is a method for solving n×n systems of equations using determinants.

Generally it is less practical than Gaussian elimination or Gauss-Jordan elimination, as more operations are involved. However, in some circumstances it is the preferred method.
CRAMER'S RULE

Let A be an n×n matrix.

Consider the system of equations A·x = b.

CRAMER'S RULE
FORMULA FOR CALCULATING THE i-TH VARIABLE
Let Ai be the matrix obtained by replacing the i-th column of A with the constant column (constant matrix B). Then

    xi = det(Ai) / det(A)
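A minimal Python sketch of the rule (our own; the function name cramer and the sample data are illustrative only):

```python
import numpy as np

def cramer(A, b):
    """Solve A x = b by Cramer's rule (assumes det(A) != 0)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                    # replace the i-th column by the constant column
        x[i] = np.linalg.det(Ai) / d    # x_i = det(A_i) / det(A)
    return x

A = [[2, 1, 1], [1, 3, 2], [1, 0, 0]]   # sample system (arbitrary data)
b = [4, 5, 6]
print(cramer(A, b))                      # agrees with np.linalg.solve(A, b)
```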
EXAMPLE
REMARKS
If A is a square matrix then:
+ If A is invertible, the system has a unique solution.
+ If A has zero determinant, the system has either no solution or infinitely many solutions:
  • No solution: some Ai has a nonzero determinant.
  • Infinitely many solutions: all Ai have zero determinant (this is only a necessary condition; it must be checked by Gaussian elimination).
EXAMPLE
Find x1, given the following system of equations.

Notes.
Cramer's rule is not an efficient way to solve linear systems or to invert matrices. True, it enabled us to calculate x1 here without computing x2 or x3.
However, for large systems of equations, the number of computations needed to find all the variables by the Gaussian algorithm is comparable to the number required to find just one of the determinants involved in Cramer's rule.
THE MATRIX EQUATION AX=B
A general system of linear equations can be written in matrix equation form:

    A·x = b


ELEMENTARY OPERATIONS
Interchange two equations (type I)

Multiply one equation by a nonzero number (type II)

Add a multiple of one equation to a different equation


(type III)
MATRICES OF A LINEAR SYSTEM

Definition. The augmented matrix of the general linear


system is the table:

and the coefficient matrix is


EXAMPLE

    3x1 + 2x2 -  x3 +  x4 = -1
    2x1       -  x3 + 2x4 =  0
    3x1 +  x2 + 2x3 + 5x4 =  2

    augmented matrix:         coefficient matrix:     constant matrix:
    | 3  2 -1  1 | -1 |       | 3  2 -1  1 |          | -1 |
    | 2  0 -1  2 |  0 |       | 2  0 -1  2 |          |  0 |
    | 3  1  2  5 |  2 |       | 3  1  2  5 |          |  2 |
EXAMPLE

For the system:

Coefficient matrix:

Augmented matrix:
REMARKS
 Systems of linear equations can be represented by matrices.
 Operations on equations (for eliminating variables) can be
represented by appropriate row operations on the
corresponding matrices.
GAUSS-JORDAN ELIMINATION
(FOR SOLVING A SYSTEM OF LINEAR EQUATIONS)
GAUSS-JORDAN ELIMINATION

Step 1. Using elementary row operations, reduce the augmented matrix to a reduced row-echelon matrix.

Step 2. If a row [0 0 0 … 0 | 1] occurs, the system is inconsistent. For example, the reduced row-echelon matrix

    | 1  0 -10 | 0 |        x + 0y - 10z = 0
    | 0  1   3 | 0 |   ↔    0x + y + 3z  = 0
    | 0  0   0 | 1 |        0x + 0y + 0z = 1

has no solution.

Step 3. Otherwise, assign the nonleading variables as parameters and solve for the leading variables in terms of the parameters. For example,

    | 1  0 -10 | 0 |        x - 10z = 0
    | 0  1   3 | 0 |   ↔    y + 3z  = 0
    | 0  0   0 | 0 |        0 = 0

z is a nonleading variable, so set z = t (parameter); then x = 10t, y = -3t.
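For hand-checking this procedure, SymPy's rref() reduces an augmented matrix to reduced row-echelon form and reports the pivot (leading-one) columns. The sketch below is ours; the sample matrix is the same augmented matrix that is reduced in a later worked example.

```python
from sympy import Matrix

M = Matrix([[1, -2, -1,  3, 1],      # augmented matrix [A | b]
            [2, -4,  1,  0, 5],
            [1, -2,  2, -3, 4]])

rref_M, pivot_cols = M.rref()
print(rref_M)       # rows [1, -2, 0, 1, 2], [0, 0, 1, -2, 1], [0, 0, 0, 0, 0]
print(pivot_cols)   # (0, 2): x1 and x3 are leading, x2 and x4 are free parameters
```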


EXAMPLE
Solve the following system of linear equations
EXAMPLE

inconsistent
EXAMPLE

    | 1 -2 -1  3 | 1 |      | 1 -2 -1  3 | 1 |      | 1 -2 -1  3 | 1 |      | 1 -2  0  1 | 2 |
    | 2 -4  1  0 | 5 |  →   | 0  0  3 -6 | 3 |  →   | 0  0  1 -2 | 1 |  →   | 0  0  1 -2 | 1 |
    | 1 -2  2 -3 | 4 |      | 0  0  3 -6 | 3 |      | 0  0  0  0 | 0 |      | 0  0  0  0 | 0 |

The leading ones are in columns 1 and 3, so the reduced system is

    x1 - 2x2 + x4 = 2
    x3 - 2x4 = 1

x2 and x4 are nonleading (free) variables, so we set x2 = t and x4 = s (parameters) and then compute x1 and x3:

    x1 = 2 + 2t - s
    x2 = t
    x3 = 1 + 2s            (t, s ∈ ℝ)
    x4 = s
SOLUTION
General solution (the parametric form of the solution set):

    x1 = 2 + 2t - s
    x2 = t
    x3 = 1 + 2s            (t, s ∈ ℝ)
    x4 = s

Particular solution (obtained by choosing values for the parameters).

Example: setting t = 1 and s = 0 gives

    x1 = 4, x2 = 1, x3 = 1, x4 = 0,   i.e.  (x1, x2, x3, x4) = (4, 1, 1, 0)
KRONECKER-CAPELLI THEOREM

If r(A|b) ≠ r(A), then the system A·X = b is inconsistent.

If r(A|b) = r(A), then the system A·X = b is consistent:
+ If r(A|b) = r(A) = number of unknowns → unique solution.
+ If r(A|b) = r(A) < number of unknowns → infinitely many solutions.

Note. Suppose a system of m equations in n variables has a solution. If the rank of the augmented matrix is r, then the set of solutions involves exactly n - r parameters.
EXAMPLE

    | 1 -2 -1  3 | 1 |      | 1 -2 -1  3 | 1 |      | 1 -2 -1  3 | 1 |   ← leading ones in
    | 2 -4  1  0 | 5 |  →   | 0  0  3 -6 | 3 |  →   | 0  0  1 -2 | 1 |     columns 1 and 3
    | 1 -2  2 -3 | 4 |      | 0  0  3 -6 | 3 |      | 0  0  0  0 | 0 |

rank A = 2

4 (number of variables) - 2 (rank A = number of nonzero rows) = 2 (two parameters: x2 = t, x4 = s)
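A numerical illustration of the theorem on the example above (our own check, using NumPy's matrix_rank):

```python
import numpy as np

A = np.array([[1, -2, -1,  3],
              [2, -4,  1,  0],
              [1, -2,  2, -3]], dtype=float)
b = np.array([[1], [5], [4]], dtype=float)

rank_A  = np.linalg.matrix_rank(A)                   # 2
rank_Ab = np.linalg.matrix_rank(np.hstack([A, b]))   # 2
n = A.shape[1]                                       # 4 unknowns

# r(A|b) = r(A) < n  ->  consistent, with infinitely many solutions and n - r = 2 parameters
print(rank_A, rank_Ab, n - rank_A)
```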
THE MATRIX EQUATION AX=0: NULL SPACE

Homogeneous system
Special case
HOMOGENEOUS SYSTEM
The system is called homogeneous if all entries of the constant matrix are zero:

    A·x = 0

Note that x = 0 is always a solution of a homogeneous system, called the zero solution (or trivial solution); solutions other than the zero solution are called nontrivial solutions.
HOMOGENEOUS SYSTEM

 Note that every homogeneous system has at least one solution, (0, 0, …, 0), called the trivial solution.

 If a homogeneous system of linear equations has a nontrivial solution, then it has infinitely many solutions.
EXAMPLE
Show that the following homogeneous system has nontrivial solutions.

     x1 -  x2 + 2x3 - x4 = 0
    2x1 + 2x2       + x4 = 0
    3x1 +  x2 + 2x3 - x4 = 0
EXAMPLE

A homogeneous system with more unknowns than equations always has infinitely many solutions (hence nontrivial solutions).
STRUCTURE OF SOLUTION SET

 Homogeneous
 Nonhomogeneous
SOLUTION SET OF HOMOGENEOUS SYSTEM
Theorem. Let Ax = 0 be a homogeneous system. If u and v are solutions, then the sum and the scalar multiple
    u + v,   cu
are also solutions.

 Moreover, any linear combination of solutions of a homogeneous system is again a solution.
 This structure is called a vector space.
 Null space
SOLUTION SET OF HOMOGENEOUS SYSTEM

Theorem. Let Ax = 0 be a homogeneous linear system, where A is an m×n matrix with p pivot positions. Then the system has n - p free variables and n - p basic solutions.

 The basic solutions can be obtained as follows: set one free variable equal to 1 and all other free variables equal to 0.
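SymPy's nullspace() follows essentially this recipe and returns one basic solution per free variable; the sketch below is ours and reuses the coefficient matrix of the earlier example as sample data.

```python
from sympy import Matrix

A = Matrix([[1, -2, -1,  3],     # coefficient matrix of a homogeneous system A x = 0
            [2, -4,  1,  0],
            [1, -2,  2, -3]])

basis = A.nullspace()            # list of basic solutions (one per free variable)
for v in basis:
    print(v.T)
# every solution of A x = 0 is a linear combination of these basic solutions
```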
SOLUTION SET OF NONHOMOGENEOUS SYSTEMS

Proposition. Let u and v be solutions of a nonhomogeneous


system Ax = b. Then the difference:
u-v
is a solution of the corresponding homogeneous system Ax = 0.

Theorem.
Let xnonh be a solution of a nonhomogeneous system Ax = b.
Let xhom be the general solution of the corresponding homogeneous system Ax = 0. Then
    x = xnonh + xhom
is the general solution of Ax = b.
EXAMPLE
Find the solution set for the nonhomogeneous system.
EXAMPLE
Solution set.

Hyperplane in R4 space.
KEY TERMS

Coefficient – Variable – Constant


Gauss elimination
Unique solution; no solution; infinitely many solutions
Parametric form – Parameter
Consistent – Inconsistent
Homogeneous
Trivial – nontrivial
Rank vs Solution
2.7 APPLICATION

 Equilibrium prices model


 Input – Output model
 IS-LM model (self)
EQUILIBRIUM PRICES MODEL

Partial equilibrium market model: a model of price


determination in a single market.
Three variables:
 Qd = the quantity demanded of the commodity
 Qs = the quantity supplied of the commodity
 P  = the price of the commodity

And the equilibrium condition is

    Qd = Qs
SINGLE MARKET

The model setup is then:

Where a, b, c and d are all positive.
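The model equations themselves appear only on the slide image. Assuming the standard linear forms Qd = a - b·P and Qs = -c + d·P with the equilibrium condition Qd = Qs (an assumption on our part), the model can be solved as a small linear system:

```python
import numpy as np

# Assumed model (not spelled out in the extracted text):
#   Qd = a - b*P,   Qs = -c + d*P,   equilibrium: Qd = Qs = Q
# As a linear system in the unknowns (Q, P):
#   Q + b*P = a
#   Q - d*P = -c
a, b, c, d = 10.0, 2.0, 4.0, 5.0            # illustrative positive parameters

M   = np.array([[1.0,  b],
                [1.0, -d]])
rhs = np.array([a, -c])

Q, P = np.linalg.solve(M, rhs)
print(P, (a + c) / (b + d))                 # P* = (a + c) / (b + d)
print(Q, (a * d - b * c) / (b + d))         # Q* = (ad - bc) / (b + d)
```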


TWO COMMODITY MARKET MODEL

GENERALIZATION

As more and more commodities are incorporated into the


model, it becomes more and more difficult to solve the
model by substitution.
A method suitable for handling a large system of
simultaneous equations is matrix algebra.

Matrix algebra provides a compact way of writing an


equation system; it leads to a way to test the existence of a
solution by evaluation of a determinant - a concept closely
related to that of a matrix; it also gives a
method to find that solution if it exists.
RECONSIDER 2 COMMODITY MARKET MODEL
LEONTIEF INPUT-OUTPUT MODEL

Used to describe the relationship of industries within a


sector
 International
 National
 Regional
 Within a business
LEONTIEF INPUT-OUTPUT MODEL
Suppose a nation’s economy is divided into n sectors that
produce goods or services, and let x be a production vector
in Rn that lists the output of each sector for one year.
Also, suppose another part of the economy (called the open
sector) does not produce goods or services but only
consumes them, and let D be a final demand vector (or bill
of final demands) that lists the values of the goods and
services demanded from the various sectors by the
nonproductive part of the economy.
The vector D can represent consumer demand, government
consumption, surplus production, exports, or other external
demands.
OPEN MODEL
OPEN MODEL ILLUSTRATION – 3 SECTORS

(Diagram: three sectors 1, 2, 3 with total outputs X1, X2, X3; intersector flows xij from sector i to sector j; and external (final) demands b1, b2, b3 from the open sector.)
LEONTIEF INPUT-OUTPUT MODEL

As the various sectors produce goods to meet consumer


demand, the producers themselves create additional
intermediate demand for goods they need as inputs for
their own production.
The interrelations between the sectors are very complex,
and the connection between the final demand and the
production is unclear.
Leontief asked if there is a production level x such that the amounts produced (or "supplied") will exactly balance the total demand for that production.
ASSUMPTIONS OF THE MODEL
Assumption 1. Production of each commodity requires the use
of at least one other commodity as input
Assumption 2 (Linearity). The amount of an input required is proportional to the level of the output, i.e., for any input i (i = 1, 2, …, n)

    xij = aij · xj

where aij is assumed to be constant.


Assumption 3 (No Joint Production). Each production technique
produces only one output.
LEONTIEF INPUT-OUTPUT MODEL

C is the consumption matrix:

    C = | c11  c12  ...  c1n |
        | c21  c22  ...  c2n |        with  cij = xij / xj
        | ...  ...  ...  ... |
        | cn1  cn2  ...  cnn |
INPUT – OUTPUT MODEL

    x1 = c11·x1 + c12·x2 + ... + c1n·xn + d1
    x2 = c21·x1 + c22·x2 + ... + c2n·xn + d2
    ...
    xn = cn1·x1 + cn2·x2 + ... + cnn·xn + dn

or, in matrix form,

    | x1 |   | c11  c12  ...  c1n |   | x1 |   | d1 |
    | x2 | = | c21  c22  ...  c2n | · | x2 | + | d2 |
    | ...|   | ...  ...  ...  ... |   | ...|   | ...|
    | xn |   | cn1  cn2  ...  cnn |   | xn |   | dn |

    X = C·X + D
THE ASSUMPTIONS (1)–(3)
Under the assumptions (1)–(3), the production system

X C. X  D
satisfies the following conditions:
(a) All elements of the C matrix are non-negative, i.e., cij≥0
and for all j there exists some i , such that cij > 0.
(b) Each commodity is produced only with one production
technique, i.e. C is a square matrix.
(c) There is no joint production.
(d) There are constant returns to scale for all production
techniques, i.e. C remains unchanged when the output vector
x changes.
UNIT CONSUMPTION VECTOR

The basic assumption of Leontief’s input–output model is


that for each sector, there is a unit consumption vector
in Rn that lists the inputs needed per unit of output of the
sector.
All input and output units are measured in millions of
dollars, rather than in quantities such as tons or bushels.
(Prices of goods and services are held constant.)
EXAMPLE
Suppose the economy consists of three sectors—
manufacturing, agriculture, and services—with unit
consumption vectors c1, c2, and c3, as shown in the table
that follows.

What amounts will be consumed by the manufacturing


sector if it decides to produce 100 units?
THEOREM

A column sum denotes the sum of the entries in a column of a matrix. Under ordinary circumstances, the column sums of a consumption matrix are less than 1, because a sector should require less than one unit's worth of inputs to produce one unit of output.
INPUT – OUTPUT MODEL

    X = C·X + D    ⇒    X = (I - C)^(-1) · D

The entries in (I - C)^(-1) are significant because they can be used to predict how the production x will have to change when the final demand d changes.
In fact, the entries in column j of (I - C)^(-1) are the increased amounts the various sectors will have to produce in order to satisfy an increase of 1 unit in the final demand for output from sector j.
PRACTICE PROBLEM
Suppose an economy has two sectors: goods and services.
 One unit of output from goods requires inputs of 0.2 unit
from goods and 0.5 unit from services.
 One unit of output from services requires inputs of 0.4
unit from goods and 0.3 unit from services.
 There is a final demand of 20 units of goods and 30 units
of services.
Set up the Leontief input–output model for this situation.
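One way to set this up numerically (a sketch on our part; the column ordering goods-then-services is our choice):

```python
import numpy as np

# Column j lists the inputs consumed per unit of output of sector j (goods, services).
C = np.array([[0.2, 0.4],     # inputs from goods
              [0.5, 0.3]])    # inputs from services
d = np.array([20.0, 30.0])    # final demand for goods and services

x = np.linalg.solve(np.eye(2) - C, d)   # production levels solving (I - C) x = d
print(np.round(x, 2))
```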
EXERCISES
Consider an economy that is divided into three sectors: manufacturing, agriculture, and services.
 For each unit of output, manufacturing requires 0.10 unit from other companies in that sector, 0.30 unit from agriculture, and 0.30 unit from services.
 For each unit of output, agriculture uses 0.20 unit of its own output, 0.60 unit from manufacturing, and 0.10 unit from services.
 For each unit of output, the services sector consumes 0.10 unit from services, 0.60 unit from manufacturing, but no agricultural products.

A) Construct the consumption matrix for this economy.
B) Determine what intermediate demands are created if agriculture plans to produce 100 units.
C) Determine the production levels needed to satisfy a final demand of 20 units for manufacturing, 20 units for agriculture, and 0 units for services.
EXAMPLE
Solve the Leontief production equation for an economy with three sectors, given that:

    C = | 0.2  0.2  0   |        D = | 40 |
        | 0.3  0.1  0.3 |            | 60 |
        | 0.1  0    0.2 |            | 80 |
   
EXERCISE

Note. Sometimes the consumption matrix C is also denoted by A.


ASSUMPTION 1

Although Assumption 1 is sufficient for establishing a


relation between the commodity inputs and the output,
the nature of this relation is not specified.
However, in the analysis of the production structure one
needs to specify the properties of such a relation.
A historically important and widely used assumption
is the following.
ASSUMPTION 2

The linearity assumption, despite its simple and innocuous appearance, has far-reaching implications. First, as stated above, this assumption implies that the inputs required per unit of output remain invariant as the scale of production changes. Therefore the linearity assumption implies constant returns to scale in production.
In other words, by making this assumption, production technologies that exhibit decreasing (or increasing) returns to scale are excluded. Second, the fact that aij is constant implies that substitution is not allowed among inputs. The production technology allows only one technique to be operated.
Although such an assumption may hold in the (very) short run, it is too restrictive to represent the actual choices that a production unit faces.
ASSUMPTION 3

One other assumption implicit in the above framework is


the absence of joint production. This means each
production technique produces only one commodity.
In reality this may not be the case. Consider a petroleum
refinery. Refining crude oil is a technique that leads to
multiple oil products, various types of fuel and liquid gas.
In this case, the production technique cannot be labelled by referring to a specific product. In this chapter we shall not deal with the problem of joint production.
