Lecture 33

The document discusses constrained optimization problems, focusing on methods such as the Lagrangian method for equality constraints and the KKT conditions for inequality constraints. It outlines the formulation of optimization problems, the use of Lagrange multipliers, and the necessary conditions for optimality. Additionally, it provides examples and verification methods for identifying local and global optima in constrained optimization scenarios.


NLP: CONSTRAINED, MULTIPLE VARS
… where we see optimality conditions for multivariate constrained nonlinear optimization problems

NLP: CONSTRAINED, MULTIPLE VARS
With only equality constraints
- The Lagrangian Method

Constrained Optimization with Equality Constraints


• Suppose we have an optimization problem of the following type:

      max f(x)
      s.t. g_i(x) = b_i for i = 1, …, m

  where f(x) and any of the g_i(x) may be nonlinear and x = (x_1, x_2, …, x_n)

• Given this problem, we can write a related unconstrained optimization problem:

      max_{x,λ} h(x, λ)

      h(x, λ) := f(x) − Σ_{i=1}^{m} λ_i (g_i(x) − b_i)

  where we have added m new variables λ_1, λ_2, …, λ_m called Lagrange multipliers

• The objective function h(x, λ) is called the Lagrange function


Constrained Optimization with Equality Constraints
  Constrained problem:   max f(x)  s.t.  g_i(x) = b_i for i = 1, …, m
  Lagrangian problem:    max_{x,λ} h(x, λ),  where  h(x, λ) := f(x) − Σ_{i=1}^{m} λ_i (g_i(x) − b_i)

• We can find the stationary points of h(x, λ) in the same way that we found the stationary points of any other unconstrained multi-var NLP
• We now have n + m decision variables in h(x, λ)
• So take the gradient of h, set it to zero, and solve for x and λ
• Suppose (x*, λ*) = (x_1*, x_2*, …, x_n*, λ_1*, λ_2*, …, λ_m*) is a stationary point
  • Recall that ∂h(x, λ)/∂λ_i = −(g_i(x) − b_i) = 0 at any stationary point of h(x, λ)
  • So every stationary point corresponds to a feasible solution of our constrained optimization problem

• The stationary points of the Lagrangian are critical points of the constrained optimization problem
  • They may be local or global optima, or they could be neither
  • But if x* = (x_1*, x_2*, …, x_n*) is a constrained optimum, then there must be some λ* = (λ_1*, λ_2*, …, λ_m*) such that (x*, λ*) is a stationary point of the Lagrangian
  • This gives us a necessary condition for the optima of the constrained problem

• After finding the stationary points of the Lagrangian, pick the maximizer among them
  • Caution: Remember to verify whether the picked point is a maximizer/minimizer (a symbolic sketch of this recipe follows)
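
The recipe above can be carried out symbolically. Below is a minimal sympy sketch on a hypothetical toy problem (max x_1·x_2 subject to x_1 + x_2 = 1; both the problem and the library choice are my additions, not from the lecture): form the Lagrange function, set its gradient over all n + m variables to zero, and solve.

```python
# Minimal sketch of the Lagrangian recipe, assuming sympy is installed.
# Hypothetical toy problem (not from the lecture): max x1*x2  s.t.  x1 + x2 = 1.
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lam', real=True)

f = x1 * x2                       # objective f(x)
g, b = x1 + x2, 1                 # one equality constraint g(x) = b

# Lagrange function h(x, lam) = f(x) - lam * (g(x) - b)
h = f - lam * (g - b)

# Stationary points: gradient over the n + m variables set to zero
grad = [sp.diff(h, v) for v in (x1, x2, lam)]
print(sp.solve(grad, [x1, x2, lam], dict=True))
# [{lam: 1/2, x1: 1/2, x2: 1/2}] -> the stationary point x* = (1/2, 1/2)
```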
Example

Constrained Optimization with Equality Constraints


• Consider the following constrained NLP:

      max x_1 + x_2
      s.t. x_1² + x_2² = 1

• The Lagrangian function is h(x, λ) = x_1 + x_2 − λ_1 (x_1² + x_2² − 1)

• Find stationary points by setting ∇h(x, λ) = 0:

  • ∂h/∂x_1 = 1 − 2 λ_1 x_1 = 0
  • ∂h/∂x_2 = 1 − 2 λ_1 x_2 = 0
  • ∂h/∂λ_1 = −(x_1² + x_2² − 1) = 0

• We get a system of nonlinear equations, which may be difficult to solve


Example

Constrained Optimization with Equality Constraints


• From the first two equations, we have:
  • ∂h/∂x_1 = 1 − 2 λ_1 x_1 = 0, which implies x_1* = 1/(2 λ_1*)
  • ∂h/∂x_2 = 1 − 2 λ_1 x_2 = 0, which implies x_2* = 1/(2 λ_1*)

• Substituting into the third equation, we have:

  • ∂h/∂λ_1 = −((x_1*)² + (x_2*)² − 1) = −(1/(2 λ_1*)² + 1/(2 λ_1*)² − 1) = 0, which implies λ_1* = ±1/√2

• Taking each of the two possible values for λ_1*, we find two stationary points:
  • (x*, λ*) = (1/√2, 1/√2, 1/√2), i.e., (x_1*, x_2*) = (1/√2, 1/√2)
  • (x*, λ*) = (−1/√2, −1/√2, −1/√2), i.e., (x_1*, x_2*) = (−1/√2, −1/√2)

• The stationary point with the best objective value is (x_1*, x_2*) = (1/√2, 1/√2)
  Is this a maximizer or a minimizer?
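
As a cross-check on the algebra above, the same two stationary points can be recovered symbolically; a minimal sketch, assuming sympy is available:

```python
# Solve grad h = 0 for the example: max x1 + x2  s.t.  x1^2 + x2^2 = 1.
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lam', real=True)
h = x1 + x2 - lam * (x1**2 + x2**2 - 1)      # Lagrange function

grad = [sp.diff(h, v) for v in (x1, x2, lam)]
for p in sp.solve(grad, [x1, x2, lam], dict=True):
    print(p, 'objective =', (x1 + x2).subs(p))
# (1/sqrt(2), 1/sqrt(2)) gives objective sqrt(2);
# (-1/sqrt(2), -1/sqrt(2)) gives objective -sqrt(2).
```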
Example: Verification by the Graphical Method

Constrained Optimization with Equality Constraints


      max x_1 + x_2  s.t.  x_1² + x_2² = 1

• Feasible region: the circle x_1² + x_2² = 1
  [Figure: the unit circle in the (x_1, x_2) plane]
Example: Verification by the Graphical Method

Constrained Optimization with Equality Constraints


      max x_1 + x_2  s.t.  x_1² + x_2² = 1

• Critical points: (1/√2, 1/√2) and (−1/√2, −1/√2)
  [Figure: the unit circle with the two critical points marked]
Example: Verification by the Graphical Method

Constrained Optimization with Equality Constraints


      max x_1 + x_2  s.t.  x_1² + x_2² = 1

• Iso-line: x_1 + x_2 = 0
  [Figure: the unit circle with the iso-line x_1 + x_2 = 0 and the two critical points marked]

Solving graphically, we observe that
• (1/√2, 1/√2) is a global maximum
• (−1/√2, −1/√2) is a global minimum
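
The picture described above can be reproduced with a few lines of plotting code; a minimal matplotlib sketch (the plotting choices are mine, not the lecture's):

```python
# Graphical verification: feasible circle, iso-lines of x1 + x2, critical points.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 2 * np.pi, 400)
plt.plot(np.cos(t), np.sin(t), label='feasible region: x1^2 + x2^2 = 1')

s = np.linspace(-1.5, 1.5, 2)
for c in (-np.sqrt(2), 0, np.sqrt(2)):
    plt.plot(s, c - s, '--', alpha=0.5)       # iso-lines x1 + x2 = c

r = 1 / np.sqrt(2)
plt.plot([r, -r], [r, -r], 'o')               # the two critical points
plt.annotate('global max', (r, r))
plt.annotate('global min', (-r, -r))
plt.gca().set_aspect('equal')
plt.legend()
plt.show()
```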
NLP: CONSTRAINED, MULTIPLE VARS
With inequality constraints
- The KKT Conditions Method
Multi-var Constrained Opt

• Suppose we want to solve


      max f(x)
      s.t. g_i(x) ≤ b_i for i = 1, …, m

  where f(x) and any of the g_i(x) may be nonlinear and x = (x_1, x_2, …, x_n)

KKT conditions

• Suppose we have an optimization problem of the following type:


      max f(x)
      s.t. g_i(x) ≤ b_i for i = 1, …, m

  where f(x) and any of the g_i(x) may be nonlinear and x = (x_1, x_2, …, x_n)

Theorem: If x = (x_1, x_2, …, x_n) is a local or global optimum of the constrained problem, then there must be values u = (u_1, u_2, …, u_m) such that:

1. ∂f(x)/∂x_j − Σ_{i=1}^{m} u_i ∂g_i(x)/∂x_j = 0 for j = 1, 2, …, n
2. g_i(x) ≤ b_i for i = 1, 2, …, m
3. u_i ≥ 0 for i = 1, 2, …, m
4. u_i (g_i(x) − b_i) = 0 for i = 1, 2, …, m

• Conditions 1, 2, 3, and 4 are called Karush-Kuhn-Tucker (KKT) conditions


• NOTE: The theorem also requires f and the g_i to satisfy some "regularity conditions", which are satisfied by the functions encountered within the scope of this course
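
Given a candidate point and candidate multipliers, conditions 1-4 can be tested mechanically. A minimal numpy sketch (the helper name `kkt_holds` and the hand-coded gradients are my illustration, not lecture material):

```python
# Numeric check of KKT conditions 1-4 for a candidate (x, u), up to a tolerance.
import numpy as np

def kkt_holds(x, u, grad_f, gs, grad_gs, bs, tol=1e-8):
    x, u = np.asarray(x, float), np.asarray(u, float)
    stationarity = grad_f(x) - sum(ui * gg(x) for ui, gg in zip(u, grad_gs))
    feasible = all(g(x) <= b + tol for g, b in zip(gs, bs))            # condition 2
    comp_slack = all(abs(ui * (g(x) - b)) <= tol                       # condition 4
                     for ui, g, b in zip(u, gs, bs))
    return (np.allclose(stationarity, 0, atol=tol)                     # condition 1
            and feasible and bool((u >= -tol).all()) and comp_slack)   # condition 3

# Example: max x1 + x2 s.t. x1^2 + x2^2 <= 1, candidate x* = (1/sqrt(2), 1/sqrt(2)).
r = 1 / np.sqrt(2)
print(kkt_holds([r, r], [r],
                grad_f=lambda x: np.array([1.0, 1.0]),
                gs=[lambda x: x[0]**2 + x[1]**2],
                grad_gs=[lambda x: 2 * x],
                bs=[1.0]))    # True: u1 = 1/sqrt(2) certifies this KKT point
```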
Constrained Optimization with Inequality Constraints
(KKT conditions)

• Like the Lagrangian, we can use these conditions to identify critical points that
could be local or global optima

• These points are called KKT points

• After computing the KKT points, pick the maximizer among them
• Caution: Remember to verify whether the picked point is a maximizer/minimizer
• How to verify?
  1. Graphical method
  2. If the objective function f is concave and all the constraint functions g_i are convex, then the KKT point that you found is a maximizer (a Hessian-based check is sketched below)
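
For verification route 2, concavity of f and convexity of each g_i can be checked via Hessian eigenvalues; a small sympy sketch for the example that follows (the check itself is my illustration):

```python
# f is concave iff its Hessian is negative semidefinite; g_i is convex iff its
# Hessian is positive semidefinite. Linear functions pass both tests.
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
for name, expr in [('f', x1 + x2), ('g1', x1**2 + x2**2), ('g2', x1)]:
    H = sp.hessian(expr, (x1, x2))
    print(name, 'Hessian eigenvalues:', list(H.eigenvals()))
# f: [0], g1: [2], g2: [0] -- concave objective, convex constraints,
# so a KKT point of this problem is guaranteed to be a maximizer.
```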
Example

Constrained Optimization with Inequality Constraints


(KKT conditions)
• Consider the following constrained NLP:

      max x_1 + x_2             f(x) = x_1 + x_2
      s.t. x_1² + x_2² ≤ 1       g_1(x) = x_1² + x_2², b_1 = 1
           x_1 ≤ 1/2             g_2(x) = x_1, b_2 = 1/2

• The KKT conditions for this problem are:

  1. ∂f/∂x_1 − u_1 ∂g_1/∂x_1 − u_2 ∂g_2/∂x_1 = 0, i.e., 1 − 2 u_1 x_1 − u_2 = 0
  2. ∂f/∂x_2 − u_1 ∂g_1/∂x_2 − u_2 ∂g_2/∂x_2 = 0, i.e., 1 − 2 u_1 x_2 = 0
  3. g_1(x) ≤ b_1, i.e., x_1² + x_2² ≤ 1
  4. g_2(x) ≤ b_2, i.e., x_1 ≤ 1/2
  5. u_1 (g_1(x) − b_1) = 0, i.e., u_1 (x_1² + x_2² − 1) = 0
  6. u_2 (g_2(x) − b_2) = 0, i.e., u_2 (x_1 − 1/2) = 0
  7. u_1, u_2 ≥ 0

• Solving these conditions provides the KKT points (one way to enumerate the cases is sketched below)
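
A minimal sympy sketch of the case enumeration carried out on the next slide (the enumeration loop is my illustration, not lecture code): for each choice of which constraints are active, solve the equality part of the system, then keep only solutions that are feasible and have u ≥ 0.

```python
# Enumerate complementary-slackness cases for the example's KKT system.
import itertools
import sympy as sp

x1, x2, u1, u2 = sp.symbols('x1 x2 u1 u2', real=True)
gs, bs = [x1**2 + x2**2, x1], [1, sp.Rational(1, 2)]
stationarity = [1 - 2*u1*x1 - u2, 1 - 2*u1*x2]        # conditions 1 and 2

for active in itertools.product([False, True], repeat=2):
    eqs = list(stationarity)
    for act, g, b, u in zip(active, gs, bs, (u1, u2)):
        eqs.append(g - b if act else u)               # g = b if active, else u = 0
    for sol in sp.solve(eqs, [x1, x2, u1, u2], dict=True):
        feasible = all((g - b).subs(sol) <= 0 for g, b in zip(gs, bs))
        dual_ok = all(sol[u] >= 0 for u in (u1, u2))
        if feasible and dual_ok:
            print('KKT point:', sol)
# Only one case survives: x1 = 1/2, x2 = sqrt(3)/2, u1 = 1/sqrt(3), u2 = 1 - 1/sqrt(3)
```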


Example

Constrained Optimization with Inequality Constraints


(KKT conditions)

Recall the conditions:
  1. 1 − 2 u_1 x_1 − u_2 = 0
  2. 1 − 2 u_1 x_2 = 0
  3. x_1² + x_2² ≤ 1
  4. x_1 ≤ 1/2
  5. u_1 (x_1² + x_2² − 1) = 0
  6. u_2 (x_1 − 1/2) = 0
  7. u_1, u_2 ≥ 0

• To solve, try different cases for the u variables
• First choice: u_1 = 0 → impossible (violates equation 2)
• What if u_1 > 0 and u_2 = 0?
  • x_1 = 1/(2 u_1) (by eqn 1)
  • x_2 = 1/(2 u_1) (by eqn 2)
  • u_1 = 1/√2 (by eqn 5)
  • Then x_1 = 1/(2 u_1) = 1/√2, which contradicts constraint 4

• What if u_1 > 0 and u_2 > 0?
  • x_1 = 1/2 (by eqn 6)
  • x_2 = ±√3/2 (by eqn 5 with u_1 > 0)
  • If x_2 = −√3/2, then u_1 = −1/√3 (by eqn 2), which contradicts u_1 > 0
  • If x_2 = √3/2, then u_1 = 1/√3 (by eqn 2)
  • u_2 = 1 − 1/√3 (by eqn 1)

• The KKT point is (x_1, x_2, u_1, u_2) = (1/2, √3/2, 1/√3, 1 − 1/√3)
  Note: This is the unique KKT point. We still need to verify whether it is a maximizer or a minimizer.
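
As a final numeric cross-check (my addition, assuming scipy is available), a general-purpose constrained solver recovers the same point:

```python
# Maximize x1 + x2 by minimizing its negation under the two inequality constraints.
import numpy as np
from scipy.optimize import minimize

res = minimize(
    lambda x: -(x[0] + x[1]),                  # maximize x1 + x2
    x0=[0.0, 0.0],
    method='SLSQP',
    constraints=[
        {'type': 'ineq', 'fun': lambda x: 1 - x[0]**2 - x[1]**2},  # x1^2 + x2^2 <= 1
        {'type': 'ineq', 'fun': lambda x: 0.5 - x[0]},             # x1 <= 1/2
    ],
)
print(res.x)                                   # approx [0.5, 0.866] = (1/2, sqrt(3)/2)
print(np.allclose(res.x, [0.5, np.sqrt(3) / 2], atol=1e-4))        # True
```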
