Optimization with Constraints

This document discusses methods for finding feasible directions to minimize an objective function subject to constraints. It states that the search vector definition s = -P ∇f serves as a test for optimality. It describes two strategies for handling inequality constraints: using slack variables or an active constraint strategy. The active constraint strategy is preferred for projection methods, as it reduces the size of P and A. Formulas are provided for calculating the Lagrange multipliers from A and ∇f. Linear approximations are used to determine a locally good search direction that maximizes descent and leads to feasible points.

P is symmetric and positive semidefinite.

∇f^T s = ∇f^T (-P ∇f) = -(∇f)^T P (∇f) ≤ 0   (since P is positive semidefinite)


(∇f)^T s ≤ 0 ⇒ s is a descent direction

If s = -P ∇f = 0, then ∇f is perpendicular to the constraint surface.
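As a concrete illustration, here is a minimal NumPy sketch of the projection matrix P = I - A^T (A A^T)^-1 A and the search vector s = -P ∇f. The matrix A and the gradient below are invented values for illustration only, not data from the text:

```python
import numpy as np

# Hypothetical active constraint normals (rows of A) and gradient at
# the current point; these values are assumptions for illustration.
A = np.array([[1.0, 1.0, 0.0]])        # one active linear constraint
grad_f = np.array([2.0, 0.0, 1.0])     # gradient of f

n = A.shape[1]
# Projection onto the null space of A: P = I - A^T (A A^T)^-1 A
P = np.eye(n) - A.T @ np.linalg.solve(A @ A.T, A)

s = -P @ grad_f                        # projected steepest-descent direction

# P is symmetric positive semidefinite, so grad_f . s <= 0
print(grad_f @ s)                      # non-positive: s is a descent direction
print(A @ s)                           # ~0: s lies in the constraint surface
```

Note that if s comes out as the zero vector, ∇f lies entirely in the row space of A, which is the optimality test discussed above.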

∇f = A^T r   (∇f is a linear combination of the set of constraint normals)

Since the rows of A are the coefficient vectors of the linear constraints, there is no difference between the equations

∇f = A^T r   and   ∇f - Σ_k u_k a_k = 0   (stationarity of the Lagrangian),

with r = v, the Lagrange multipliers.

Necessary conditions:

∇f - Σ_k v_k ∇h_k(x) = 0;   the gradients ∇h_k(x)

are perpendicular to the constraint surface, with normals a_k.

The Lagrange multipliers will then be given by

v = r = (A A^T)^-1 A ∇f

Proof:

∇f = A^T r
A ∇f = A A^T r
(A A^T)^-1 A ∇f = r = v
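A small numerical sketch of this formula (the values of A and ∇f below are hypothetical, chosen only to make the computation easy to check):

```python
import numpy as np

# Hypothetical constraint normals and gradient (illustrative values only)
A = np.array([[1.0, 0.0],
              [0.0, 1.0]])
grad_f = np.array([3.0, -2.0])

# v = (A A^T)^-1 A grad_f, solved without forming the explicit inverse
v = np.linalg.solve(A @ A.T, A @ grad_f)
print(v)          # the multipliers r = v

# Check: when grad_f lies in the row space of A, grad_f = A^T v exactly
print(A.T @ v)
```

Using `np.linalg.solve` instead of computing `(A A^T)^-1` explicitly is the standard, numerically safer way to evaluate this expression.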

Note: the search vector definition s = -P ∇f serves as a test for optimality.

Treatment of inequality constraints


Two strategies:
1) slack variables
2) active constraint strategy
In projection methods the latter is preferred, as it reduces the size of P and A. A has, as its first K rows, the equality constraints and, as its last M rows, the active inequality constraints, i.e. those inequality constraints for which

a_j^T x^(t) - b_j = 0   (j = 1, …, M)

P is calculated from this A. As for the Lagrange multipliers v = (A A^T)^-1 A ∇f, the multipliers of the active inequality constraints are, by the Kuhn-Tucker conditions, expected to be non-negative at the optimal point.
Please look at the gradient algorithm.
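One way to assemble such an A is to stack the equality-constraint rows on top of the rows of the inequality constraints that are currently active, i.e. those satisfying a_j^T x - b_j = 0 within a tolerance. This is a sketch under assumed data; the constraint arrays below are invented for illustration:

```python
import numpy as np

# Hypothetical problem data (assumptions, not from the text):
A_eq = np.array([[1.0, 1.0]])            # K = 1 equality constraint row
A_ineq = np.array([[1.0, 0.0],           # inequality rows a_j^T
                   [0.0, 1.0]])
b_ineq = np.array([0.0, 5.0])            # right-hand sides b_j
x = np.array([0.0, 1.0])                 # current point x^(t)

tol = 1e-8
# Active inequalities: a_j^T x - b_j == 0 (within tolerance)
active = np.abs(A_ineq @ x - b_ineq) <= tol

# A: the K equality rows first, then the M active inequality rows
A = np.vstack([A_eq, A_ineq[active]])
print(A)
```

At this x only the first inequality is active, so A has two rows rather than three, which is exactly the size reduction the active constraint strategy is after.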
Direction Generation methods
Based on linearization

We use linear approximations to determine a locally good search direction. The direction should give maximum descent, so as to approach the minimum of f(x), and it should lead to feasible points, so that the constraints are not violated.
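A common way to make "maximum descent while staying feasible" precise is a Zoutendijk-style direction-finding linear program: minimize ∇f^T d subject to ∇g_j^T d ≥ 0 for the active constraints, with d bounded to keep the problem finite. The sketch below uses `scipy.optimize.linprog` on invented data (the gradient and active-constraint rows are assumptions, not from the text):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: gradient of f and gradients of the constraints
# active at the current point, for the convention g_j(x) >= 0.
grad_f = np.array([1.0, 1.0])
G_active = np.array([[1.0, 0.0]])   # rows: grad g_j at the current point

# Direction-finding LP: minimize grad_f . d
# subject to grad_g_j . d >= 0 (written as -G d <= 0), -1 <= d_i <= 1.
res = linprog(c=grad_f,
              A_ub=-G_active, b_ub=np.zeros(G_active.shape[0]),
              bounds=[(-1.0, 1.0)] * len(grad_f))

d = res.x
print(d, res.fun)   # res.fun < 0 means a feasible descent direction exists
```

If the optimal objective value is zero, no feasible descent direction exists to first order, which serves as a stopping test.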

Methods of feasible directions


Inequality constrained problem
Minimize f(x) subject to g_j(x) ≥ 0,  j = 1, 2, …, J

Let x^(1) be a starting point; it satisfies all constraints, i.e. g_j(x^(1)) ≥ 0,  j = 1, 2, …, J.

Suppose that a certain subset of these constraints is active, or binding, at x^(1).

1) Direction d is a good search direction if it is a descent direction:

∇f(x^(1))^T d < 0        (1)

and if, for a small distance along the ray

x(α) = x^(1) + α d,  with α ≥ 0,

the points x(α) along d are feasible. To a first-order approximation, the points x(α) along d will be feasible if, for all constraints active at x^(1), we have

g̃_j(x; x^(1)) = g_j(x^(1)) + ∇g_j(x^(1))^T (x - x^(1)) ≥ 0

Since g_j(x^(1)) = 0 for the active constraints, and x - x^(1) = α d with α ≥ 0, this reduces to

∇g_j(x^(1))^T d ≥ 0        (2)

A direction d that satisfies both inequalities (1) and (2) at x^(1) is called a feasible direction.
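These two tests translate directly into code. The following is a minimal sketch; the function name and the numerical example (active constraint g(x) = x1 ≥ 0 at x^(1) = (0, 1)) are invented for illustration:

```python
import numpy as np

def is_feasible_direction(grad_f, grads_g_active, d, tol=1e-12):
    """Check (1) descent and (2) first-order feasibility for direction d."""
    descent = grad_f @ d < -tol                            # grad_f . d < 0
    feasible = all(g @ d >= -tol for g in grads_g_active)  # grad_g_j . d >= 0
    return descent and feasible

# Hypothetical example: active constraint g(x) = x1 >= 0 at x^(1) = (0, 1)
grad_f = np.array([1.0, 1.0])
grads_g_active = [np.array([1.0, 0.0])]

print(is_feasible_direction(grad_f, grads_g_active, np.array([0.0, -1.0])))  # True
print(is_feasible_direction(grad_f, grads_g_active, np.array([-1.0, 0.0])))  # False
```

The first direction moves along the constraint boundary while decreasing f, so it passes both tests; the second is a descent direction but immediately violates the active constraint, so it is rejected by (2).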
