Economic Load Dispatch Using
Reduced Gradient Method
March 15, 2025
Contents

1 Introduction
2 Problem Formulation
  2.1 Objective Function
  2.2 Constraints
    2.2.1 Power Balance Constraint
    2.2.2 Generator Capacity Constraints
  2.3 Transmission Line Losses
3 The Reduced Gradient Method
  3.1 Mathematical Formulation
  3.2 Algorithm Steps
4 Code Implementation
  4.1 Main Program Structure
    4.1.1 Data Initialization
    4.1.2 Finding Feasible Initial Solution
    4.1.3 Main Iteration Loop
  4.2 Reduced Gradient Function
5 Key Algorithmic Features
  5.1 Initial Solution Feasibility
  5.2 Adaptive Step Size
  5.3 Penalty Factors
  5.4 Dependent Generator Selection
  5.5 Convergence Criteria
6 Example Test Case
  6.1 Generator Data
  6.2 System Demand
  6.3 Expected Results Analysis
7 Visualization and Analysis
  7.1 Interpretation of Results
8 Conclusion
  8.1 Future Enhancements
1 Introduction
Economic Load Dispatch (ELD) is a fundamental optimization problem in
power system operation that aims to determine the optimal output of mul-
tiple generating units to meet a specific load demand at the lowest possible
cost while satisfying various operational constraints. This document explains
the implementation of the Reduced Gradient Method for solving the ELD
problem, taking into account transmission line losses.
2 Problem Formulation
2.1 Objective Function
The objective of the ELD problem is to minimize the total generation cost:
\min \sum_{i=1}^{N} C_i(P_{Gi})    (1)
where:
• N is the number of generators
• P_{Gi} is the power output of generator i
• C_i(P_{Gi}) is the cost function of generator i
The cost function for each generator is typically modeled as a quadratic
function:
C_i(P_{Gi}) = a_i P_{Gi}^2 + b_i P_{Gi} + c_i    (2)
where a_i, b_i, and c_i are the cost coefficients.
2.2 Constraints
2.2.1 Power Balance Constraint
The total power generated must equal the sum of the total demand and the
transmission line losses:
\sum_{i=1}^{N} P_{Gi} = P_D + P_L    (3)
where:
• P_D is the total load demand
• P_L is the total transmission line losses
2.2.2 Generator Capacity Constraints
The power output of each generator is bounded by its minimum and maxi-
mum limits:
P_{Gi}^{min} \leq P_{Gi} \leq P_{Gi}^{max}    (4)
2.3 Transmission Line Losses
Transmission line losses are modeled using simplified B-coefficients:
P_L = \sum_{i=1}^{N} B_{ii} P_{Gi}^2    (5)
where B_{ii} are the loss coefficients for each generator.
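As a quick illustration of equations (1)-(5), the snippet below evaluates the total cost, the simplified losses, and the power-balance mismatch for an arbitrary candidate dispatch, using the test-case coefficients from Section 6 (the dispatch values here are placeholders, not the optimal solution):

% Test-case coefficients (see Table 1); the dispatch pg is an arbitrary guess
a = [0.004; 0.006; 0.009];          % quadratic cost coefficients
b = [5.3; 5.5; 5.8];                % linear cost coefficients
c = [500; 400; 200];                % constant cost coefficients
B = [0.00003; 0.00009; 0.00012];    % simplified loss coefficients
pd = 975;                           % demand (MW)

pg = [450; 350; 200];               % candidate dispatch (MW), not optimal

total_cost = sum(a.*pg.^2 + b.*pg + c);    % equations (1)-(2)
ploss      = sum(B.*pg.^2);                % equation (5)
mismatch   = sum(pg) - (pd + ploss);       % equation (3): ~0 when feasible

fprintf('Cost = %.2f, Losses = %.2f MW, Mismatch = %.2f MW\n', ...
        total_cost, ploss, mismatch);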
3 The Reduced Gradient Method
3.1 Mathematical Formulation
The reduced gradient method is an optimization technique for solving con-
strained optimization problems. For the ELD problem, we can form the
Lagrangian:
L(P_G, \lambda) = \sum_{i=1}^{N} C_i(P_{Gi}) + \lambda \left( \sum_{i=1}^{N} P_{Gi} - P_D - P_L \right)    (6)
where λ is the Lagrange multiplier.
The optimality conditions require:
\frac{\partial L}{\partial P_{Gi}} = \frac{dC_i}{dP_{Gi}} + \lambda \left( 1 - \frac{\partial P_L}{\partial P_{Gi}} \right) = 0, \quad i = 1, 2, \ldots, N    (7)
This leads to the concept of the penalty factor for each generator:
PF_i = \frac{1}{1 - \partial P_L / \partial P_{Gi}}    (8)
With the simplified loss model, the penalty factor becomes:
PF_i = \frac{1}{1 - 2 B_{ii} P_{Gi}}    (9)
The reduced gradient method classifies variables into dependent and in-
dependent sets. In the ELD problem, we typically select one generator as the
dependent variable (usually the last one) and adjust it to satisfy the power
balance constraint.
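To make the reduced-gradient direction concrete, the sketch below computes the penalty factors of equation (9) and the reduced gradient of each independent generator relative to the chosen dependent generator. The variable names (pg, a, b, ploss_coeff) mirror those used in Section 4; the helper function itself is illustrative, not part of the original program.

% Illustrative helper: reduced-gradient direction relative to generator dep.
function g = reduced_gradient_direction(pg, a, b, ploss_coeff, dep)
    ic = 2*a.*pg + b;                    % incremental costs dC_i/dPG_i
    pf = 1./(1 - 2*ploss_coeff.*pg);     % penalty factors, equation (9)
    g  = pf.*ic - pf(dep)*ic(dep);       % reduced gradient; ~0 at the optimum
    g(dep) = 0;                          % dependent generator is not updated
end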
3.2 Algorithm Steps
1. Initialize generator outputs PG and Lagrange multiplier λ
2. Calculate initial losses PL
3. Compute penalty factors P F
4. For each iteration:
(a) Select a dependent generator (typically the last one)
(b) Adjust its output to satisfy the power balance constraint
(c) Calculate the reduced gradient for each independent generator
(d) Update generator outputs using gradient descent
(e) Check for generator limit violations and adjust accordingly
(f) Recalculate losses and update the dependent generator
(g) Check for convergence
4 Code Implementation
4.1 Main Program Structure
The main program is structured as follows:
1. Data initialization
2. Finding a feasible initial solution
3. Main iteration loop calling the reduced gradient function
4. Final adjustments to ensure power balance
5. Results display and visualization
4.1.1 Data Initialization
% Load ELD data
% Format: [a, b, c, pg_min, pg_max, pgi_guess, ploss_coeff]
PG_data = [0.004, 5.3, 500, 200, 450, 0, 0.00003;
           0.006, 5.5, 400, 150, 350, 0, 0.00009;
           0.009, 5.8, 200, 100, 225, 0, 0.00012];

N = length(PG_data(:,1));       % Number of generators
a = PG_data(:,1);               % Quadratic cost coefficient
b = PG_data(:,2);               % Linear cost coefficient
c = PG_data(:,3);               % Constant cost coefficient
pg_min = PG_data(:,4);          % Minimum generation limit
pg_max = PG_data(:,5);          % Maximum generation limit
ploss_coeff = PG_data(:,7);     % Loss coefficients

pd = 975;                       % Demand value in MW

Listing 1: Data initialization for the ELD problem
4.1.2 Finding Feasible Initial Solution
The code initializes the generators with a feasible solution by:
1. Setting most generators to their maximum capacity
2. Estimating initial losses
3. Adjusting the swing generator to balance the system
4. Checking if the swing generator violates its limits and redistributing if
necessary
% Initialize with generators at maximum except the last one
pg = zeros(N, 1);
for i = 1:N-1
    pg(i) = pg_max(i);
end

% Calculate initial losses estimate
initial_loss_estimate = pd * 0.03;    % Assume 3% losses initially
target_gen = pd + initial_loss_estimate;

% Adjust to meet the target generation + estimated losses
if sum(pg(1:N-1)) > target_gen
    % Sort by marginal cost (descending) to reduce most expensive first
    [~, cost_order] = sort(2*a(1:N-1).*pg_max(1:N-1) + b(1:N-1), 'descend');

    excess = sum(pg(1:N-1)) - target_gen;
    for idx = 1:N-1
        i = cost_order(idx);
        reduction = min(excess, pg(i) - pg_min(i));
        pg(i) = pg(i) - reduction;
        excess = excess - reduction;
        if excess <= 0
            break;
        end
    end
end

% Set the swing generator to balance (initial estimate without losses)
pg(N) = pd - sum(pg(1:N-1));

% Calculate initial losses and update swing generator
ploss = sum(ploss_coeff.*pg.^2);
pg(N) = pd + ploss - sum(pg(1:N-1));    % Update with losses

Listing 2: Finding a feasible initial solution
4.1.3 Main Iteration Loop
The main iteration loop calls the reduced gradient function and updates
parameters until convergence:
for iteration = 1:max_iterations
    % Call reduced gradient function
    [pg, lambda, ploss_new] = reduced_gradient_function(alpha, N, ...
        error_tolerance_reduced_gradient, a, b, c, lambda, ploss_coeff, ...
        pd, ploss, pf, pg_old, pg_min, pg_max);

    % Update penalty factors
    pf_new = 1./(1 - 2*pg.*ploss_coeff);

    % Calculate difference in losses
    diff_ploss = sum(ploss_new) - sum(ploss);

    % Check convergence criteria
    % (limits_violated and power_balance are computed elsewhere in the
    %  main program from the returned dispatch; not shown in this excerpt)
    is_converged_loss = (abs(diff_ploss) < error_tolerance_ploss_diff);
    is_within_limits = ~limits_violated;
    is_balanced = (abs(power_balance) < 0.1);

    if is_converged_loss && is_within_limits && is_balanced
        break;
    end

    % Update for next iteration
    ploss = ploss_new;
    pf = pf_new;
    pg_old = pg;

    % Adaptive step size adjustment
    if iteration > 10
        if abs(diff_ploss) > error_tolerance_ploss_diff*10 || abs(power_balance) > 1
            alpha = alpha * 0.95;         % Reduce step size
        elseif iteration > 30 && abs(diff_ploss) < error_tolerance_ploss_diff*100 ...
                && abs(power_balance) < 10
            alpha = alpha * 1.05;         % Increase step size
            alpha = min(alpha, 0.01);     % Cap step size
        end
    end
end

Listing 3: Main iteration loop
4.2 Reduced Gradient Function
The reduced gradient function implements the core optimization algorithm:
function [pg, lambda, ploss_updated] = reduced_gradient_function(alpha, N, ...
        error_tolerance, a, b, c, lambda, ploss_coeff, pd, ploss, pf, ...
        pg_old, pg_min, pg_max)
    % Initialize variables
    pg = pg_old;
    gradient_vector = zeros(N+1, 1);

    % Calculate initial losses
    ploss_updated = sum(ploss_coeff.*pg.^2);

    % Reduced gradient method iterations
    max_inner_iterations = 100;
    for iteration = 1:max_inner_iterations
        % Apply generator limits
        for i = 1:N
            if pg(i) < pg_min(i)
                pg(i) = pg_min(i);
            elseif pg(i) > pg_max(i)
                pg(i) = pg_max(i);
            end
        end

        % Recalculate losses
        ploss_updated = sum(ploss_coeff.*pg.^2);

        % Select dependent generator
        dependent_gen = N;

        % Set dependent generator to balance power
        pg(dependent_gen) = pd + ploss_updated - sum(pg(1:N)) + pg(dependent_gen);

        % Check dependent generator limits and redistribute if necessary
        if pg(dependent_gen) < pg_min(dependent_gen)
            % Handle case where dependent generator is below minimum
            deficit = pg_min(dependent_gen) - pg(dependent_gen);
            pg(dependent_gen) = pg_min(dependent_gen);
            % Find generators that can increase output
            % ... (redistribution logic)
        elseif pg(dependent_gen) > pg_max(dependent_gen)
            % Handle case where dependent generator is above maximum
            excess = pg(dependent_gen) - pg_max(dependent_gen);
            pg(dependent_gen) = pg_max(dependent_gen);
            % Find generators that can decrease output
            % ... (redistribution logic)
        end

        % Recalculate losses
        ploss_updated = sum(ploss_coeff.*pg.^2);

        % Calculate gradients for each generator
        for i = 1:N
            if i == dependent_gen
                gradient_vector(i) = 0;    % Skip dependent generator
                continue;
            end

            % Skip generators at their limits
            if pg(i) <= pg_min(i) && gradient_vector(i) > 0
                gradient_vector(i) = 0;
                continue;
            elseif pg(i) >= pg_max(i) && gradient_vector(i) < 0
                gradient_vector(i) = 0;
                continue;
            end

            % Calculate marginal costs
            dCost_i = 2*a(i)*pg(i) + b(i);      % Incremental cost of generator i
            dCost_dep = 2*a(dependent_gen)*pg(dependent_gen) + ...
                b(dependent_gen);               % Incremental cost of dependent generator

            % Calculate loss sensitivities
            dLoss_i = 2*ploss_coeff(i)*pg(i);   % Change in losses due to generator i
            dLoss_dep = 2*ploss_coeff(dependent_gen)*pg(dependent_gen);
                                                % Change in losses due to dependent generator

            % Calculate penalty factors
            pf_i = 1/(1 - dLoss_i);
            pf_dep = 1/(1 - dLoss_dep);

            % Calculate reduced gradient
            gradient_vector(i) = pf_i*dCost_i - pf_dep*dCost_dep;
        end

        % Power balance constraint gradient
        gradient_vector(N+1) = sum(pg) - (pd + ploss_updated);

        % Update generators using gradient descent
        max_gradient = 0;
        for i = 1:N
            if i == dependent_gen
                continue;    % Skip dependent generator
            end

            % Only update if not at limits or if gradient pushes away from limit
            if (pg(i) > pg_min(i) && pg(i) < pg_max(i)) || ...
                    (pg(i) <= pg_min(i) && gradient_vector(i) < 0) || ...
                    (pg(i) >= pg_max(i) && gradient_vector(i) > 0)

                step = alpha*gradient_vector(i);
                pg(i) = pg(i) - step;

                % Apply limits after update
                if pg(i) < pg_min(i)
                    pg(i) = pg_min(i);
                elseif pg(i) > pg_max(i)
                    pg(i) = pg_max(i);
                end
            end

            % Track maximum gradient for convergence check
            max_gradient = max(max_gradient, abs(gradient_vector(i)));
        end

        % Update lambda (Lagrange multiplier)
        lambda = lambda + alpha*gradient_vector(N+1);

        % Recalculate dependent generator and losses
        ploss_updated = sum(ploss_coeff.*pg.^2);
        pg(dependent_gen) = pd + ploss_updated - sum(pg(1:N)) + pg(dependent_gen);

        % Apply limits to dependent generator
        if pg(dependent_gen) < pg_min(dependent_gen)
            pg(dependent_gen) = pg_min(dependent_gen);
        elseif pg(dependent_gen) > pg_max(dependent_gen)
            pg(dependent_gen) = pg_max(dependent_gen);
        end

        % Check convergence
        power_balance = sum(pg) - (pd + ploss_updated);
        if max_gradient < error_tolerance && abs(power_balance) < error_tolerance
            break;
        end
    end

    % Final recalculation of losses
    ploss_updated = sum(ploss_coeff.*pg.^2);
end

Listing 4: Reduced gradient function implementation
5 Key Algorithmic Features
5.1 Initial Solution Feasibility
The algorithm starts with a feasible solution by:
• Setting generators to their maximum capacity (except the last one)
• Estimating losses
• Balancing the system using the swing generator
• Checking and adjusting if the swing generator violates its limits
This initialization approach ensures that the algorithm begins from a valid
point in the feasible region, which helps with convergence. Starting with
generators at their maximum capacities allows the algorithm to determine
if capacity constraints are binding and provides a known reference point for
adjustments.
5.2 Adaptive Step Size
The algorithm uses an adaptive step size to improve convergence:
• Decreases step size when changes in losses are large or power balance
is poor
• Increases step size when convergence is slow but stable
• Caps the maximum step size to prevent oscillation
The adaptive step size mechanism is crucial for balancing between speed
and stability of convergence. When the solution is far from optimal, larger
steps help reach the vicinity of the optimum quickly. As the solution ap-
proaches the optimum, smaller steps provide more precise convergence with-
out overshooting.
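As a compact restatement of this rule, the helper below mirrors the thresholds used in Listing 3; the function itself is only an illustrative refactoring, not part of the original program.

% Illustrative refactoring of the step-size rule from Listing 3.
function alpha = adapt_step_size(alpha, iteration, diff_ploss, power_balance, tol)
    if iteration > 10
        if abs(diff_ploss) > tol*10 || abs(power_balance) > 1
            alpha = alpha * 0.95;             % progress is erratic: shrink the step
        elseif iteration > 30 && abs(diff_ploss) < tol*100 && abs(power_balance) < 10
            alpha = min(alpha * 1.05, 0.01);  % slow but stable: grow, capped at 0.01
        end
    end
end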
5.3 Penalty Factors
Penalty factors account for the effect of losses on incremental costs:
PF_i = \frac{1}{1 - 2 B_{ii} P_{Gi}}    (10)
These factors adjust the incremental costs to reflect the true cost of de-
livering power to the load. Penalty factors are essential in systems with
significant transmission losses, as they ensure that the optimization accounts
for the additional generation needed to compensate for these losses. Without
penalty factors, the solution would underestimate the true cost of generation.
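For example, with the loss coefficients of the test case in Section 6 and an illustrative (not optimal) dispatch of 400 MW, 350 MW, and 225 MW:

PF_1 = 1 / (1 - 2(0.00003)(400)) = 1/0.976 ≈ 1.025
PF_2 = 1 / (1 - 2(0.00009)(350)) = 1/0.937 ≈ 1.067
PF_3 = 1 / (1 - 2(0.00012)(225)) = 1/0.946 ≈ 1.057

so the incremental cost of generator 2 is effectively inflated by about 6.7% once delivery losses are accounted for.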
5.4 Dependent Generator Selection
The algorithm selects one generator (usually the last one) as the dependent
variable, which is adjusted to satisfy the power balance constraint. If this
generator violates its limits, the excess or deficit is redistributed among other
generators based on their incremental costs.
Choosing a dependent generator reduces the dimensionality of the prob-
lem and ensures that the power balance constraint is always satisfied during
the optimization process. The redistribution mechanism handles cases where
the dependent generator cannot alone satisfy the balance constraint due to
its capacity limits.
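The redistribution logic itself is elided in Listing 4 ("... (redistribution logic)"). One possible way to implement the deficit case, sketched here purely for illustration and not taken from the original code, is to raise the cheapest available generators first:

% Hypothetical sketch of deficit redistribution (the original code elides this).
% Raise the cheapest generators (by incremental cost) that still have headroom.
function pg = redistribute_deficit(pg, deficit, a, b, pg_max, dependent_gen)
    candidates = setdiff(1:length(pg), dependent_gen);
    [~, order] = sort(2*a(candidates).*pg(candidates) + b(candidates), 'ascend');
    for k = order(:)'
        i = candidates(k);
        delta = min(deficit, pg_max(i) - pg(i));   % available headroom
        pg(i) = pg(i) + delta;
        deficit = deficit - delta;
        if deficit <= 0
            break;
        end
    end
end

The excess case is symmetric: lower the most expensive generators toward their minimum limits until the surplus is absorbed.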
5.5 Convergence Criteria
The algorithm checks for convergence using multiple criteria:
• Small gradients (optimality)
• Small changes in losses (stability)
• Power balance (feasibility)
• Generators within limits (constraints)
Using multiple convergence criteria ensures that the final solution is not
only optimal but also feasible and stable. The algorithm terminates only
when all criteria are satisfied, providing a robust solution to the economic
load dispatch problem.
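These checks correspond to the stopping tests already present in Listings 3 and 4; gathered into one place (an illustrative helper, not part of the original code) they look like:

% Illustrative consolidation of the convergence tests used in Listings 3 and 4.
function done = is_converged(max_gradient, diff_ploss, power_balance, ...
                             pg, pg_min, pg_max, tol_grad, tol_loss)
    optimal  = max_gradient < tol_grad;              % small reduced gradients
    stable   = abs(diff_ploss) < tol_loss;           % losses have settled
    balanced = abs(power_balance) < 0.1;             % generation matches demand + losses
    feasible = all(pg >= pg_min & pg <= pg_max);     % limits respected
    done = optimal && stable && balanced && feasible;
end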
6 Example Test Case
The test case provided in the code has the following parameters:
6.1 Generator Data
Generator    a        b      c      Min (MW)   Max (MW)   B
1            0.004    5.3    500    200        450        0.00003
2            0.006    5.5    400    150        350        0.00009
3            0.009    5.8    200    100        225        0.00012

Table 1: Generator Parameters for the Test Case
6.2 System Demand
Total Load Demand: 975 MW
6.3 Expected Results Analysis
For this test case, we would expect the following characteristics in the optimal solution (a short verification sketch follows the list):
• Generator 1 will likely operate at a higher output due to its lower quadratic cost coefficient (a_1 = 0.004)
• Generator 3 will likely operate at a lower output due to its higher quadratic cost coefficient (a_3 = 0.009)
• The penalty factors will be higher for generators with larger loss coefficients
• The adjusted incremental costs (including penalty factors) should be approximately equal at the optimal operating point
• The total system losses will typically be around 2-5% of the total demand
• The optimal solution should satisfy all generator limits
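A minimal check of these expectations, assuming pg holds the converged dispatch from the main program and reusing the variable names from Listing 1, compares the loss-adjusted incremental costs and the loss percentage:

% Post-solution check, assuming pg is the converged dispatch.
ic_adjusted = (2*a.*pg + b) ./ (1 - 2*ploss_coeff.*pg);   % PF_i * dC_i/dPG_i
ploss = sum(ploss_coeff.*pg.^2);
fprintf('Adjusted incremental costs: %s\n', mat2str(ic_adjusted', 4));
fprintf('Losses: %.2f MW (%.2f%% of demand)\n', ploss, 100*ploss/pd);
% For generators away from their limits, the adjusted incremental costs
% should be nearly equal; losses are expected to fall in the 2-5% range here.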
7 Visualization and Analysis
The code includes visualization of:
• Generator outputs compared to their limits
• Cost curves for each generator
• Incremental cost curves
• Operating points on the cost curves
These visualizations help in understanding the economic operation of the
power system and verifying that the solution satisfies the equal incremental
cost criterion, adjusted for losses. Visual analysis provides intuitive confir-
mation that the numerical solution is correct and allows quick identification
of potential issues, such as generators operating at their limits.
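The plotting code itself is not reproduced in this document; a minimal sketch of the cost-curve and operating-point plot, assuming the variable names from Listing 1 and using standard MATLAB plotting functions, might look like:

% Minimal sketch of the cost-curve visualization (not the original plotting code).
figure; hold on;
colors = lines(N);                               % distinct color per generator
for i = 1:N
    p = linspace(pg_min(i), pg_max(i), 100);     % feasible output range
    plot(p, a(i)*p.^2 + b(i)*p + c(i), 'Color', colors(i,:), ...
         'DisplayName', sprintf('Generator %d', i));
    plot(pg(i), a(i)*pg(i)^2 + b(i)*pg(i) + c(i), 'o', ...
         'Color', colors(i,:), 'HandleVisibility', 'off');   % operating point
end
xlabel('Output (MW)'); ylabel('Cost'); legend show; grid on;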
7.1 Interpretation of Results
When interpreting the results, several key indicators should be examined:
• Generator Operating Points: Check if any generators are at their
limits. If so, the equal incremental cost criterion may not apply to
these generators.
• Incremental Costs: The adjusted incremental costs (including penalty
factors) should be approximately equal for all generators not operating
at their limits. This confirms the optimality of the solution.
• System Losses: The percentage of losses relative to the total de-
mand provides an indication of the efficiency of the dispatch. High
losses might suggest that a different dispatch strategy could be more
economical.
• Total Generation Cost: This is the primary objective function and
should be minimized. Comparing this cost with alternative dispatch
strategies confirms the effectiveness of the optimization.
8 Conclusion
The reduced gradient method provides an effective approach to solving the
Economic Load Dispatch problem with transmission line losses. The imple-
mentation includes:
1. Proper handling of generator limits
2. Accurate modeling of transmission losses
3. Adaptive step size for improved convergence
4. Multiple convergence criteria for solution quality
5. Visualization tools for result analysis
This implementation can be extended to include additional constraints
such as ramp rate limits, prohibited operating zones, and multiple fuels by
modifying the gradient calculation and constraint handling.
8.1 Future Enhancements
Several enhancements could further improve the algorithm:
• Full B-matrix representation: Incorporating the full B-matrix for
loss modeling would provide more accurate results for complex power
systems with significant cross-coupling between generators.
• Valve-point effects: Including valve-point loading effects in the cost
function would provide a more realistic representation of thermal gen-
erator characteristics.
• Multi-objective optimization: Extending the algorithm to consider
both cost and emissions could provide environmentally friendly dis-
patch solutions.
• Integration with renewable sources: Incorporating the stochastic
nature of renewable energy sources would make the algorithm applica-
ble to modern power systems with high renewable penetration.
• Security constraints: Adding line flow constraints would ensure that
the dispatch solution does not violate transmission system security lim-
its.
The reduced gradient method, with its ability to handle constraints effec-
tively, provides a strong foundation for these extensions, making it a valuable
tool for power system operation and planning.