
IEEE TRANSACTIONS ON POWER SYSTEMS, VOL. 15, NO. 4, NOVEMBER 2000, pp. 1240-1246

State Estimation Distributed Processing


Reza Ebrahimian and Ross Baldick

Abstract—This paper presents an application of a parallel algorithm to power systems state estimation. We apply the Auxiliary Problem Principle to develop a distributed state estimator, demonstrating performance on the Electric Reliability Council of Texas (ERCOT) and the Southwest Power Pool (SPP) systems.

Manuscript received May 14, 1999; revised February 23, 2000. R. Baldick was supported in part by the National Science Foundation under Grant ECS-9457133. The authors are with The University of Texas at Austin, TX 78712. Publisher Item Identifier S 0885-8950(00)10354-2.

I. INTRODUCTION

TO HOST SCADA and Energy Management System software for operations of power systems, utilities have historically used mid-size computers to handle the tasks automatically and to provide an interface for real-time interactive intervention by the operating staff in the control center of a control area. However, with advancements in small computer technologies and networking, it is becoming attractive to use distributed processing. Although the emerging structure of the "independent system operator" (ISO) may link several utilities, distributed computing is likely to be preferable to a completely centralized implementation.

Traditionally, the maximum likelihood weighted least-squares method is applied to the state estimation problem, yielding a formulation that is an approximately quadratic and convex problem, which typically has a single optimum solution. The novelty of our research relates to the development of algorithms for distributed processing. We apply the "Auxiliary Problem Principle" (APP) [4] to the state estimation problem.

In Section II, we develop and present the mathematical equations necessary to apply the APP to form the distributed algorithm. In Section III, we describe the use of MATLAB [7], [8] to develop first centralized and then distributed state estimation software for comparison purposes. Several test case studies representing the ERCOT and SPP systems demonstrate the effectiveness of the algorithm, and we discuss bad data detection. We conclude in Section IV.

II. DECOMPOSITION AND DISTRIBUTED IMPLEMENTATION

A. Centralized State Estimation

Traditional maximum likelihood weighted least-squares state estimation calculates the voltage magnitudes and angles (assuming a voltage angle reference). In this method the objective is to minimize the sum of the squares of the weighted deviations of the estimated variables from the actual measurements [16]:

    \min_x J(x) = [z - h(x)]^T R^{-1} [z - h(x)]    (1)

where:
    J is the objective function,
    R = diag(σ_1^2, ..., σ_m^2) collects the variances of the measurement errors,
    h is a vector of functions describing the measurements,
    z is a vector of the measurements,
    x is a vector of the voltage magnitudes and angles, and
    superscript T denotes transpose.

Therefore, if the system is observable, then the Gauss–Newton update equations [16] for this nonlinear optimization problem are:

    x^{(k+1)} = x^{(k)} + [H(x^{(k)})^T R^{-1} H(x^{(k)})]^{-1} H(x^{(k)})^T R^{-1} [z - h(x^{(k)})]

where:
    H is the Jacobian of the vector h,
    x collects the vectors of voltage magnitudes and angles, and
    a superscript in parentheses indicates the iteration count, so that x^{(k)} is the value of the iterate x at the kth iteration.
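To make the Gauss–Newton iteration concrete, the following MATLAB sketch shows one possible implementation. It is a minimal illustration rather than the authors' code: the handles h_fun (measurement functions) and H_fun (their Jacobian) are assumed to be supplied by the network model, and sigma2 is a column vector of measurement error variances.

```matlab
% Minimal sketch of centralized Gauss-Newton WLS state estimation.
% h_fun and H_fun are assumed handles from the network model (hypothetical).
function x = wls_state_estimation(z, sigma2, h_fun, H_fun, x0)
  m = numel(z);
  Rinv = spdiags(1 ./ sigma2, 0, m, m);  % R^{-1}, kept sparse
  x = x0;                                % e.g., flat start
  for k = 1:20                           % Gauss-Newton iterations
    r  = z - h_fun(x);                   % measurement residuals
    H  = H_fun(x);                       % Jacobian at x^{(k)}
    G  = H' * Rinv * H;                  % gain (information) matrix
    dx = G \ (H' * Rinv * r);            % solve the normal equations
    x  = x + dx;                         % update angles and magnitudes
    if norm(dx, inf) < 1e-4, break; end  % stop when the step is small
  end
end
```

In practice the gain matrix would be factored with the sparsity-exploiting and conditioning techniques discussed in Section II-B.2.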
B. Distributed State Estimation

1) Problem Illustration and Formulation: Based on previous experience of applying the APP to the Optimal Power Flow (OPF) problem [10], [11], we applied the APP to distributed state estimation. To maximize practical applicability, we formulated the problem such that: it is highly compatible with previous real-world implementations; only a small amount of inter-processor communication is required; bad data analysis can be performed; and the distributed state estimator yields the same solution as centralized state estimation. We believe that this is a significant achievement compared with other approaches to distributed and hierarchical state estimation, such as those described in [2], [3], [5], [6], [9], [12]–[14], [17], [18].

In the following paragraphs we describe the application of the Auxiliary Problem Principle to the state estimation problem. This description is paraphrased from [10], [11], but ultimately depends on the properties of linearized augmented Lagrangians described in [4].

To develop a distributed state estimator, consider Fig. 1, which shows a 3-bus system lying in Areas A and B. This system includes the border bus between Areas A and B that is common to both areas.

Fig. 1. Border bus between areas A and B.

Let x_A be the vector of voltage magnitudes and angles for Area A, including the voltage magnitude and angle at the border.

Let x_B be the vector of voltage angles and magnitudes for Area B, also including the voltage magnitude and angle at the border (this convention is slightly different from that in [11]). That is, x_A and x_B both include the voltage angle and magnitude at the border. We must require that these border entries agree for x_A and x_B to be consistent. Further, we must also require that the real and reactive flows across the border be consistent. Now we can express the objective function in (1) as:

    J(x) = F_A(x_A) + F_B(x_B).

Consider the real and reactive power flow and the voltage angle and magnitude at the border. We can express these quantities in terms of x_A. That is, we can find the function y_A such that the vector of real and reactive power flows and voltage angles and magnitudes at the border is given by y_A(x_A). The last two entries of y_A simply pick out the two entries of x_A corresponding to the border voltage magnitude and angle. The first two entries of y_A evaluate the real and reactive flows across the border in terms of the vector x_A. Similarly, we can find a function y_B that expresses these same quantities in terms of x_B. For a valid solution, x_A and x_B must be such that the real and reactive power and the angle and magnitude match at the border. That is, we must require y_A(x_A) = y_B(x_B). Then the maximum likelihood weighted least-squares problem is:

    \min_{x_A, x_B} F_A(x_A) + F_B(x_B)  subject to  y_A(x_A) = y_B(x_B).

To solve this problem we will dualize the constraint y_A(x_A) - y_B(x_B) = 0; however, to improve convergence we add a quadratic term. The problem becomes:

    \min_{x_A, x_B} F_A(x_A) + F_B(x_B) + (c/2) ||y_A(x_A) - y_B(x_B)||^2  subject to  y_A(x_A) = y_B(x_B)

where c is any positive constant. Note that the added quadratic term does not change the problem, because at any solution y_A(x_A) = y_B(x_B). Now, to separate this objective for a distributed implementation, we apply a decomposition algorithm referred to as the Auxiliary Problem Principle (APP) [1], [4], which is an iterative algorithm involving linearizing the augmented Lagrangian. Following the APP formulation of [10], [11], this yields two sub-problems and a multiplier update for evaluating x_A and x_B at each successive iteration, of the form:

    x_A^{(k+1)} = \arg\min_{x_A} F_A(x_A) + (β/2) ||y_A(x_A) - y_A^{(k)}||^2 + γ [y_A(x_A)]^T [y_A^{(k)} - y_B^{(k)}] + [λ^{(k)}]^T y_A(x_A)    (2)

    x_B^{(k+1)} = \arg\min_{x_B} F_B(x_B) + (β/2) ||y_B(x_B) - y_B^{(k)}||^2 - γ [y_B(x_B)]^T [y_A^{(k)} - y_B^{(k)}] - [λ^{(k)}]^T y_B(x_B)    (3)

    λ^{(k+1)} = λ^{(k)} + α [y_A(x_A^{(k+1)}) - y_B(x_B^{(k+1)})]    (4)

where:
    k is the iteration number,
    y_A^{(k)} and y_B^{(k)} abbreviate y_A(x_A^{(k)}) and y_B(x_B^{(k)}),
    λ is the Lagrange multiplier, and
    α, β, and γ are constants that must lie in particular ranges to guarantee convergence and can be tuned by trial and error.

The distributed implementation requires solving (2) or (3) separately in each area, exchanging border values between areas, and updating λ according to (4).

2) Mathematical Developments and Implementations: At the core of this software, we use the equations and algorithms of centralized state estimation. Prototype software was developed for centralized state estimation, considering sparsity issues and information matrix conditioning techniques. These features carry over to the distributed implementation.

To develop the necessary equations for solving distributed state estimation problems, we consider Area A (all of the development also applies to Area B, mutatis mutandis). The objective in (2) is:

    F_A(x_A) + (β/2) ||y_A(x_A) - y_A^{(k)}||^2 + γ [y_A(x_A)]^T [y_A^{(k)} - y_B^{(k)}] + [λ^{(k)}]^T y_A(x_A)    (5)

where F_A is the objective function for Area A. If Area A owns ties, then completing the square in y_A(x_A) shows that the augmented Lagrangian of (5) becomes, after some manipulation, the form shown below in (6). The last term in (6) is constant; therefore, its derivative is zero. If we define the virtual measurement z̃ as:

    z̃ = y_A^{(k)} - (1/β) [γ (y_A^{(k)} - y_B^{(k)}) + λ^{(k)}]

then (5) can be rewritten as:

    F_A(x_A) + (β/2) ||y_A(x_A) - z̃||^2 + constant.    (6)
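As an illustration of how (2)–(4) drive the computation, the MATLAB sketch below alternates area subproblem solves with the multiplier update. It is a schematic outline under assumed helper names (solve_area_A/B for the subproblems (2) and (3), y_A/y_B for the border functions, xA/xB for initial iterates), not the authors' implementation.

```matlab
% Sketch of the APP iteration (2)-(4) for two areas. solve_area_A/B and
% y_A/y_B are hypothetical helpers for the subproblems and border values.
alpha = 1; beta = 1; gamma = 1;  % APP constants, tuned by trial and error
maxOuter = 50; tol = 1e-3;       % outer iteration limit and border tolerance
lambda = zeros(4, 1);            % one multiplier per border quantity (4/tie)
yA = y_A(xA); yB = y_B(xB);      % border values at the initial iterates
for k = 1:maxOuter
  % Each area uses only the previous iteration's border data (eqs. 2, 3).
  xA = solve_area_A(xA, yA, yB, lambda, beta, gamma);
  xB = solve_area_B(xB, yB, yA, lambda, beta, gamma);
  yA = y_A(xA); yB = y_B(xB);              % exchange computed border values
  lambda = lambda + alpha * (yA - yB);     % multiplier update, eq. (4)
  if norm(yA - yB, inf) < tol, break; end  % borders consistent: converged
end
```

The two solve_area calls are independent and can run on separate computers; only yA, yB, and lambda cross the area boundary.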

Then the optimization problem of (6) is equivalent to:

    \min_{x_A} \sum_{i=1}^{m+v} [z_i - h_i(x_A)]^2 / σ_i^2    (7)

where:
    m is the number of actual measurements in Area A,
    v is the number of virtual measurements introduced to reflect the APP terms into the form of the measurement equations,
    h_i is the measurement function corresponding to z_i, and
    σ_i^2 is the variance assigned to measurement i (for the virtual measurements, the weight is determined by β).

The most important observation here is that this is now a state estimation problem with virtual measurements at the border. The problem in (7) can be described as the minimum of the product of the following partitioned matrices:

    [ z_A - h_A(x_A) ]^T [ R_A^{-1}   0      ] [ z_A - h_A(x_A) ]
    [ z̃  - y_A(x_A)  ]   [ 0         R̃^{-1} ] [ z̃  - y_A(x_A)  ]    (8)

or, more compactly,

    [z̄ - h̄(x_A)]^T R̄^{-1} [z̄ - h̄(x_A)]    (9)

where:
    h_A is the vector of measurement equations for Area A,
    R̃ is a diagonal matrix of the virtual variances, and
    h̄, z̄, and R̄ stack the actual measurement functions, measurements, and variances on their virtual counterparts.

The border virtual measurements at the border buses consist, for each tie, of the real and reactive tie flow entries and the voltage magnitude and angle entries of z̃.

To solve (9), which describes the implementation of distributed state estimation, we define H̄ as the Jacobian matrix of h̄, of size (m+v) × n (where n is the number of state variables):

    H̄(x_A) = [ ∂h_A/∂x_A ; ∂y_A/∂x_A ].

We update λ according to (4). With some mathematical manipulation, we can show that the iterative solution of (8) takes the Gauss–Newton form:

    G(x_A^{(k,l)}) = H̄(x_A^{(k,l)})^T R̄^{-1} H̄(x_A^{(k,l)})    (10)

    Δx_A^{(k,l)} = G(x_A^{(k,l)})^{-1} H̄(x_A^{(k,l)})^T R̄^{-1} [z̄ - h̄(x_A^{(k,l)})]    (11)

    x_A^{(k,l+1)} = x_A^{(k,l)} + Δx_A^{(k,l)}    (12)

where l is the iteration number for each individual area, and k is the iteration number for the multiple-area parallel solutions associated with the APP algorithm. The counter k increments when all areas reach convergence. We refer to k as the outer loop iteration counter and to l as the inner loop iteration counter. Hence, at each Lagrange multiplier update, each area is solved separately with l = 1, 2, ... iterations. Using (10)–(12) we can solve the power system state estimation problem in a distributed fashion. In the next subsection, we present the algorithm that utilizes these equations.
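The reduction to (7) means the inner loop can reuse the centralized solver unchanged. The sketch below shows how Area A's subproblem might be assembled, reusing the wls_state_estimation sketch from Section II-A; h_A, H_A, y_A, and dy_A are assumed handles for the area's measurement functions, their Jacobian, the border functions, and the border Jacobian.

```matlab
% Sketch: area A's APP subproblem as WLS with virtual border measurements.
% yAk, yBk are the last outer iteration's border values; lambda is the
% multiplier; helper names are hypothetical.
ztilde = yAk - (gamma * (yAk - yBk) + lambda) / beta;  % virtual measurements
zbar      = [zA; ztilde];                              % actual + virtual
sigma2bar = [sigma2A; (1/beta) * ones(numel(ztilde), 1)];  % virtual variances
hbar = @(x) [h_A(x); y_A(x)];   % stacked measurement functions (h-bar)
Hbar = @(x) [H_A(x); dy_A(x)];  % stacked Jacobian (H-bar)
xA = wls_state_estimation(zbar, sigma2bar, hbar, Hbar, xA);  % inner loop
```

Only ztilde changes between outer iterations, so the measurement structure and the Jacobian sparsity pattern are fixed.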
C. Distributed State Estimation Algorithm and Communication Issues

We present the algorithm implementing distributed state estimation in Fig. 2; it is essentially the same as the OPF algorithm described in [10], [11]. The measurements for each area are telemetered to the computer for that area only, and only the computed border variables are exchanged between adjacent areas. The neighboring areas interchange the border variables at each iteration and calculate the updated variables for the next iteration.

Fig. 2. Distributed state estimation algorithm.

At each iteration, a central computer is informed of the convergence status of each area. If all areas have met the convergence criteria, then the central computer informs all areas that the entire interconnection has converged. In our implementation, we used a central controlling computer to perform these exchanges; however, they could be implemented with communication between adjacent areas only.

The amount of data communicated between each area and the central computer is very small: it is equal to the number of ties times 4 (voltage angle, voltage magnitude, and real and reactive branch flows). The amount of computation conducted by this central computer at each iteration (basically checking for convergence) is also very small. Therefore, the time required for telecommunications and computation at each iteration is likely to be negligible compared to the time to perform state estimation for an area. With a traditional centralized implementation, the typical length of the communication path for telemetering data is on the order of the radius of the entire system, whereas in our distributed implementation the typical length of the data path would be on the order of the radius of the regions. Telecommunication bottlenecks should therefore be less significant in the case of the distributed implementation, even with the small amount of additional border data exchange.
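To make the coordination traffic concrete, the fragment below sketches the per-iteration message from one area and the central computer's aggregation. The field names are illustrative, not the authors' message format.

```matlab
% Sketch of the per-iteration exchange. Each tie contributes 4 values:
% voltage angle, voltage magnitude, real flow, and reactive flow.
tol = 1e-3;                               % example border tolerance
msg.border = reshape(yA, 4, []);          % 4 x (number of ties owned)
msg.converged = norm(yA - yB, inf) < tol; % local border consistency test
% Central computer: declare global convergence when every area agrees.
done = all(arrayfun(@(a) a.converged, areas));
```

With eight areas and a handful of ties each, the exchange amounts to a few dozen numbers per outer iteration.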

TABLE I
CASE STUDY SYSTEMS

III. CASE STUDIES AND RESULTS

To illustrate the convergence properties and the effectiveness of this algorithm, we present several case studies using the Electric Reliability Council of Texas (ERCOT) and Southwest Power Pool (SPP) systems. Table I gives a summary of the case study systems, where: column 2 is the total number of buses, including the border buses; column 3 is the number of areas; column 4 lists the number of branches in each area; column 5 lists the number of buses in each area; column 6 shows the total number of ties between the areas; and the last column shows the total number of branches. Case studies 1, 2, 3, and 4 present the division of a 2529 bus representation of the ERCOT system into 8 areas, starting with 2 areas and then adding 2 areas at a time until completion of the entire interconnection. These are essentially the same systems studied in [10], [11]. In addition, Case 5 is a 4972 bus representation of the ERCOT system decomposed into 8 areas. The two ERCOT cases (the 2529 bus and 4972 bus systems) do not have equivalent configurations and are not derived from each other. Case 6 is an 8047 bus representation of the SPP [15] system decomposed into 8 areas. These case studies are intended to show the effectiveness of the algorithm with practical large-scale systems.

A. Regionalization

The speed of solution of the distributed state estimator is predominantly a function of the speed of the slowest system to reach a solution and of the number of outer loop iterations, assuming that the inter-processor communication is relatively fast. The solution time for each area is a function not only of the size of the system, but also of its configuration; the types, location, and number of measurements; initial values; bad data; and noise. The number of outer loop iterations depends on each individual system's configuration and on how all the areas interconnect. If we divide the system such that there is not much difference between most areas' solution times, then the performance of the distributed state estimator becomes largely a function of the number of outer loop iterations. The following paragraphs describe our approach to dividing systems. We devised these methods of decomposition based on trial and error to reach a reasonable individual area solution time and number of outer loop iterations. Systematic approaches to dividing systems so as to yield optimum results are an area of future research.

We divide the ERCOT 2529 bus, ERCOT 4972 bus, and SPP 8047 bus representation systems into 8 areas for our distributed implementation. In the division of the ERCOT systems, we leave most of the larger areas in their original divisions based on constituent companies and combine some of the smaller areas; however, we break the largest area into two areas. If we encounter islands in any area after the division, we rearrange the areas to ensure an internally interconnected system. Fig. 3 shows the diagram of the ERCOT 2529 bus system interconnection after decomposition. We will use this case in Section III-D to present the performance of the distributed state estimator with respect to bad data detection. For the SPP 8047 bus representation system, we first combined contiguous areas, without breaking any of the areas, such that the number of buses in each area is greater than 500 and less than 2000. We check each area for possible islands, and if we encounter islands we rearrange the areas such that an internally interconnected system emerges.

Fig. 3. Diagram of the ERCOT 2529 bus system interconnection after decomposition.

B. Performance of Distributed State Estimation Algorithm

We have developed prototype software in MATLAB, and the following results were produced using this software. We define the actual wall clock time as the wall clock time for two to eight separate computers to solve the distributed state estimation. This is equal to the summation, over all the outer iterations, of the wall clock time for the slowest converging area at each outer iteration, plus the time required for the central computer to check for convergence at each outer iteration. The case studies presented in this paper were conducted over a local area network with negligible communication delays. However, in an actual implementation for a large power system network, communication bottlenecks can increase the wall clock time for both the centralized and the distributed implementations, with the potential communication delays being worse for the centralized implementation.
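The timing definition above can be written compactly; the sketch below computes it from a matrix of per-area wall clock times (variable names are illustrative).

```matlab
% t is (outer iterations x areas): wall clock time of each area's inner
% solve; t_check is the central computer's per-iteration check time.
T_actual = sum(max(t, [], 2) + t_check);  % slowest area at each iteration
```

This is the quantity plotted for the distributed implementation in Fig. 4.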

Fig. 4. Wall clock time versus number of buses for the distributed and centralized implementations (Sun Ultra workstations).

Fig. 5. Megaflops versus number of buses for the distributed and centralized implementations (Sun Ultra workstations).

Fig. 4 shows the wall clock time versus the number of buses for both the centralized and distributed implementations for the case studies of Table I. The actual wall clock time is approximately equal to the wall clock time for the computer in the slowest converging area. For all case studies, the distributed implementation time is less than the centralized. Further, as the number of buses increases, the advantage of the distributed implementation over the centralized becomes more pronounced.

The wall clock times presented here, obtained using the interpreted MATLAB language, are significantly higher than the wall clock times that compiled code would achieve and are not reflective of performance in a production environment. However, it is reasonable to expect that the results presented here are proportional to those of conducting the same computations with compiled code, and that they provide qualitative information useful in judging the performance of an efficient implementation. Two factors that might affect this proportionality are the use of loops and the memory fragmentation caused by resizing arrays within MATLAB programs. We have taken special care to replace for and while loops with "vectorized" code, and to preallocate large arrays to avoid memory fragmentation. For very large systems beyond 8000 buses, the plot of Fig. 4 may suggest a speedup of greater than 8. However, this may not be indicative of actual implementations; therefore, the plot should not be extrapolated to larger systems.

Fig. 5 shows the megaflops versus the number of buses for both the centralized and distributed implementations for the case studies of Table I. The megaflops reported for the distributed implementation are maximum megaflops, analogous to the actual wall clock time of Fig. 4: the maximum megaflops is the summation, over all the outer iterations, of the largest total megaflops over the individual areas at each outer iteration, plus the total megaflops for the convergence check at each outer loop iteration. The megaflops for the distributed implementation are in all cases less than the megaflops for the centralized, because we combine megaflops by taking the largest count over the processors at each iteration. The floating point operations for the centralized implementations of the ERCOT 2529 and ERCOT 4972 systems appear to be almost equal. This is due to the configurations of these systems and to the information matrix preconditioning and sparsity techniques that we have employed. However, typically, as the number of buses increases, the number of floating point operations also increases.

TABLE II
CASE STUDIES OUTER LOOP ITERATIONS

Table II shows the number of ties, state variables, measurements, outer loop iterations, and the redundancy for each case. Redundancy is the ratio of the number of measurements to the number of state variables. The total number of state variables in the distributed implementation is one more than in the centralized implementation because we do not have a reference voltage angle in the distributed implementation. We calculate the final angles based on an assumed reference angle. That is, after reaching a solution, the computed "reference bus" angle is the amount by which all angles may be shifted to make the angle estimates comparable to the centralized implementation with an actual reference angle. For uniformity, we choose as measurements the bus voltage magnitudes of all the generators (with an error standard deviation of 0.002) and the real and reactive branch flows in both directions of selected branches (with an error standard deviation of 0.02).

C. Convergence and Stopping Criterion

To guarantee convergence and reduce the number of outer loop iterations, the constants α, β, and γ must lie in particular ranges for different systems. Using trial and error, we found that values of these constants on the order of 1 and 2 give reliable convergence for the systems presented in this paper. In most cases, by the second iteration, more than 90% of the border variables converge to within a 0.001 per unit tolerance for voltage magnitude and real and reactive flows, and to within a 0.03 radian tolerance for voltage angle. In all of our cases, all of the variables reach convergence within this tolerance in a maximum of 6 iterations. After completion of the solution, over 99% of the state variables differ by less than 0.1% from the centralized implementation.
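The stopping rule on the border variables can be coded directly from these tolerances. A minimal sketch with illustrative names, assuming the first two entries of each border vector are the tie flows and the last two are the voltage magnitude and angle:

```matlab
% Border convergence test for one tie: 0.001 pu tolerance on flows and
% magnitude, 0.03 rad tolerance on angle (entries ordered P, Q, V, theta).
d = abs(yA - yB);                               % mismatch across the border
converged = all(d(1:3) < 1e-3) && d(4) < 0.03;  % per unit and radian tests
```

Each area reports this flag to the central computer at every outer iteration.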

D. Implementation of Bad Data Detection in the Distributed State Estimator

Practical state estimators require the detection of bad data to improve the accuracy of their information and to avoid divergence. The sum of the squares of the residuals, J(x̂), calculated after convergence is small if no bad measurements are present; in the presence of bad data, J(x̂) will be large. Traditionally, the normalized measurement residual r_i^N [16] is used to detect bad data and is calculated as:

    r_i^N = [z_i - h_i(x̂)] / σ_{r_i}

where:
    x̂ is the estimate,
    z_i is the measurement, and
    σ_{r_i} is the standard deviation of the measurement residual r_i = z_i - h_i(x̂).

If the absolute value of r_i^N is greater than three, then the associated measurement is assumed to be wrong; it is removed from the measurement set, and the state estimator is re-solved.

We have implemented this method of bad data detection to demonstrate the performance of the distributed state estimator in the presence of bad data. This method, although not the fastest, is reliable and provides a fair comparison with the centralized state estimator.
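A sketch of this detect-and-remove loop is shown below, reusing the solver sketched in Section II-A. The helpers are illustrative: wls_estimate solves the WLS problem restricted to the kept measurements, and sigma_r returns the residual standard deviations, which in practice come from the residual covariance matrix.

```matlab
% Sketch of normalized-residual bad data detection: estimate, test
% |rN| > 3, discard flagged measurements, and re-solve until clean.
keep = true(numel(z), 1);                   % measurements still trusted
while true
  x  = wls_estimate(z, sigma2, keep, x0);   % WLS solve on the kept set
  rN = (z - h_fun(x)) ./ sigma_r(x, keep);  % normalized residuals
  bad = keep & (abs(rN) > 3);               % traditional 3-sigma test
  if ~any(bad), break; end                  % no bad data remains
  keep(bad) = false;                        % remove and re-solve
end
```

In the distributed estimator this loop runs inside each area's inner solve, so a gross error triggers re-solution of that area only.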
Each area at each outer loop iteration is examined for bad data. In all of the cases presented in this paper, the distributed state estimator has detected bad data at the first outer loop iteration. However, in a worst case scenario, it is possible for the bad data not to be detected until the last outer loop iteration. Even in such a scenario, the distributed implementation solves faster than the centralized for a realistic number of bad data.

In a distributed implementation, if the gross errors are spread out over different areas, then bad data detection will take much less time than in the centralized implementation, because solving one area out of all the areas is much faster than re-solving the entire system, so long as the gross errors can be reliably detected by each individual area. Even if all the gross errors are within one area, the distributed implementation reaches a final solution faster than the centralized, so long as the bad data can be identified and discarded reliably.

TABLE III
ONE VOLTAGE MAGNITUDE AND ONE BRANCH REAL FLOW BAD DATA (TOTAL OF TWO) IN AREA A, UP TO ALL AREAS

Table III shows the wall clock time and floating point operations for the ERCOT 2529 bus, 8 area representation system using the distributed and centralized algorithms, with one voltage magnitude and one branch real flow gross error in each affected area, starting with 2 gross errors in Area A in the case given in row 1 and increasing to 2 gross errors in each area in the case given in row 8. This table shows that both the wall clock and megaflops performance of the distributed implementation are better than the centralized in the presence of bad data. It shows that the centralized wall clock time and megaflops increase rapidly as the number of bad data increases, whereas with the distributed implementation the wall clock time and megaflops increase only when the slowly converging areas contain bad data. Even in that case, the increase is much less than for the centralized implementation.

TABLE IV
THREE VOLTAGE MAGNITUDE AND THREE BRANCH REAL FLOW BAD DATA (TOTAL OF SIX) IN EACH AREA AT A TIME

Table IV shows the wall clock time and megaflops for the ERCOT 2529 bus, 8 area system using the distributed and centralized algorithms, with 6 gross errors (3 voltage magnitude errors and 3 branch real flow errors) in each area. This table perhaps shows the worst case performance scenario for the distributed implementation, because each area containing the gross errors would solve seven times and, assuming three inner loop iterations per solve, this would result in a total of 21 iterations. However, even with this scenario, the wall clock and megaflops performance of the distributed implementation is, by far, better than the centralized.

With the distributed implementation, another issue is the presence of bad data near the border buses and the ability to detect it accurately, since the information required to detect the bad data may need to "propagate" through the virtual measurements from an adjacent area. However, in our experience with our case studies, this implementation detects bad data easily even when it is very close to the border buses.

Using the ERCOT 2529 bus, 8 area system, Table V examines the ability of the distributed implementation to detect bad data from meters that are located close to the borders with other areas and from meters that are away from the borders.

TABLE V
BAD DATA DETECTION, AWAY FROM AND AT BORDERS

Cases 1 through 4 show results with bad data in Area A at the border with Area B; cases 5 through 8 show results with bad data in Area A away from its borders. For the distributed implementation, the number of inner iterations for Area A increases by the number of bad data times three, since it takes three inner loop iterations to solve Area A. For the centralized implementation, the number of iterations likewise increases by the number of gross errors times three (since it takes three iterations to solve the centralized state estimator). Table V shows the corresponding wall clock times and megaflops. The presence of bad injection measurement data instead of bad flow measurement data yields similar results. In summary, we have shown that the distributed implementation detects bad data effectively and, in all cases studied, with less effort than a centralized implementation.

IV. CONCLUSIONS AND FURTHER STUDIES

In this paper, we have presented a robust distributed algorithm for power system state estimation requiring a minimal amount of modification to existing state estimators, and we have demonstrated its effectiveness on the ERCOT and SPP systems. With deregulation in the United States and the emergence of ISOs, large scale state estimation will become necessary to ensure secure operation of the electric power interconnections. Distributing the calculations onto multiple processors will become increasingly important. To our knowledge, our implementation of a distributed state estimator is the most practical and realistic that has been presented in the literature so far. In future studies, we would like to investigate the characteristics of each area and of the ties between them as they relate to the convergence properties of the entire system. It may be possible to reduce the number of outer loop iterations by variations in the algorithm and in the system decomposition.

REFERENCES

[1] J. Batut and A. Renaud, "Daily generation scheduling optimization with transmission constraints: A new class of algorithms," IEEE Trans. on Power Systems, vol. 7, no. 3, pp. 982–989, Aug. 1992.
[2] C. W. Brice and R. K. Cavin, "Multiprocessor static state estimation," IEEE Trans. on Power Apparatus and Systems, vol. PAS-101, no. 2, pp. 302–308, Feb. 1982.
[3] K. A. Clements, O. J. Denison, and R. J. Ringlee, "A multi-area approach to state estimation in power system networks," in IEEE/PES Summer Meeting, July 1972, Paper C72 465-3.
[4] G. Cohen, "Auxiliary problem principle and decomposition of optimization problems," Journal of Optimization Theory and Applications, vol. 32, no. 3, pp. 277–305, Nov. 1980.
[5] T. Van Cutsem, J. L. Horward, and M. Ribbens-Pavella, "A two-level static state estimator for electric power systems," IEEE Trans. on Power Apparatus and Systems, vol. PAS-100, no. 8, pp. 3722–3732, Aug. 1981.
[6] T. Van Cutsem and M. Ribbens-Pavella, "Critical survey of hierarchical methods for state estimation of electric power systems," IEEE Trans. on Power Apparatus and Systems, vol. PAS-102, no. 10, pp. 3415–3424, Oct. 1983.
[7] D. M. Etter, Engineering Problem Solving with MATLAB. Englewood Cliffs, NJ: Prentice Hall, 1993.
[8] W. Gander and J. Hrebicek, Solving Problems in Scientific Computing Using Maple and MATLAB, 2nd ed. Berlin/Heidelberg/New York: Springer-Verlag, 1993.
[9] H. Mukai, "Parallel multiarea state estimation," EPRI, Palo Alto, CA 94304, Technical Report 1764-1, Jan. 1982.
[10] B. Kim, "A study on distributed optimal power flow," Ph.D. dissertation, University of Texas at Austin, Austin, 1997.
[11] B. H. Kim and R. Baldick, "Coarse-grained distributed optimal power flow," IEEE Trans. on Power Systems, vol. 12, no. 2, pp. 932–939, May 1997.
[12] H. Kobayashi, S. Narita, and M. S. A. A. Hamman, "Model coordination method applied to power system control and estimation problems," in Proceedings of the 4th International Conference on Digital Computer Applications to Process Control, 1974, pp. 114–128.
[13] S. Y. Lin, "A distributed state estimator for electric power systems," IEEE Trans. on Power Systems, vol. 7, no. 2, pp. 551–557, May 1992.
[14] S. Y. Lin and C. H. Lin, "An implementable distributed state estimator and distributed bad data processing schemes for electric power systems," IEEE Trans. on Power Systems, vol. 9, no. 3, pp. 1277–1284, Aug. 1994.
[15] S. Miller, "Transmission access information library," Commonwealth Associates Inc., Jackson, MI, Sept. 1997.
[16] A. J. Wood and B. F. Wollenberg, Power Generation, Operation, and Control, 2nd ed. New York, NY: Wiley, 1996.
[17] C.-C. Yang and Y.-Y. Hsu, "Estimation of line flows and bus voltages using decision trees," IEEE Trans. on Power Systems, vol. 9, no. 3, pp. 1569–1574, Aug. 1994.
[18] J. Zaborsky, K. W. Whang, and K. V. Prasad, "Ultra fast state estimation for the large electric power system," IEEE Trans. on Automatic Control, vol. AC-25, no. 4, pp. 839–841, Aug. 1980.

Reza Ebrahimian received his B.S. and Masters degrees in electrical engineering from Texas A&M University, and his Ph.D. degree from the University of Texas at Austin. He is currently a Senior Consulting Engineer at Austin Energy.

Ross Baldick received his B.Sc. and B.E. degrees from the University of Sydney, Australia, and his M.S. and Ph.D. degrees from the University of California, Berkeley. He is currently an Associate Professor in the Department of Electrical and Computer Engineering at the University of Texas at Austin.
