TAMPERE UNIVERSITY OF TECHNOLOGY POWER ENGINEERING GROUP

Report 10-96

Sami Repo and Juhani Bastman

Applicability of neural network in power system computation

Tampere 1996


Repo, Sami and Bastman, Juhani. Applicability of neural network in power system computation. Tampere : Tampere University of Technology, 1996, 44 pages. (Report / Tampere University of Technology, Power Engineering Group ; 10-96). ISBN 951-722-601-2. ISSN 1235-2608. UDK 621.31.

PREFACE

This report is a preliminary survey of neural networks in power system voltage stability monitoring and control. The aim of this study is to give a short introduction to the theory of neural networks and to present some ideas about neural network based modelling with a state estimation approach. Neural network based voltage stability management will be studied in the future. Short descriptions of the developed ideas are presented in the last chapter.

The study is part of a larger research project, "Maintaining the operation of power systems under severe disturbances". During this project the critical fault conditions for the main transmission grid are simulated and methods for optimising load shedding and the splitting of transmission networks are developed. Recently, methods for calculating the point of collapse and many static voltage stability indices have been tested and compared.

The aim of this new project is to study the possibilities of neural networks in power system voltage stability management. The methods studied are mainly related to power system operation, but some can also be used in power system planning. First a new and fast neural network based voltage stability index is created; this is needed in power system security assessment. After that a voltage stability control system will be developed.

Tampere, October 1996

Sami Repo, M.Sc. Eng.

Juhani Bastman, Lic. Tech.

CONTENTS

PREFACE
CONTENTS
ABSTRACT
1. INTRODUCTION
2. NEURAL NETWORKS
   2.1 Structure of the multilayer perceptron network
       2.1.1 One-layer perceptron
       2.1.2 Multilayer perceptron
       2.1.3 Function approximator
       2.1.4 Optimal network structure
   2.2 Training algorithms
       2.2.1 Back-propagation
       2.2.2 Momentum and adaptive learning rate
       2.2.3 Optimization
3. STATE ESTIMATION
   3.1 Power system state estimation
   3.2 Least squares method
       3.2.1 Error, measurement and estimate
       3.2.2 Calculation of the state variable estimate
   3.3 Bad data detection
4. NEURAL NETWORKS IN STATE ESTIMATION
   4.1 State estimation
       4.1.1 Results of 9 bus test network
       4.1.2 Results of 14 bus test network
   4.2 Reduction of training time
       4.2.1 Reduction of input nodes
       4.2.2 Reduction of connections
       4.2.3 Other training algorithms
   4.3 Applicability to practical size power system
   4.4 Conclusions
5. VOLTAGE STABILITY MANAGEMENT
   5.1 Risk management
   5.2 Voltage collapse
       5.2.1 Bifurcation point
   5.3 Voltage stability analysis
   5.4 Voltage stability monitoring and control
       5.4.1 Corrective control to avoid voltage collapse
       5.4.2 A new real time voltage stability monitor and control system
6. CONCLUSIONS
BIBLIOGRAPHY
APPENDIX

ABSTRACT

The neural network is a black box modelling tool. Its efficiency is based on its parallel computation capability. The neural network is capable of modelling non-linear cases and its output can be calculated extremely fast. Mathematically speaking it is capable of modelling any function. The neural network can also be used as a classifier, for example in the decision-making process.

Various applications based on neural networks have been reported in the power system area. Applications are developed for load forecasting, security assessment, protection, control, fault diagnosis, alarm processing, system identification and operational planning. The most common neural network has been a multilayer perceptron network.

First, a short introduction to the theory of neural networks is presented in Chapter Two. The multilayer perceptron network is chosen for closer examination; both network structure and learning methods are studied. A traditional state estimation approach based on weighted least squares minimisation and a new neural network based approach are then presented. The knowledge gained from the neural network based state estimation approach is used in creating ideas for the voltage stability monitoring and control system.

This study showed that neural networks can be used in power system applications. The state estimation results show that voltage magnitudes and generators' power injections are determined very accurately, while the voltage phase angles have larger errors. The convergence of the Levenberg-Marquardt training algorithm is also very good. Some ideas for further studies, and general rules for them, are presented in this study. Practical experience of neural network modelling is also very important.


1 INTRODUCTION

Artificial intelligence is the science of making machines do things that would require intelligence if done by humans. Artificial intelligence is used to shed light on human intelligence by attempting to model it with computers, to make computers easier to use by making them operate more like human users, and to solve complex problems that traditional programming methods cannot solve efficiently or at all.

Neural networks are used instead of more traditional computation methods when the system studied is complex or the actual mathematical function is unknown. Neural networks offer many advantages in these cases. A neural network is a black box which needs only an input vector and some parameters. The black box calculates the output.

Table 1.A Advantages and disadvantages of neural networks /16/.

Advantages:
1. Tolerates bad data
2. Easy to program
3. Massive parallel processing
4. Self-organisation
5. Associative 'memory'
6. Non-linear classification

Disadvantages:
1. Solution is not always optimal
2. Convergence is sometimes slow
3. Training data is hard to collect
4. System dependent
5. Trial and error method needed to find parameters

Neural networks are used in power system applications where the number of combinatorial possibilities is too high, the problem has a statistical character or the model of the relevant part of the system cannot be easily identified. Over 350 papers have been published on the use of neural networks in the power system area /22/. Promising application areas are load forecasting, security assessment, fault diagnosis and system identification. Figure 1.A shows the shares of these applications /12/.

[Figure 1.A Neural network application areas in power systems: load forecasting 20 %, security assessment 19 %, control 15 %, fault diagnosis 14 %, system identification 12 %, others 8 %, operational planning 7 %, alarm processing 3 %, protection 2 %.]

Figure 1.B represents the use of different neural networks in power system applications /12/. The multilayer perceptron is the most common network structure.


[Figure 1.B Neural networks used in power system applications: multilayer perceptron 56 %, others 12 %, Hopfield net 9 %, fuzzy 7 %, Kohonen 7 %, recurrent 3 %, functional link 3 %, Boltzmann machine 2 %, associative memories 1 %.]

This report is a preliminary study to ascertain the usability of neural networks in a real time static voltage stability monitoring and control system. After an initial examination of different neural networks, I have chosen the perceptron neural network and Levenberg-Marquardt optimization for further study. Power system state estimation was chosen as a test case. The application was chosen to be difficult enough to show the capabilities of the neural network and easy enough that the test data could be formed with a computer program.

The second chapter describes the structure of the perceptron neural network and different training algorithms. The basic theory is presented to enable an understanding of all details in the following chapters. The third chapter presents the calculation of power system state estimation. The neural network approach to power system state estimation is presented in Chapter Four, where some further ideas for improving the application are also discussed. Chapter Five is the most interesting: risk management, voltage collapse, voltage stability analysis and voltage stability monitoring and control are discussed in this chapter. A new proposal for a real time voltage stability monitoring and control system is also presented in this chapter.


2 NEURAL NETWORKS

A neural network consists of interconnected elements called neurons or nodes. The network is a parallel distributed process which solves a desired computational task. The functions of a neural network are approximation and classification.

2.1 Structure of the multilayer perceptron network

The perceptron network is a global network. This means that a node cannot be separated to react only to certain parts of the input vector. Mathematically speaking, the multilayer perceptron network is capable of approximating any mathematical function /8/.

2.1.1 One-layer perceptron

The perceptron network is a non-linear mathematical model. It consists of nodes arranged in layers. Generally, all nodes in a layer are connected to all nodes in the adjacent layer through interconnecting weights. The weight matrix W and bias vector b determine the network output. The output of the jth node is

a_j = f_j \left( \sum_{i=1}^{n} w_{ji} p_i + b_j \right)    (2.1)

where f_j is the activation function of the jth node, w_{ji} is the weight by which the jth node multiplies the ith input vector element, p_i is the ith input vector element and b_j is the jth bias vector element.
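As a small concrete illustration of Equation 2.1, the following sketch (in Python rather than the Matlab toolbox mentioned later in the report; the input, weights, bias and the tanh activation are arbitrary examples) computes the output of one node:

```python
import numpy as np

# Output of a single perceptron node, Eq. 2.1: a_j = f_j(sum_i w_ji p_i + b_j).
# All values and the tanh activation are arbitrary illustrative choices.
p = np.array([0.5, -1.0, 0.25])    # input vector, n = 3
w_j = np.array([0.8, 0.1, -0.4])   # weights of node j
b_j = 0.2                          # bias of node j

a_j = np.tanh(w_j @ p + b_j)       # activation applied to the weighted sum
print(a_j)
```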

2.1.2 Multilayer perceptron

The one-layer network is seldom used. Practical neural networks consist of two or more layers; they are called multilayer perceptron networks. The first layer is called the input layer, the last the output layer, and all layers between them hidden layers.


The information fed into the input layer is propagated forward through hidden layers to the output layer. The purpose of the input layer is to connect each input node to each hidden layer node. All connections have a weight parameter.

[Figure 2.A Two-layer perceptron network: three inputs p1, p2, p3 feed a hidden layer of three nodes (outputs a1_1, a1_2, a1_3 with biases b1_1, b1_2, b1_3) through weights w1, which in turn feeds an output layer of two nodes (outputs a2_1, a2_2 with biases b2_1, b2_2) through weights w2.]

The output of the kth output node is

a_{2k} = f_{2k} \left( \sum_{j=1}^{m} w_{2kj} \, f_{1j} \left( \sum_{i=1}^{n} w_{1ji} p_i + b_{1j} \right) + b_{2k} \right)    (2.2)

where f_{2k} is the kth output layer activation function, w_{2kj} is the weight by which the kth output node multiplies the jth hidden layer output a_{1j}, f_{1j} is the jth hidden layer activation function, w_{1ji} is the weight by which the jth hidden node multiplies the ith input element p_i, b_{2k} is the kth output layer bias vector element and b_{1j} is the jth hidden layer bias vector element.

2.1.3 Function approximator

Figure 2.B represents the general idea of a function approximator. The network can approximate arbitrarily well any function with a finite number of discontinuities when the number of hidden layer nodes is sufficient. Node connections are shown in matrix notation below each item. Input and output matrices should be understood as sets of column vectors (the number of column vectors is Q). S1 and S2 are the numbers of nodes in the corresponding layers. The hidden layer activation function is a hyperbolic tangent and the output layer activation function is a pure linear function.

[Figure 2.B Function approximator /11/: input p (R×Q) passes through a hidden layer with weights w1 (S1×R) and biases b1 (S1×1), giving a1 (S1×Q), and then through an output layer with weights w2 (S2×S1) and biases b2 (S2×1), giving a2 (S2×Q).]
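The matrix form of the function approximator can be sketched in a few lines; the layer sizes and random values below are arbitrary examples (Python/NumPy instead of Matlab), with the tanh hidden and linear output activations described above:

```python
import numpy as np

# Function approximator in matrix form: a2 = w2 * tanh(w1 * p + b1) + b2.
# R inputs, S1 hidden nodes, S2 outputs, Q input column vectors.
R, S1, S2, Q = 4, 6, 2, 10
rng = np.random.default_rng(0)

p = rng.normal(size=(R, Q))                 # Q input column vectors
w1, b1 = rng.normal(size=(S1, R)), rng.normal(size=(S1, 1))
w2, b2 = rng.normal(size=(S2, S1)), rng.normal(size=(S2, 1))

a1 = np.tanh(w1 @ p + b1)                   # hidden layer output, S1 x Q
a2 = w2 @ a1 + b2                           # linear output layer, S2 x Q
print(a2.shape)
```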

2.1.4 Optimal network structure

To obtain a good generalisation capability, one has to build in knowledge and limit the number of connections as much as possible /13/. Therefore it is important to find algorithms which optimise both weights and structure. The amount and quality of data have a great effect on the fit of the model. If the amount of data is large, it is possible to use a complex model with a large number of parameters. However, if the number of parameters is too large, the network starts to approximate the disturbances in the data and the model fit decreases. Good quality data should include different operating points and low noise values. One method to find an optimal network structure is to search through all possible structures. This may be time consuming, but it is often the only practical solution.

2.2 Training algorithms

A properly trained neural network produces outputs close to the actual ones. This is called generalisation. It makes it possible to train a network with a set of input/output pairs and get good results without training with all possible pairs.


The number of hidden layers and nodes has to be selected carefully after several tests. If the local error minimum is not satisfactory, the number of nodes or layers should be increased. Large networks are not accurate in all cases due to overfitting and convergence problems. Sometimes it is not possible to reduce the number of hidden layers and nodes without losing accuracy, although the learning time is very long. In that case there are at least five possible solutions:
• use a numerically better training algorithm
• divide the problem into several sub-problems
• reduce the size of the input vector
• use a pruned network structure instead of a fully connected one
• reformulate the problem

2.2.1 Back-propagation /8/

The network training is based on output error minimisation. The simplest training algorithm is the back-propagation method. The error is propagated backwards and the network parameters are updated in the direction of the negative gradient with respect to the weight parameters (steepest descent gradient method). Unfortunately, the convergence of back-propagation is very slow, and it requires a very small learning rate for stable learning.
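The update described above can be sketched for a tiny network. In the following Python example the network size (1-3-1 with tanh hidden nodes), the learning rate and the target function are arbitrary choices, not taken from the report; the output error is propagated backwards and each weight is stepped down its negative gradient:

```python
import numpy as np

# Back-propagation sketch: a 1-3-1 tanh network fitted to a toy function
# by steepest descent. Sizes, rate and target are arbitrary illustrations.
rng = np.random.default_rng(2)
p = np.linspace(-1.0, 1.0, 50).reshape(1, -1)   # inputs, shape (1, Q)
t = np.sin(np.pi * p)                           # targets

W1 = rng.normal(0, 0.5, (3, 1)); b1 = np.zeros((3, 1))
W2 = rng.normal(0, 0.5, (1, 3)); b2 = np.zeros((1, 1))
lr = 0.05                                       # small rate for stable learning

def forward(p):
    a1 = np.tanh(W1 @ p + b1)                   # hidden layer
    return a1, W2 @ a1 + b2                     # linear output layer

mse0 = np.mean((forward(p)[1] - t) ** 2)        # error before training
for _ in range(2000):
    a1, a2 = forward(p)
    d2 = 2.0 * (a2 - t) / p.shape[1]            # gradient of MSE at the output
    d1 = (W2.T @ d2) * (1.0 - a1 ** 2)          # error propagated backwards
    W2 -= lr * d2 @ a1.T; b2 -= lr * d2.sum(1, keepdims=True)
    W1 -= lr * d1 @ p.T;  b1 -= lr * d1.sum(1, keepdims=True)

mse1 = np.mean((forward(p)[1] - t) ** 2)        # error after training
print(mse0, "->", mse1)
```

Even on this toy problem many iterations are needed, illustrating the slow convergence noted above.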

2.2.2 Momentum and adaptive learning rate /11/

Momentum and adaptive learning rate methods are used to improve the reliability and decrease the learning time of the back-propagation method. The momentum term decreases the method's sensitivity to small details in the error surface. This helps to avoid local minima and find a lower error solution. The learning rate is kept as large as possible while keeping learning stable: it is increased only if the network can learn without a large error increase, and when it becomes too high to guarantee an error decrease, it is decreased until stable learning resumes.
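The combination of a momentum term and an adaptive learning rate can be sketched as follows. The one-parameter error surface, the grow/shrink factors and the error-increase tolerance are all invented for illustration, loosely mirroring the scheme described above:

```python
# Momentum and adaptive learning rate sketch on a toy error surface
# f(w) = (w - 3)^2; all schedule constants are illustrative inventions.
def f(w):
    return (w - 3.0) ** 2

def grad(w):
    return 2.0 * (w - 3.0)

w, v = 0.0, 0.0
lr, mom = 0.1, 0.9                # momentum smooths the update direction
grow, shrink, max_inc = 1.05, 0.7, 1.04

err = f(w)
for _ in range(100):
    v = mom * v - lr * grad(w)    # momentum remembers previous steps
    w_new = w + v
    if f(w_new) > max_inc * err:  # error grew too much: reject step, cut rate
        lr *= shrink
        v = 0.0
    else:
        if f(w_new) < err:
            lr *= grow            # learning is stable: cautiously raise rate
        w, err = w_new, f(w_new)

print(w)
```

Rejected steps reset the momentum and shrink the rate, while stable steps let the rate creep upwards, which is the behaviour described above.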


2.2.3 Optimization /11, 14, 15/

Quasi-Newton (DFP and BFGS), conjugate gradient and Levenberg-Marquardt optimization methods are more sophisticated than back-propagation. However, they require a large amount of computer memory. They are used to make learning even faster and more reliable than momentum and adaptive learning rate methods. The conjugate gradient method is used in very large problems where the objective function is quadratic: if the problem size (the number of variables) is n, the conjugate gradient method converges in at most n steps. Quasi-Newton methods are used in highly non-linear cases.

The Levenberg-Marquardt optimization has become a standard method in function approximation. The weight matrix update rule is

\Delta W = (J^T J + \mu I)^{-1} J^T E    (2.3)

where J is the Jacobian matrix of error derivatives with respect to the weight parameters, µ is a scalar and E is an error vector. If µ is very large, the equation describes the steepest descent gradient method, and when it is small it describes the standard Gauss-Newton based method. The optimization is started with the steepest descent gradient method. The aim of the Levenberg-Marquardt method is to shift towards Gauss-Newton as quickly as possible, since the Gauss-Newton method is much faster near the optimum than the steepest descent gradient method. The µ is selected automatically to guarantee good convergence.
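The update of Equation 2.3 can be sketched on a small curve-fitting problem. The two-parameter model, the data and the µ-halving/doubling schedule below are illustrative inventions (Python/NumPy instead of the Matlab toolbox used in the report):

```python
import numpy as np

# Levenberg-Marquardt sketch (Eq. 2.3) fitting the two parameters of a toy
# model y = exp(w1*x) + w2; model, data and mu schedule are illustrative.
def residuals(w, x, y):
    return np.exp(w[0] * x) + w[1] - y          # error vector E

def jacobian(w, x):
    return np.column_stack([x * np.exp(w[0] * x), np.ones_like(x)])

x = np.linspace(0.0, 1.0, 20)
y = np.exp(0.5 * x) + 0.3                       # noise-free targets

w = np.array([0.0, 0.0])
mu = 1.0                                        # large mu: steepest-descent-like
for _ in range(50):
    E = residuals(w, x, y)
    J = jacobian(w, x)
    dw = np.linalg.solve(J.T @ J + mu * np.eye(2), J.T @ E)
    if np.sum(residuals(w - dw, x, y) ** 2) < np.sum(E ** 2):
        w, mu = w - dw, mu * 0.5                # success: towards Gauss-Newton
    else:
        mu *= 2.0                               # failure: back towards gradient

print(w)
```

Shrinking µ on success and growing it on failure realises the shift from steepest descent towards Gauss-Newton described above.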


3 STATE ESTIMATION

Nowadays interconnected power systems have become more and more complex. To prevent insecure operation (major system failures and regional power blackouts), electric network utilities have installed SCADA systems to support computer based systems at the control centre. These computer systems contain applications for service quality, economic operation, secure operation, system regulation, and operational planning and scheduling. Before any security assessment or control action can be made, the system state must be estimated. On-line state estimation computes solutions of the load flow problem every few minutes using on-line measurement data.

3.1 Power system state estimation

State estimation is used to monitor voltage levels, phase angles, line flows and network topology. It can also be used for short-term load forecasting. State estimation calculates outputs similar to a load flow, but the input data is quite different. The measured values are:
• active and reactive power line flows
• bus voltages at generation and load buses
• active and reactive generation injections
• active and reactive loads
• transformer tap settings
• breaker status.

Direct use of the raw measurements is not advisable and some form of data filtering is needed. The state estimator is also a statistical filter. Redundancy is necessary in a stochastic approach, because measurements are sometimes erroneous or unavailable. In this way the best fit of the input data is obtained rather than a separate processing of individual measurements. Although increased accuracy is desirable, the ability to filter out so-called bad data is more significant. Bad data comes from measuring instrument malfunctions or communication errors. The stochastic approach is also important in handling other factors of uncertainty such as modelling inaccuracies, which result from errors in the values of impedances and incorrect topology determination.

3.2 Least squares method

The data collected from meters contains unavoidable inaccuracies, since measurements contain random error or noise. These errors can be quantified in a statistical sense, and estimated values for the measurements calculated; a measurement is rejected if its accuracy limit is exceeded. The true values are never known, so we have to consider how to calculate the best estimates of the unknown quantities. Usually the best estimates are chosen as those which minimise the sum of squared measurement errors.

3.2.1 Error, measurement and estimate

The actual error is not known. However, the statistical measure associated with the error can be determined; it can be estimated from the calibration curves of the measuring instruments. It is usually given as the standard deviation of the error (E(e_i) = 0 and E(e_i^2) = \sigma^2).

The true but unknown measurement z is related to the true but unknown state vector x, the network admittance parameter vector p and the error e by the relation

z = h(x, p) + e, \qquad x = \begin{bmatrix} \delta \\ V \end{bmatrix}, \quad p = \begin{bmatrix} g \\ b \end{bmatrix}    (3.1)

The actual error is presented in Equation 3.2, where z is the measured value and z_{true} \equiv Hx is the actual value. The network admittance parameter vector is assumed to be known exactly.

e = z - z_{true} = z - Hx    (3.2)

The true values cannot be determined, but the estimates can be calculated. The estimated error is

\hat{e} = z - \hat{z} = z - H\hat{x} = e - H(\hat{x} - x)    (3.3)

where hats indicate estimated values.

3.2.2 Calculation of the state variable estimate

Because of the presence of redundant measurements, the solution is obtained by minimising the sum of squared errors. However, to ensure that measurements from meters of greater accuracy are treated more favourably than less accurate ones, the terms in the sum are multiplied by weight factors w. The weight factors are chosen as the reciprocals of the corresponding variances \sigma_j^2 when the error is a Gaussian random variable (the error is zero on average and its standard deviation is \sigma_j). The objective function to minimise is

f = \sum_{j=1}^{n} w_j e_j^2    (3.4)

The best state variable estimates are those which minimise the objective function. The necessary condition for a minimum of f is that the estimated state variables satisfy Equation 3.5.

\frac{\partial f}{\partial x} = 2 \sum_{j=1}^{n} w_j e_j \frac{\partial e_j}{\partial x} = 0    (3.5)

Equation 3.5 in matrix notation becomes


H^T W \hat{e} = 0    (3.6)

where H^T is the m \times n matrix of derivatives \partial e_j / \partial x_i, W = \mathrm{diag}(w_1, \ldots, w_n) and \hat{e} = (\hat{e}_1, \ldots, \hat{e}_n)^T.

Equation 3.6 takes the following form when Equation 3.1 is taken into account. The matrix H_x is the state estimation Jacobian matrix.

H_x^T W \hat{e} = 0, \qquad (H_x)_{ij} = \frac{\partial h_j}{\partial x_i}, \quad W = \mathrm{diag}\!\left(\frac{1}{\sigma_1^2}, \ldots, \frac{1}{\sigma_n^2}\right), \quad \hat{e}_j = z_j - h_j(x^{(k)})    (3.7)

To solve this equation for the state estimates we need an iterative solution method (Newton's method), because the necessary conditions correspond to a set of non-linear equations. First the function h(x) is linearised at the initial point x^{(0)}.

h(x) = h(x^{(0)}) + H_x^{(0)} \Delta x^{(0)}, \qquad \Delta x^{(0)} = x^{(1)} - x^{(0)}    (3.8)

The estimated state variables are solved from Equation 3.9. If the matrix G is non-singular, Equation 3.10 gives the estimated state variables. If there is a lack of sufficient measurements, G is not invertible.

H_x^{(0)T} W \left[ z - h(x^{(0)}) \right] = H_x^{(0)T} W H_x^{(0)} \Delta x^{(0)}    (3.9)

x^{(k+1)} = x^{(k)} + \underbrace{\left( H_x^T W H_x \right)}_{G}{}^{-1} H_x^T W \left[ z - h(x^{(k)}) \right]    (3.10)
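The iteration of Equation 3.10 can be sketched on a toy problem. The measurement function h, its Jacobian and all numbers below are invented stand-ins for the power flow equations, written in Python/NumPy for illustration:

```python
import numpy as np

# Weighted least squares state estimation, iterating Eq. 3.10 on a toy
# two-state, three-measurement model standing in for the power flow equations.
def h(x):
    return np.array([x[0] + x[1], x[0] * x[1], x[0] ** 2])

def H(x):                                        # Jacobian of h
    return np.array([[1.0, 1.0],
                     [x[1], x[0]],
                     [2.0 * x[0], 0.0]])

sigma = np.array([0.01, 0.02, 0.01])             # measurement std deviations
W = np.diag(1.0 / sigma ** 2)                    # weights = 1 / variance

x_true = np.array([1.0, 0.5])
rng = np.random.default_rng(0)
z = h(x_true) + rng.normal(0.0, sigma)           # noisy redundant measurements

x = np.array([0.9, 0.4])                         # initial guess x^(0)
for _ in range(20):
    Hx = H(x)
    G = Hx.T @ W @ Hx                            # gain matrix G
    dx = np.linalg.solve(G, Hx.T @ W @ (z - h(x)))
    x = x + dx
    if np.max(np.abs(dx)) < 1e-10:               # converged
        break

print(x)
```

With three measurements and two states the redundancy lets the weighted fit average out the measurement noise, as discussed above.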


3.3 Bad data detection

When the system model is correct there is good reason to accept the state estimates calculated by the weighted least squares method. However, grossly erroneous or bad data should be detected and identified so that it can be removed from the state estimate. Bad data detection is based on statistical analysis. The estimated measurement error covariance can be obtained from Equation 3.11.

E[\hat{e}\hat{e}^T] = W^{-1} - H G^{-1} H^T = R'    (3.11)

R' is the measurement error covariance matrix, whose diagonal terms are the error variances.

The true measurement error is unknown, but it can be replaced in the objective function by an estimate. The objective function is itself a random variable which has a probability distribution. In order to use that distribution, we need to know the mean value of the objective function. Equation 3.12 shows that the expected value of the objective function is numerically equal to the number of degrees of freedom. It is also called the measurement redundancy scheme.

E[\hat{f}] = E\left[ \sum_{j=1}^{N_m} \frac{\hat{e}_j^2}{\sigma_j^2} \right] = \sum_{j=1}^{N_m} \frac{R'_{jj}}{\sigma_j^2} = N_m - N_s    (3.12)

The weighted square sum of the objective function has the χ² distribution. The χ² distribution closely matches the standard Gaussian distribution when the number of degrees of freedom is large (>30). In practical power systems the number of degrees of freedom is large, which allows discarding of measurements. However, there is no guarantee that the largest standardised errors always indicate bad measurements. The χ² distribution provides the following test for bad measurement detection.

1. Use raw measurements to determine the state estimates.
2. Calculate the estimated measurement values ( \hat{z} = H\hat{x} ) and the estimated measurement errors ( \hat{e}_j = z_j - \hat{z}_j ).
3. Evaluate the weighted square sum of the objective function.
4. For the appropriate number of degrees of freedom k and a specified probability α, determine whether or not the value of the objective function is less than the critical value corresponding to α. If the test ( \hat{f} < \chi^2_{k,\alpha} ) is true, then the measured data and the state estimates are accepted.
5. If the test is not true, then there is reason to suspect the presence of at least one bad measurement. In that case omit the measurement corresponding to the largest standardised error ( (z_j - \hat{z}_j) / \sqrt{R'_{jj}} ) and re-evaluate the state estimates along with the square sum of the objective function. If the new value satisfies the χ² test, then the omitted measurement has been successfully identified.
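The five-step test above can be sketched numerically. The estimated errors, variances and network sizes below are invented (the third measurement carries a gross error), and for simplicity the standardised error is formed with the meter deviation σ_j rather than √R'_jj:

```python
import numpy as np

# Bad data detection sketch following the five steps above; all values
# are illustrative, with a gross error planted in the third measurement.
e_hat = np.array([0.012, -0.008, 0.150, 0.005, -0.011])  # z_j - z_hat_j
sigma = np.full(5, 0.01)                                  # meter std deviations
Nm, Ns = 5, 2
k = Nm - Ns                       # degrees of freedom
critical = 11.34                  # chi-square critical value, k = 3, alpha = 0.01

f_hat = np.sum((e_hat / sigma) ** 2)          # weighted square sum, Eq. 3.4
if f_hat < critical:
    print("measurements and state estimates accepted")
else:
    # omit the measurement with the largest standardised error, re-estimate
    suspect = int(np.argmax(np.abs(e_hat / sigma)))
    print("bad data suspected at measurement", suspect)
```

In a full implementation the state estimates and the objective function would then be re-evaluated without the suspect measurement, as step 5 prescribes.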



4 NEURAL NETWORKS IN STATE ESTIMATION

This chapter presents a new approach to power system state estimation. Some papers have been published on the use of neural networks in state estimation /9, 18/. In almost every published paper the perceptron network is trained by the back-propagation method. The main idea of this chapter is to present experiences of using the Levenberg-Marquardt training method in power system state estimation. Test results are presented for two test networks.

4.1 State estimation

The neural network approach is based on a multilayer perceptron network. To reduce the size of the output vector, the original neural network is divided into three sub-networks corresponding to voltage, power generation and voltage angle. The input vector, consisting of active and reactive line flows, remains the same for the different sub-networks. The number of hidden layers and nodes must be selected carefully due to possible convergence and overfitting problems; the number of hidden layer nodes was found by several tests. The training and test data were formed by a load flow program, varying the load and generation. The perceptron neural network approach was tested with 9 bus, 9 line (data3m9b) and 14 bus, 20 line (IEEE14) test networks. Table 4.A presents the structure of the proposed neural networks, and Figure 4.A presents an example of the neural network structure.

Table 4.A Number of nodes in input, hidden and output layers.

Test network   input    hidden                 output
                        (V)   (PG,QG)   (δ)    (V)   (PG,QG)   (δ)
data3m9b       18        6     6         6      9     6         8
IEEE14         40 (*)    6     4         6     14     7        13

(*) The number of input nodes in power generation is 26.

[Figure 4.A Example of neural network structure for the nine bus test network (the output is voltage): 18 input nodes for the active and reactive line flows (P14, P28, P36, P45, P49, P56, P67, P78, P89 and the corresponding Q flows), one hidden layer, and 9 output nodes for the bus voltages V1 to V9.]

Both test networks were trained with the Levenberg-Marquardt optimization. The method is implemented in Matlab at the Technical University of Denmark, Institute of Automation /14/. This implementation is faster and uses computer memory more economically than Matlab's own implementation; besides, the toolbox is offered free of charge via the Internet. The training was done with a PC (90 MHz, 32 Mb RAM). In some cases the computer memory became a limiting factor.

4.1.1 Results of 9 bus test network

Table 4.B presents the maximum, mean and mean square error between the actual and the neural network output. The maximum error is calculated as the mean value of each maximum error, and correspondingly the mean error is calculated as the mean value of each mean error; both measures are compared to the actual mean value. The mean square error is calculated from Equation 4.1.

MSE = \frac{1}{N} \sum_{i=1}^{N} \varepsilon_i^2    (4.1)

N is the number of data and ε is the actual output minus the neural network output. The errors are calculated for training and test data. The error in the training data case shows how well the neural network can learn. The error in the test data case presents the network generalisation capability.

Table 4.B Maximum, mean and mean square errors of voltage, power and angle.

Error type   Data type   voltage    power      angle
max (%)      training    0.2        0.4        3.4
             test        0.2        0.4        3.4
mean (%)     training    0.09       0.1        1.4
             test        0.08       0.1        1.3
MSE          training    1.8*10^-5  4.0*10^-5  0.3
             test        1.3*10^-5  3.4*10^-5  0.3
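The error measures above can be sketched for a tiny invented data set. How the percentage measures are normalised by the actual mean is an interpretation of the text; ε is the actual output minus the neural network output, as in Equation 4.1:

```python
import numpy as np

# Error measures of the result tables for a tiny invented data set.
actual = np.array([1.02, 0.98, 1.00, 1.01])
nn_out = np.array([1.019, 0.982, 0.999, 1.012])
eps = actual - nn_out                         # actual minus network output

mse = np.mean(eps ** 2)                       # Eq. 4.1
max_pct = 100 * np.max(np.abs(eps)) / np.mean(np.abs(actual))
mean_pct = 100 * np.mean(np.abs(eps)) / np.mean(np.abs(actual))
print(mse, max_pct, mean_pct)
```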

Table 4.B shows that the voltage amplitude and the power generation are determined correctly; the errors appear in the third or fourth decimal in both values. The voltage angle has a larger error. The voltage amplitude training took 10 minutes (7 iterations), the power generation training took 24 minutes (41 iterations) and the voltage angle training took 50 minutes (50 iterations). The number of training data was 1000 and the number of test data was 100; the test data were not included in the training data. This amount of data is sufficient for the problem, and further data would not bring any significant improvement. The error between the actual and the neural network output is presented in Figure 4.B for five cases. The error deviation between buses and the actual error can be seen in this figure. The character P in the power generation subfigure marks the active power and similarly the character Q marks the reactive power.

17

N e u r a l n e t wo r k s i n s t a t e e s t i ma t i o n

Figure 4.2 Error between actual and neural network output (subfigures: voltage amplitude [p.u.], power generation [p.u.] for 1P-3P and 1Q-3Q, and voltage angle [degree], plotted against bus number).

4.1.2 Results of 14 bus test network

Table 4.3 presents the errors for the IEEE14 test network. The voltage amplitude is determined correctly: the error appears in the second or third decimal. The error of the power generation is smaller than the error of the voltage, because the input vector has been chosen particularly for this case. The voltage angle has a larger error, of the order of one degree in absolute terms, but the percentage error is small enough. The number of training data was 2000 and the number of test data was 100.

Table 4.3 Maximum, mean and mean square errors of voltage, power and angle.

Error type    max (%)           mean (%)          MSE
              training  test    training  test    training   test
voltage       1.8       1.8     0.6       0.6     1.0*10^-3  9.8*10^-4
power         1.2       1.1     0.4       0.3     1.7*10^-4  1.2*10^-4
angle         9.9       9.1     3.8       3.4     5.6        5.7

The error between the actual and neural network output is presented in Figure 4.3 for five cases. The error deviation between buses and the actual error can be seen in this figure. The character P in the power generation subfigure marks the active power, while the character Q marks the reactive power.

Figure 4.3 Error between actual and neural network output (subfigures: voltage amplitude [p.u.], power generation [p.u.] for P1, P2 and Q1-Q3, Q6, Q8, and voltage angle [degree], plotted against bus number).

In this case the 32 MB main memory of the PC restricts the maximum number of voltage amplitude and angle training data to 700, which is not sufficient for the problem. The training becomes very slow when the PC starts to swap data to the hard disk. A UNIX operating system has no such memory limits, so the training was done with the Alpha cluster of the university. The cluster consists of several DEC Alpha processors connected together through a fast ATM network. The training of the power generation network was done on the PC.

The reader should remember that these approaches are test cases, not a complete set of computation tools, and that the errors of the neural networks are only indicative. The aim of these approaches was to ascertain the capability of the neural networks and training algorithms, measured by the training time and the required computer capacity. The reader should also remember that these neural networks can be further developed to reduce the error, training time and required computer capacity. For example, the power generation in the IEEE14 case was determined much more accurately when some knowledge about the system studied was used during the selection of input nodes; the knowledge was used to reduce the number of input nodes.

4.2 Reduction of training time

The former case showed clearly the importance of reducing the number of input nodes. In most cases a long training time is due to an overlarge neural network. The size of the neural network can be reduced by input node and connection reduction. However, there are only a few sophisticated calculation methods for achieving this.

4.2.1 Reduction of input nodes

Reduction of input nodes decreases the number of parameters. The reduction should be based on knowledge of the identified system. This is very important, but also very difficult; one possible method is trial and error. In state estimation one possible solution for input node reduction is to reformulate the problem. Why should the state estimate be calculated for all buses at the same time? It should be possible to calculate the voltage amplitude and angle for only a single bus (Figure 4.4). The input would be the line flows to and from that bus. In this case the maximum number of input nodes would be about 20 (5 lines) and the number of output nodes would be two. The input could also include earlier outputs via delay units.


Figure 4.4 Proposal for reduction of input nodes (inputs: line flows P12, P13, Q12, Q13, ... of a single bus plus delayed outputs; outputs: V1 and δ1).
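The single-bus estimator of Figure 4.4 is an ordinary one-hidden-layer perceptron. A minimal forward-pass sketch, with made-up weights and line flows, could look like this:

```python
import math

def mlp(x, W1, b1, W2, b2):
    """One hidden layer with tanh units, linear output layer."""
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    return [sum(w * hi for w, hi in zip(row, h)) + b
            for row, b in zip(W2, b2)]

# hypothetical line flows to/from one bus: P12, P13, Q12, Q13 [p.u.]
x = [0.8, 0.4, 0.2, 0.1]
# made-up weights: 3 hidden nodes, 2 outputs (voltage V1 and angle d1)
W1 = [[0.1, 0.2, -0.1, 0.3], [0.0, -0.2, 0.1, 0.1], [0.2, 0.1, 0.1, -0.3]]
b1 = [0.1, -0.1, 0.0]
W2 = [[0.5, -0.3, 0.2], [0.1, 0.4, -0.2]]
b2 = [1.0, 0.0]
V1, d1 = mlp(x, W1, b1, W2, b2)
```

In the trained network the weights would of course come from the Levenberg-Marquardt training, not be chosen by hand.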

4.2.2 Reduction of connections

Because the perceptron network is a global network, it is impossible to see directly which connections are the most important ones. Nevertheless, reduction of connections is a very efficient way to decrease the training time. The Neural Network Based System Identification Toolbox offers Matlab functions for optimal brain damage (OBD) and optimal brain surgeon (OBS). These functions apply the OBD and OBS strategies to prune the neural network. The network leading to the smallest FPE (final prediction error) or test error is chosen as the new network. Unfortunately these methods are time-consuming, even for small networks. Figure 4.5 presents the OBS for the 9 bus test network, where the connections of the voltage amplitude neural network have been pruned. The connection reduction is started from 177 connections. The error stays quite small until it blows up at around 70 connections.


Figure 4.5 Optimal brain surgeon for the nine bus network (+ = FPE, x = training error, o = test error; error plotted against the number of connections, 0-200).

Another way to reduce the number of connections is to use knowledge about the input vector and the system studied. For example, in state estimation it is reasonable to assume that active and reactive line flows are not very strongly related to each other. Furthermore, the number of lines is always larger than the number of buses in practical power systems. This leads to a situation where most connections are between the input and hidden layers, so the most effective way is to reduce connections between the input and the hidden layers. Figure 4.6 presents a proposal for the connection reduction, in which the number of connections between the input and hidden layers is halved.


Figure 4.6 Proposal for reduction of connections (active flows P12, P13, P23 and reactive flows Q12, Q13, Q23 as inputs; voltages V1, V2, V3 as outputs).
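OBD and OBS rank connections by a saliency computed from second-order (Hessian) information. As a simplified stand-in, the sketch below prunes by weight magnitude only, to illustrate the mechanics of removing the least important connections; all names and values are invented for the example:

```python
# Simplified illustration of connection pruning. Real OBD/OBS use a
# second-order saliency from the Hessian; this sketch uses the simpler
# magnitude criterion (smallest |weight| first) to show the mechanics.

def prune_smallest(weights, n_remove):
    """Zero out the n_remove connections with the smallest magnitude.
    weights: dict mapping connection id -> weight value."""
    by_magnitude = sorted(weights, key=lambda k: abs(weights[k]))
    pruned = dict(weights)
    for k in by_magnitude[:n_remove]:
        pruned[k] = 0.0
    return pruned

w = {('in1', 'h1'): 0.8, ('in2', 'h1'): -0.05,
     ('in1', 'h2'): 0.3, ('in2', 'h2'): 0.01}
w_pruned = prune_smallest(w, 2)   # removes the two weakest connections
```

In an OBD/OBS loop one would retrain after each pruning step and track the FPE or test error, as in Figure 4.5.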

4.2.3 Other training methods

Batch training methods

The less memory-consuming Levenberg-Marquardt implementation (marqlm) is also included in the Neural Network Based System Identification Toolbox. It makes it possible to enlarge the number of training data or connections. This method is, however, much slower than the function marq used in this study. The Matlab Neural Network Toolbox and the Neural Network Based System Identification Toolbox include functions for the back-propagation method. These use computer memory much more efficiently than Levenberg-Marquardt, but their convergence is very poor. The usability of these methods was found to be very limited in the state estimation approach.

Recursive training methods

Most often the disadvantages of recursive training methods are overwhelming compared to batch methods (Levenberg-Marquardt and back-propagation). A recursive method may be suitable for very large problems where lack of computer memory is a problem, or for data sets with a large degree of redundancy. State estimation has both of these features, so recursive training might be very useful here.


The Neural Network Based System Identification Toolbox includes the recursive prediction error method ("recursive Gauss-Newton"). This Matlab function can use three different recursive methods: exponential forgetting, constant trace and the so-called exponential forgetting and resetting algorithm (EFRA).
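For a model that is linear in its parameters, recursive estimation with exponential forgetting reduces to the familiar recursive least squares recursion. The following sketch, with a scalar parameter and made-up data, illustrates the kind of update that the recursive prediction error method generalises to neural networks:

```python
# Recursive least squares with exponential forgetting (lam < 1 discounts old
# data) for a scalar linear model y = theta * u.

def rls_step(theta, P, u, y, lam=0.98):
    """One update of the parameter estimate theta and 'covariance' P."""
    k = P * u / (lam + u * P * u)        # gain
    theta = theta + k * (y - theta * u)  # prediction error correction
    P = (P - k * u * P) / lam            # covariance update with forgetting
    return theta, P

theta, P = 0.0, 100.0                    # vague initial guess, large P
for u in [1.0, 2.0, 0.5, 1.5, 1.0]:
    y = 3.0 * u                          # data generated by true parameter 3
    theta, P = rls_step(theta, P, u, y)
```

One data point is processed at a time, so the memory requirement does not grow with the size of the data set, which is exactly the property that makes recursive methods attractive for state estimation.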

4.3 Applicability to practical size power systems

Neural network training is very time consuming if the size of the data and the input vector are large. The training time can be reduced by the several methods described in the previous section. Network and data partitioning should also be used.

The first idea is to divide the power system into several areas and train a separate neural network for each of them. In practical networks it is recommended to choose the neural networks equivalent to the voltage control areas, because these form functionally complete structures and any change in the area has a considerable effect on its buses.

Another idea is to separate the normal connection state and the contingency states from each other in the neural network model. The normal state neural network corresponds to the normal connection state (i.e. all lines connected). Contingency neural networks are needed when the network topology changes. The generators are not included in the topology determination, because the power generation is calculated in state estimation. Line disconnections are checked from the breaker status information. The normal connection neural network is used if the breaker status information does not include a contingency; conversely, the contingency neural network is used if a contingency is included.

The whole system can be adjoined with a neural network classifier or a database. The network topology is included in a model bank. The classifier recognises the input vector structure and in most cases chooses the right model /19/. The database checks which model is equivalent to the input vector. The idea of both methods is described in Figure 4.7 /20/. The input is the breaker status information, the model corresponds to the network topology and the output shows which neural networks are needed in state estimation. The models should be arranged in some logical order to allow an intelligent search algorithm /6/.

Figure 4.7 Model bank (the input u is fed to the system, giving output y, and to models 1...n, giving outputs y1...yn, which are compared with y).
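The database variant of the breaker-status based model selection can be sketched as a simple lookup; the topologies and model names below are invented for illustration:

```python
# Sketch of the model bank lookup: breaker status information selects which
# neural network model is used in state estimation. All names are illustrative.

model_bank = {
    (1, 1, 1): 'normal state model',            # all breakers closed
    (0, 1, 1): 'contingency model: line 1 out',
    (1, 0, 1): 'contingency model: line 2 out',
}

def select_model(breaker_status):
    """Return the model matching the topology, if present in the bank."""
    return model_bank.get(tuple(breaker_status), 'unknown topology')

chosen = select_model([1, 0, 1])
```

A classifier would replace the exact lookup with a nearest-match decision, which also tolerates topologies that were not stored in the bank.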

4.4 Conclusions

According to this study, neural networks can be used to identify power system state estimates. Practical size networks should be studied to ascertain the real usability of neural networks in state estimation. In doing so, the following observations should be taken into account.
• The topology determination is very important, because the number of input nodes should be kept small.
• The reduction of training time should be studied. The most promising approach seems to be the reduction of input nodes.
• The number of hidden layer nodes should be selected carefully.


5 VOLTAGE STABILITY MANAGEMENT

Voltage stability management is basically a risk management problem: the system needs to be kept in a safe and economic state. Risk management is the company's strategy for minimizing the costs of risk occurrence and prevention. The strategy can be based on reduction, elimination, avoidance, spreading (insurance) and acceptance of risks.

5.1 Risk management

The technical part of risk management is mostly realized by automatic and manual controls. Figure 5.1 presents the basic idea of security control: in the normal state all values are between their limits, in the secure state the controls minimize operation costs, in the alert state they reschedule preventive actions, in the emergency state they try to avoid instability and in the restoration state they restore the unstable system back to the secure state. Here voltage stability is the ability to operate in a stable manner and to remain stable during and after credible contingencies or load increases /3/.

Figure 5.1 Power system security control (states secure, normal, alert, emergency and restoration, with transitions through optimization, preventive control, corrective control and system restoration).

The risk management optimum is the minimum of all risk management costs over the whole life cycle of the system. In practice the optimum is almost impossible to find. Documentation of all security improvement actions and costs is required to find out the cost trend and the effect of the actions; the trend and the effect should be analysed to optimise risk management. Other requirements for finding the optimum are knowledge of the real risks, their causes, probabilities and consequences. Figure 5.2 shows the risk management optimum and its components.

Figure 5.2 Risk management optimum (the sum of occurrence costs and preventive costs has a minimum as a function of risk management actions).

5.2 Voltage collapse

The purpose of a transmission network is to transmit active power from the generators to the loads as cheaply as possible within security limits. Voltage stability may be a limiting factor for power system operation when the system is stressed /10, 21/. The system may be stressed by a minor (load increase) or a major (line or generator tripping) disturbance. Voltage stability has become a major threat in many power utilities, because
• power systems are operated closer to their security limits
• generation is centralized in fewer and larger plants
• use of shunt compensation is extensive
• the transient angle stability limits are increased, which may result in voltage stability problems
• new transmission lines and generators are difficult to build near load centers for environmental reasons


Voltage collapse results from voltage instability and is characterized by a monotonic decrease in voltages below acceptable limits. Sometimes voltage stability is called ”load stability”. Voltage collapse behavior is dependent on the time frame of the power system components (Figure 5.3).

Figure 5.3 Voltage stability time responses /21/ (time scale from 0.1 s to 10^4 s; transient voltage stability phenomena include induction motor dynamics, generator/excitation dynamics, SVCs, DC converters and generator inertial dynamics, while long-term voltage stability phenomena include load/power transfer increase, LTC transformers and distribution voltage regulators, excitation limiting, undervoltage load shedding, prime mover control and boiler dynamics, load diversity/thermostats, gas turbine start-up, generation change/AGC, line/transformer overload protection and power plant and system operator actions).

Voltage collapse may happen suddenly (transient voltage stability) or within a long-term time frame (static voltage stability). Transient voltage stability is extremely difficult to decouple from angle stability; it is a question of which happens first and causes the other. Static voltage collapse can be decoupled from angle stability and has usually happened due to a lack of reactive power. The reasons for voltage collapse are various:
• power transmission is too heavy or the load increases faster than expected
• capacitors are connected, power production is increased or reactors are disconnected too late
• reactive power reserves are too far away from the critical load area, generators are at their reactive power limits or the activation of reserves has not succeeded properly
• tripping of a line, generator or compensation device
• behaviour of thermostatic and constant power loads
• the stabilizing effect of voltage dependent loads is eliminated by transformers' tap changers

5.2.1 Bifurcation point

Every system has a critical point where the load flow Jacobian matrix becomes singular and a load flow solution does not exist. In the static case the maximum loadability point is a saddle node bifurcation point. The voltage collapse point is, however, dependent on the load characteristics, as shown in Figure 5.4.

Figure 5.4 Bifurcation point in the PV-curve (voltage V2 [kV] versus power P [MW]; the operation point lies on the upper side of the curve, the maximum loadability point at its nose, and the bifurcation point at the intersection with the long-term load characteristic).

In this case the voltage collapse point, i.e. the bifurcation point, is not the maximum loadability point. The operation point is determined by the load and PV-curves. Normally the operation point is situated at the upper side of the PV-curve, but in some cases the operation point might be at the lower side. The bifurcation point is the maximum point when the load characteristic is taken into account /2/. The system is not capable of maintaining an acceptable voltage at the bifurcation point.

An equilibrium between the maximum loadability point and the lower side bifurcation point is stable if control actions are not taken into account. However, the operation of transformer tap changers, for example, is not correct at the lower side of the PV-curve: when the tap changer increases the transmission voltage at the lower side of the PV-curve by stepping down the tap, the load power also increases. Transformer tap changers can operate in the wrong direction even at the upper side of the PV-curve, when they try to maintain the distribution voltage at an acceptable level during a major disturbance.
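The PV-curve of Figure 5.4 can be illustrated with a standard lossless two-bus example: a source E behind reactance X feeding a load P + jQ (the numerical values below are made up). The load flow equations give a quadratic in V², whose two roots are the upper and lower sides of the curve; the nose (maximum loadability) is where the discriminant reaches zero.

```python
import math

# Receiving-end voltage of a lossless two-bus system: source E behind
# reactance X feeding a load P + jQ. The load flow equations give
#   V**4 + (2*Q*X - E**2)*V**2 + X**2*(P**2 + Q**2) = 0,
# with two solutions: the upper and lower sides of the PV-curve.

def pv_voltages(P, Q, E=1.0, X=0.1):
    """Return (V_upper, V_lower) in p.u., or None beyond maximum loadability."""
    b = 2.0 * Q * X - E**2
    disc = b**2 - 4.0 * X**2 * (P**2 + Q**2)
    if disc < 0.0:
        return None                       # no load flow solution: past the nose
    v2_hi = (-b + math.sqrt(disc)) / 2.0
    v2_lo = (-b - math.sqrt(disc)) / 2.0
    return math.sqrt(v2_hi), math.sqrt(v2_lo)

# trace the PV-curve for a unity power factor load
for P in [0.0, 2.0, 4.0, 6.0]:
    sol = pv_voltages(P, Q=0.0)           # None once P exceeds the nose
```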

5.3 Voltage stability analysis

The power system should operate continuously with a sufficient voltage stability margin. The margin can be determined between the operation point and the voltage collapse point. Probably the best determination of the margin is the pre-contingency security limit: the margin is computed for the most stressed pre-contingency situation such that the system remains stable after the contingency /3/. This is the most meaningful and useful voltage stability index for an operator. It is, however, computationally demanding, and contingency selection is needed.

The voltage stability indices are used to monitor the power system state and to plan power system investments. Table 5.1 lists six widely used static voltage stability indices with their advantages and disadvantages. They are described in more detail in /1, 4/.


Table 5.1 Static voltage collapse indices.

Post-contingency load flow
  + simple
  + fast
  - convergence problems
  - no information about collapse
Continuation power flow
  + trajectory between operation and collapse point
  + tangent vector
  + loadability limit
  - requires calculation of many load flows
Point of collapse method
  + exact voltage collapse point
  + eigenvectors
  + loadability limit
  - reliable only near the bifurcation point
Optimal reactive power margin
  + reactive power limits are easy to implement
  + sensitivity vector
  + reactive margin
  - does not handle active power
  - reliable only near the bifurcation point
Modal analysis
  + indicator
  + control devices can be implemented
  - reliable only near the bifurcation point
  - huge amount of information
Minimum singular values
  + singular vectors
  - index collapses suddenly

5.4 Voltage stability monitoring and control

5.4.1 Corrective control to avoid voltage collapse

Corrective controls need system wide measurements and real time computation tools to avoid voltage collapse. The power system state has to be determined first. In the preventive mode, system security determination is based on the computation of contingencies and load increases. In the emergency mode there is no time to compute contingencies. However, system-wide controls are needed to notice a voltage collapse threat and to select corrective control actions. Table 5.2 shows different actions to avoid voltage collapse /17/.

Table 5.2 Actions to avoid voltage collapse.

Preventive actions
• expensive or fast units near the load center are started up
• generation rescheduling
• secondary voltage control
• shunt compensation switching

Emergency actions
• OLTC return to a pre-determined position
• OLTC blocking
• load shedding
• quick rise in generator voltage

Exceptional actions
• tariff signals to decrease the load
• cancellation of transmission line maintenance operations

Development of operation
• courses etc.
• simulator training to prevent voltage collapse

5.4.2 A new real time voltage stability monitor and control system

The monitor and control system should guide the power system operation by proposing preventive and emergency actions and by indicating the secure transmission capacity. Its primary aim is security control; cost minimization is a secondary task. The system should indicate what to do to maintain the normal state. Figure 5.5 presents the idea.

Figure 5.5 Voltage stability monitoring and control (state estimation and problem estimation feed preventive action proposals, emergency action decision making and loss minimization for the system).


State estimation stage

Voltage stability cannot be estimated from the voltage amplitude alone; a reliable indicator is needed to define the power system state. Many good indicators have been created for practical networks, but they are computationally heavy and difficult to apply /1/. Much theoretical and practical knowledge of voltage stability is also needed to apply the indicators to power system operation. A neural network can be trained to approximate the system margin between the operation point and the voltage collapse point. The neural network voltage stability state estimation approach and, for example, the point of collapse method together constitute a fast and reliable voltage stability state estimation tool (Figure 5.6). In this way the static state estimation would be fast enough to maintain long-term voltage stability in practical size networks.

Figure 5.6 Computing of the state estimation stage (CPU requirement over time: the point of collapse method estimates the system state and determines the critical area, while between the POC runs the artificial neural network approximates the system state and determines the margin).

Neural networks can be trained for the critical areas off-line, or with adaptive parameter training during the on-line process. An adaptively trained network would be slower than an off-line trained network, but its accuracy would be better. The neural network approximates the system state with the margin calculated for the critical area. The input vector may consist of network measurements (line flows and generator power injections) and state estimation results (voltages) from the critical area. The output could be a measure of security, for example a number between 0 (= unstable) and 1 (= stable), or a transmission capacity in MW. The delays are used to stabilize the output between sequential measurements.

Figure 5.7 Neural network for voltage security margin determination (inputs: network measurements and state estimation results plus delayed outputs; output: the margin).
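One simple way to obtain a security measure between 0 and 1, as described above, is to clip a scaled MW margin to that interval; the scaling limit below is purely illustrative:

```python
# Illustrative mapping of a MW margin to a security number in [0, 1]
# (0 = collapse, 1 = secure). The maximum margin used for scaling is made up.

def security_measure(margin_mw, margin_max_mw=500.0):
    """Clip margin_mw / margin_max_mw to the interval [0, 1]."""
    return max(0.0, min(1.0, margin_mw / margin_max_mw))

s = security_measure(250.0)   # half of the assumed maximum margin
```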

Problem estimation stage

If the state estimation stage finds a possible voltage stability problem, the process continues in the problem estimation stage. The aim of this stage is to find a "reason" for the problem. This information is needed in the proposal stage, where preventive or emergency actions are studied.

The reason for the problem can be found, for example, by model-based diagnostics (Figure 4.7). Neural network based diagnostics have been used successfully in pattern recognition tasks, because neural networks can model non-linear systems. They are used as classifiers to recognise the state or the parameters of the system. The neural network is trained to calculate classes corresponding to the normal state or to known failure states or parameters. Parameter estimation is, however, very difficult in practice.


State estimation based diagnostics can be used in power systems. The power system state is compared to the models and the most suitable one is chosen to approximate the system. The model can be chosen, for example, by the Kohonen classifier /19/; the method is based on the analysis of the models' residuals. The chosen model includes information about the magnitude and location of the failure. Ideally the model bank would contain all possible models, but it needs only the most essential models to work properly, and its content can be supplemented afterwards.

The problem estimation stage can also take advantage of the indices calculated in the state estimation stage. The methods described in Section 5.3 calculate different "sensitivity" vectors, which show the most critical area in the system. Measurements can also be used in problem estimation. For example, the generator reactive power injections can be used to check the reactive power reserves in different contingencies, or directly to show how much reactive power support is left in the current situation.

Proposal stage

The aim of the proposal stage is to choose the actions that maintain or restore the system to the secure state. This is the most difficult part of the decision process. The stage should eliminate unnecessary state improvement options: during a threat of voltage collapse the operator does not have time to check long lists, because the voltage collapse behavior is very complex. The computer should decide which improvement options are the most promising ones.

The problem is the huge amount of information: it is not possible to calculate the consequences of all possible options, so heuristics are needed at this stage. A neural network classifier, fuzzy reasoning, a decision tree, optimization, object oriented programming or an expert system can compare the different options. They are placed in order based on the information produced in the state and problem estimation stages.


Decision making

The state estimation, problem estimation and proposal stages are tools which the operator needs to filter information. With these tools the operator can manage the information and choose the best actions to prevent voltage collapse. The decision support system recommends some choices from which the operator can choose. The decision making is divided into three parts:
1. Transmission cost minimization (without losing voltage stability)
• minimization of transmission losses
• maximization of the stability margin
2. Preventive actions
• reactive power compensation
• generator starting (expensive or fast units near the critical load)
• optimization of reserves
3. Emergency actions
• transformer tap changer blocking
• generator voltage setting
• network partitioning
• load shedding

Decision support system

The state estimation, problem estimation and proposal stages together are called the voltage stability decision support system. The decision support system selects the essential information for the decision making. All the information should support the operator's work, because voltage collapse is a very rare incident. The decision support system outputs must be explicit, illustrative, reliable and simple to read and understand. Figure 5.8 presents the decision support system and possible ways to implement it.


Figure 5.8 Decision support system (stages: state estimation, problem estimation, proposal, decision, check and actions; possible implementations include the ANN margin, the point of collapse method, classified models, eigenvectors, reactive power reserves, decision making criteria and superiority ordering, the operator's experience and risk, steady state load flow, neural networks, optimization, decision trees, fuzzy logic, C++, hypertext and expert systems).

The decision support system can be used in on-line and off-line mode. In on-line mode the system state is monitored as described above. The off-line mode is used to help on-line decision making, training, or system and operation planning.
• The state estimation can be used to calculate the power system state after an investment or an operation. The decision making can be helped by calculating the system states after various contingencies.
• The problem estimation can be used to choose the optimal place for an investment according to voltage stability. The critical area determination can be used in reactive power reserve and load shedding planning.


6 CONCLUSIONS

This report is a preliminary survey of neural network based modelling in power systems. The purpose of this report is to ascertain the capability of neural networks for power system approaches. The requirements of neural network modelling have also been studied.

The neural network based state estimation approach was studied as a test case in this report. The capability of the multilayer perceptron neural network was found to be very good. According to this result, neural network based modelling is suitable for power system applications. Neural network based modelling will be studied in the voltage stability monitor and control system in the future.

The requirements of neural network modelling are a large amount of good quality data, large computation capacity, knowledge about neural network modelling and a suitable software package. The data, computer and software requirements are not limiting factors, because the data can be measured or generated, and the required computation capacity and software package can be bought. Knowledge is probably the most limiting factor in the use of neural networks in practical size power systems. The size of the input vector has to be reduced to decrease the learning time, and this reduction requires knowledge about the system studied and about neural networks.

Some practical results of neural network modelling were also found. The most interesting neural network types are the multilayer perceptron, radial basis function and Kohonen networks. In the choice of the training algorithm, the memory requirement and convergence properties have to be taken into account. As mentioned earlier, the size of the input vector is related to the learning speed; to reduce the learning time, the size of the input vector or the number of interconnecting weights should be reduced. The optimal network structure (the number of hidden layers and nodes) is also closely related to the learning time.

The benefit of neural network based modelling over traditional mathematical computation is its extremely fast computation. The time-consuming learning can be done in off-line mode, while the on-line computation of the neural network output requires only a couple of matrix-vector multiplications. Another benefit of neural networks is n-dimensional non-linear function approximation or classification; the traditional non-linear function approximation tools are more suited to two or three dimensional cases.

Neural networks are a very promising area of mathematical modelling in the field of electrical power systems, and the proposed state estimation approach, being both accurate and fast, supports this statement. The clearest benefits of neural network modelling are in time critical systems. The applicability of neural networks to the voltage stability monitor and control system is very promising.


BIBLIOGRAPHY

1. Bastman, Juhani and Dahlman, Janne. Sähkönsiirtoverkon jännitestabiilisuuslaskentamallien toteutus ja testaus [Implementation and testing of voltage stability computation models for the power transmission network]. Tampere University of Technology, report 3-95, 1995, 72 pp.
2. Cañizares, Claudio A. On bifurcations, voltage collapse and load modelling. IEEE Transactions on Power Systems, Vol. 10, No. 1, 1995, pp. 512-522.
3. Van Cutsem, Thierry. Voltage stability and security analysis. EES-UETP course, Helsinki, March 7-8, 1996.
4. Dahlman, Janne. Sähkönsiirtoverkon jännitestabiilisuuden laskentamenetelmät [Computation methods for power transmission network voltage stability]. Tampere University of Technology, M.Sc. thesis, 1995, 87 pp.
5. Debs, Atif S. Modern power systems control and operation: a study of real-time operation of power utility control centers. Georgia Institute of Technology, 1991, 361 pp.
6. Dougherty, Edward R. and Giardina, Charles R. Mathematical methods for artificial intelligence and autonomous systems. Prentice-Hall, 1988, 446 pp.
7. Grainger, John J. and Stevenson, William D. Power system analysis. McGraw-Hill, 1994, 787 pp.
8. Haykin, Simon. Neural networks: a comprehensive foundation. New York, Macmillan College Publishing Company, 1994.
9. Kumar, D. M. Vinod et al. Topology processing and static state estimation using artificial neural networks. IEE Proceedings: Generation, Transmission and Distribution, Vol. 143, No. 1, January 1996, pp. 99-105.
10. Kundur, Prabha. Power system stability and control. McGraw-Hill, EPRI, 1994, 1176 pp.
11. The MathWorks Inc. Neural network toolbox user's guide. Natick, Massachusetts, MATLAB, 1995.
12. Niebur, D. Artificial neural networks for power systems. CIGRE Task Force 38.06.06, April 1995, pp. 77-101.

B ib lio gr ap hy

13. Nissinen, Ari. Structural optimization of feedforward neural networks using genetic algorithm. Tampere University of Technology, report 5/94, 80 pp. 14. Nørgaard, Magnus. Neural network based system identification toolbox. Technical University of Denmark, Technical report 95-E-773, 1995, 82 pp. 15. Press, Flannery, Teukolsky and Vetterling. Numerical recipes, the art of scientific computing. Cambridge, 1986, 818 pp. 16. Raiskila, Pertti. Monikerroksisella perceptronilla toteutettu neuraalisäätäjä. Tampere University of Technology, report 4/89, 73 pp. 17. Repo, Sami. Sähkönsiirtoverkon jännitestabiilisuuden arviointi ja parantaminen. Tampere University of Technology, M.Sc. thesis, 1995, 80 pp. 18. Salehfar, H. and Zhao, R. A neural network preestimation filter for bad-data detection and identification in power system state estimation. Electrical power system research, 34, 1995 127-134 pp. 19. Sorsa, Timo. Neural network approach to fault diagnosis. Doctoral thesis, Tampere University of Technology, Publications 153, Tampere 1995, 65 pp. 20. Suontausta, Janne. Dynaamisten järjestelmien vikadiagnostiikka neuraaliverkolla. Tampere University of Technology, report 5/92, 84 pp. 21. Taylor, C. W. Power system voltage stability. McGraw-Hill, EPRI, 1994, 270 pp. 22. Vankayala, Vidya Sagar S. and Rao, Nutakki D. Artificial neural networks and their applications to power systems — a bibliographical survey. Electrical power system research, 28, 1993, 67-79 pp.

41


APPENDIX

Listed here are three m-files, the most important ones for neural network training and evaluation.

Training

% load test network and choose working directory
data3m9b; tiedosto=['data1'];
%ieee14; tiedosto=['data2'];
kysy=input('Do you want to create a new training data-matrix y/n ','s');
if kysy=='y', datamat(bus,line,val); end

% initial weights and biases, and load input and output data
[w1,w2,w3,w4,w5,w6,P,T1,T2,T3]=lataus(tiedosto);

% options of Levenberg-Marquardt
max_iter=20;       % max iterations
stop_crit=1e-04;   % stopping criterion
lambda=1;          % initial weight decay
D=0;               % row vector of weight decay parameters: first element for
                   % hidden-to-output, second for input-to-hidden weights
trparms=[max_iter stop_crit lambda D];

% training of voltage
if val==1, netdef=['HHHHHH---';'LLLLLLLLL']; end
if val==2, netdef=['HHHHHH--------';'LLLLLLLLLLLLLL']; end
[w1,w2,NSSE,iter,lambda]=marq(netdef,w1,w2,P,T1,trparms);

% training of power injection
if val==1, netdef=['HHHHHH';'LLLLLL']; end
if val==2, netdef=['HHHHHH----';'LLLLLLLLLL']; end
[w3,w4,NSSE1,iter1,lambda1]=marq(netdef,w3,w4,P,T2,trparms);

% training of angle
if val==1, netdef=['HHHHHH--';'LLLLLLLL']; end
if val==2, netdef=['HHHHHH-------';'LLLLLLLLLLLLL']; end
[w5,w6,NSSE2,iter2,lambda2]=marq(netdef,w5,w6,P,T3,trparms);

% save weight matrices and bias vectors
talletus(w1,w2,w3,w4,w5,w6,tiedosto);
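The marq routine used above performs Levenberg-Marquardt training: each iteration solves (JᵀJ + λI)Δw = Jᵀe, where J is the Jacobian of the network output with respect to the weights and e is the residual, so that λ interpolates between Gauss-Newton and gradient descent. A toy Python illustration of the update on a linear least-squares problem (the names here are illustrative, not the toolbox API):

```python
import numpy as np

def lm_step(J, e, w, lam):
    """One Levenberg-Marquardt update: solve (J'J + lam*I) dw = J'e."""
    A = J.T @ J + lam * np.eye(J.shape[1])
    return w + np.linalg.solve(A, J.T @ e)

# toy problem: model f(w) = X @ w, residual e = y - f(w),
# so the Jacobian of the model is simply X
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true

w = np.zeros(3)
lam = 1.0
for _ in range(10):
    w = lm_step(X, y - X @ w, w, lam)
    lam *= 0.1      # decrease lam as the fit improves (Gauss-Newton limit)
print(w)            # close to w_true
```

For a real network the Jacobian changes with w and λ is adapted up or down depending on whether the error decreased, but the linear algebra of each step is exactly this.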

Data creation

function [P,T1,T2,T3]=satluold(kpl,bus,line,val);
apu=bus;               % original bus-matrix
iter_max=20;           % loadflow max iterations
luku=0;                % amount of accepted data vectors
kk=find(bus(:,6)~=0);  % network has kk load buses
k=size(kk,1);          % number of coefficient-vector for loads
tt=find(bus(:,4)~=0);  % network has tt generation buses
t=size(tt,1);          % number of coefficient-vector for generators

% until data-matrix contains kpl vectors
while luku
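The loop above (truncated in this listing) follows a simple acceptance pattern: draw a random loading condition, run a load flow, and add the case to the data matrix only if the load flow converges, until kpl vectors have been collected. A hedged Python sketch of that pattern, where solve_loadflow is a hypothetical stand-in for the actual load-flow routine:

```python
import numpy as np

def solve_loadflow(loads):
    """Hypothetical stand-in: return a state vector (voltages and angles),
    or None if the load flow fails to converge."""
    if loads.sum() > 2.5:                 # toy divergence criterion
        return None
    return np.concatenate([1.0 - 0.1 * loads, -0.2 * loads])

def make_training_data(kpl, n_load, rng):
    """Collect kpl accepted (input, target) pairs, one column per case."""
    P, T = [], []
    luku = 0                              # amount of accepted data vectors
    while luku < kpl:                     # until data matrix contains kpl vectors
        loads = rng.uniform(0.0, 1.0, size=n_load)   # random loading
        state = solve_loadflow(loads)
        if state is None:                 # reject non-converged cases
            continue
        P.append(loads)
        T.append(state)
        luku += 1
    return np.array(P).T, np.array(T).T

P, T = make_training_data(kpl=10, n_load=3, rng=np.random.default_rng(2))
print(P.shape, T.shape)   # (3, 10) (6, 10)
```

Rejecting non-converged cases keeps the training set inside the feasible operating region, which is what the accepted-vector counter in the m-file is tracking.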
