Chapter 2 Earthquake Prediction Based on LM-BP Neural Network

Feiyan Zhou and Xiaofeng Zhu

Abstract Earthquake prediction is an important task in geophysical research and a recognized worldwide scientific challenge. Because many interrelated factors give rise to earthquakes, it is difficult to establish a theoretical physical model of the process. Given the strong fault tolerance and fast prediction speed of neural networks, they can be applied to earthquake prediction with good results. Since the predisposing factors of earthquakes are varied and magnitude prediction is difficult, this chapter applies the LM-BP algorithm to earthquake prediction. Simulation results show that the proposed method converges quickly, predicts with high precision, and is an effective tool for earthquake prediction.

Keywords: BP neural network · LM algorithm · Earthquake prediction

F. Zhou (✉) · X. Zhu
Beijing Institute of Graphic Communication, Xinghua Street 25, 102600 Beijing, China
e-mail: [email protected]
X. Zhu
e-mail: [email protected]

2.1 Introduction

Earthquake prediction is an important task in geophysical research: accurate prediction helps people take timely, effective measures and reduces casualties and economic losses. Many interrelated factors give rise to earthquakes. The complexity of earthquake origins, the nonlinearity of the preparation process, and the difficulty of understanding the problem make it very difficult to establish a more complete theoretical physics

X. Liu and Y. Ye (eds.), Proceedings of the 9th International Symposium on Linear Drives for Industry Applications, Volume 1, Lecture Notes in Electrical Engineering 270, DOI: 10.1007/978-3-642-40618-8_2, © Springer-Verlag Berlin Heidelberg 2014


model. A precise description of the relevant physical parameters can rely only on analyzing, summarizing, and reasoning from observations of related phenomena [1]. Earthquake prediction is a highly nonlinear problem: it uses scientific methods to make preliminary estimates of the time, location, and intensity of future earthquakes. At present, seismologists at home and abroad mainly use three categories of prediction methods: (1) nondeterministic mathematical methods, (2) physical methods, and (3) newly developed methods of systems science [2]. An artificial neural network is a highly adaptive, nonlinear dynamic system. By learning from a large number of samples, it can extract the causal relationships implicit in them and analyze complicated nonlinear systems, so it is very effective to apply neural networks to earthquake prediction. The BP neural network is a widely used network form in prediction systems. However, the BP algorithm, which is based on gradient descent, suffers from local minima, slow convergence, and related defects. This chapter uses the LM-BP algorithm for earthquake prediction. Simulation results show that the algorithm converges quickly, achieves high precision, and predicts well.

2.2 BP Neural Network Based on the LM Algorithm

The BP neural network, also called the error back-propagation neural network, is a multilayer feedforward network that uses a nonlinear differentiable function to train its weights. A typical BP network comprises an input layer, a hidden layer, and an output layer, each made up of one or more neurons. There are no connections between neurons in the same layer, but there are forward connections between neurons in adjacent layers. In a BP network, signals propagate forward while errors propagate backward: if the outputs of the output layer are not as expected, the error signal returns along the original connection paths, and the network modifies the connection weights of each layer according to the back-propagated signal until the error reaches the required accuracy.

The BP network is usually trained with the BP algorithm. But when standard gradient descent, or gradient descent with momentum, is applied to practical problems, learning is often too slow, and it is easy to fall into a local minimum. Many improved, more efficient BP algorithms have therefore been proposed. These fast algorithms fall mainly into two kinds. One is heuristic learning algorithms, including gradient descent with a variable learning rate, gradient descent with momentum and an adaptive learning rate, the resilient BP training method, and so on. The other is training algorithms based on optimization theory, including the conjugate gradient algorithm, quasi-Newton methods, the Levenberg–Marquardt (LM) algorithm [3], and so on.
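The layer structure and error back-propagation just described can be sketched as a minimal single-hidden-layer network. This is an illustrative NumPy sketch, not the chapter's code; the layer sizes, learning rate, and initialization scale are arbitrary assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class BPNetwork:
    """Minimal single-hidden-layer BP network trained by plain gradient descent."""

    def __init__(self, n_in, n_hidden, n_out, lr=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.standard_normal((n_hidden, n_in)) * 0.5
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.standard_normal((n_out, n_hidden)) * 0.5
        self.b2 = np.zeros(n_out)
        self.lr = lr

    def forward(self, x):
        # Signal propagates forward through hidden and output layers.
        self.h = sigmoid(self.w1 @ x + self.b1)
        self.y = sigmoid(self.w2 @ self.h + self.b2)
        return self.y

    def backward(self, x, t):
        # Error signal propagates backward from the output layer,
        # and each layer's weights are corrected accordingly.
        delta2 = (self.y - t) * self.y * (1 - self.y)
        delta1 = (self.w2.T @ delta2) * self.h * (1 - self.h)
        self.w2 -= self.lr * np.outer(delta2, self.h)
        self.b2 -= self.lr * delta2
        self.w1 -= self.lr * np.outer(delta1, x)
        self.b1 -= self.lr * delta1
```

Repeated forward/backward passes on the training samples drive the squared output error down, which is the behavior the improved algorithms below accelerate.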


The LM algorithm is a nonlinear optimization method between Newton's method and gradient descent. It is insensitive to over-parameterization and handles redundant parameters effectively, which greatly reduces the chance of the objective function falling into a local minimum [4]. The key to the LM algorithm is to linearly approximate the model function in a neighborhood of the unknown parameter vector, ignoring derivatives of second and higher order. The problem thus becomes a linear least-squares problem, which greatly reduces the amount of computation. The LM algorithm requires the partial derivative of the error with respect to each parameter to be estimated. This paper applies a neural network based on the LM algorithm to earthquake prediction. The LM algorithm is specified as follows [5, 6].

Let $M$ be the total number of layers in the BP neural network, and let there be $P$ groups of training samples $(X_k, T_k)$, $k = 1, 2, \ldots, P$. Here $X_k = (x_{1k}, x_{2k}, \ldots, x_{Nk})$ is the input vector of the $k$th training sample, and $N$ is the number of input-layer neurons. Let $S^m$ be the number of neurons in the $m$th layer, $m = 1, \ldots, M$; the number of input-layer neurons is recorded as $S^0$. $Y_k = (y_{1k}, y_{2k}, \ldots, y_{S^M k})$ is the actual output vector of the $k$th training sample, and $T_k = (t_{1k}, t_{2k}, \ldots, t_{S^M k})$ is the expected output vector. Let $w^m$ be the matrix of all weights in the $m$th layer and $b^m$ the column vector of biases in the $m$th layer. Their components are, respectively,

$$w^m = \left(w^m_1, \ldots, w^m_{S^m}\right), \qquad (2.1)$$

where $w^m_i = (w^m_{1i}, \ldots, w^m_{S^{m-1} i})^T$ for $i = 1, \ldots, S^m$, and

$$b^m = \left(b^m_1, b^m_2, \ldots, b^m_{S^m}\right)^T. \qquad (2.2)$$

To avoid the standard BP algorithm's tendency to learn new samples while forgetting old ones, we use batch (group) training here. The purpose of BP training is to minimize the sum of squared errors between the desired output vectors of the training samples and the actual output vectors of the network. This sum of squared errors is the objective function to be optimized:

$$E^T E = \sum_{k=1}^{P} (T_k - Y_k)^T (T_k - Y_k) = \sum_{k=1}^{P} E_k^T E_k = \sum_{k=1}^{P} \sum_{j=1}^{S^M} E_{jk}^2, \qquad (2.3)$$

where

$$E_{jk} = t_{jk} - y_{jk}, \qquad (2.4)$$

$$E^T = \left[E_{11}, E_{21}, \ldots, E_{S^M 1}, E_{12}, E_{22}, \ldots, E_{S^M 2}, \ldots, E_{1P}, \ldots, E_{S^M P}\right]. \qquad (2.5)$$
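The objective of (2.3)–(2.5) amounts to stacking the residuals $E_{jk}$ over all outputs and samples into one long vector and dotting it with itself. A small sketch (the example target/output arrays are hypothetical):

```python
import numpy as np

def sse_objective(targets, outputs):
    """Sum-of-squares objective E^T E of (2.3): the residuals
    E_jk = t_jk - y_jk over all S^M outputs and P samples are
    stacked into one vector E, as in (2.5), and dotted with itself."""
    E = (np.asarray(targets) - np.asarray(outputs)).ravel()
    return E @ E

# Hypothetical desired (T) and actual (Y) outputs for P = 2 samples,
# with two output neurons each:
T = np.array([[0.1, 0.9], [0.4, 0.6]])
Y = np.array([[0.2, 0.7], [0.4, 0.5]])
# sse_objective(T, Y) ≈ 0.06
```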

The weight vector $W$ is

$$W^T = \left[w^1, w^2, \ldots, w^M\right], \qquad (2.6)$$


in which

$$w^i = \left[w^i_{11}, w^i_{12}, \ldots, w^i_{1 S^{i-1}}, w^i_{21}, \ldots, w^i_{2 S^{i-1}}, \ldots, w^i_{S^i 1}, w^i_{S^i 2}, \ldots, w^i_{S^i S^{i-1}}, b^i_1, b^i_2, \ldots, b^i_{S^i}\right], \qquad (2.7)$$

and the Jacobian is

$$J(W) = \left[E'_{11}, E'_{21}, \ldots, E'_{S^M 1}, E'_{12}, \ldots, E'_{S^M 2}, \ldots, E'_{1P}, \ldots, E'_{S^M P}\right]^T, \qquad (2.8)$$

$$E'_{ij} = \left(\frac{\partial E_{ij}}{\partial w^1}, \frac{\partial E_{ij}}{\partial w^2}, \ldots, \frac{\partial E_{ij}}{\partial w^M}\right), \quad i = 1, \ldots, S^M, \; j = 1, \ldots, P. \qquad (2.9)$$
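The flattening of every layer's weights and biases into the single parameter vector of (2.6)–(2.7) can be sketched as a small helper (schematic, not code from the chapter):

```python
import numpy as np

def flatten_params(weights, biases):
    """Stack each layer's weight matrix (row by row, as in (2.7)) and then
    its bias vector into the single parameter vector W of (2.6).
    `weights[i]` has shape (S^i, S^{i-1}); `biases[i]` has length S^i."""
    parts = []
    for w, b in zip(weights, biases):
        parts.append(np.asarray(w, dtype=float).ravel())  # w_11, w_12, ..., w_{S^i S^{i-1}}
        parts.append(np.asarray(b, dtype=float).ravel())  # b_1, ..., b_{S^i}
    return np.concatenate(parts)
```

This single-vector view is what lets the LM update below treat the whole network as one least-squares parameter vector.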

The iterative formula of the LM algorithm is

$$w_{n+1} = w_n - \left[J(w_n)^T J(w_n) + \mu_n I\right]^{-1} J(w_n)^T E(w_n), \qquad (2.10)$$

in which $w_{n+1}$ is the network weight vector at the $(n+1)$th iteration, $w_n$ is the weight vector at the $n$th iteration, $I$ is the identity matrix, and $\mu_n \geq 0$ is an adjustment factor. If $\mu_n$ is very small and approaches zero, the LM algorithm becomes a quasi-Newton method with an approximate Hessian matrix; if $\mu_n$ is very large, the algorithm becomes gradient descent with a small step length. Since Newton's method usually converges faster and more accurately near an error minimum, the aim is to switch over to Newton's method as soon as possible. After the LM neural network is initialized, it reads the input data for training. If the network reaches the preset error after the first pass, training stops; otherwise, the network iterates with (2.10), updating the weights and biases until the preset error is reached.
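The iteration (2.10), together with the μ adjustment just described, might be sketched as follows. The ×10/÷10 damping schedule is a common convention assumed here, not specified in the chapter; `residual_fn` and `jacobian_fn` are placeholders for the network's error and Jacobian computations:

```python
import numpy as np

def lm_step(w, residual_fn, jacobian_fn, mu):
    """One Levenberg-Marquardt update per (2.10):
    w_{n+1} = w_n - (J^T J + mu I)^{-1} J^T E."""
    E = residual_fn(w)
    J = jacobian_fn(w)
    A = J.T @ J + mu * np.eye(w.size)
    return w - np.linalg.solve(A, J.T @ E)

def lm_fit(w, residual_fn, jacobian_fn, mu=1e-2, iters=50):
    """Iterate (2.10) with a simple damping schedule (an assumption, not
    from the chapter): shrink mu after an improving step, moving toward
    Newton's method; grow it after a failed step, toward gradient descent."""
    E = residual_fn(w)
    cost = E @ E
    for _ in range(iters):
        w_new = lm_step(w, residual_fn, jacobian_fn, mu)
        E_new = residual_fn(w_new)
        c_new = E_new @ E_new
        if c_new < cost:
            w, cost, mu = w_new, c_new, mu / 10
        else:
            mu *= 10
    return w
```

On a simple least-squares problem this converges in a handful of iterations, illustrating the fast convergence near the minimum that motivates switching toward Newton's method.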

2.3 Analysis of Experimental Results

As in [1], some seismic data from southwestern China, where earthquakes are common, are used as the sample source to verify the role of the LM-BP algorithm in earthquake prediction. From these data we extract seven predictors and the actual magnitude as the input and target vectors, respectively. The predictors are: the cumulative frequency of earthquakes of magnitude three or greater within half a year, the accumulated energy release within six months, the b-value, the number of unusual earthquake swarms, the number of seismic stripes, the active period, and the magnitude of the relevant seismic area. A total of 17 groups of samples were collected; after normalization they are as shown in Table 2.1. The first


Table 2.1 Seventeen groups of learning earthquake cases

Cumulative  Energy   b-value  Number     Number      Active  Magnitude of   Actual
frequency   release           of swarms  of stripes  period  relevant area  magnitude
0           0        0.62     0          0           0       0              0
0.3915      0.4741   0.77     0.5        0.5         1       0.3158         0.5313
0.2835      0.5402   0.68     0          0.5         1       0.3158         0.5938
0.6210      1.0000   0.63     1          0.5         1       1.0000         0.9375
0.4185      0.4183   0.67     0.5        0           1       0.7368         0.4375
0.2160      0.4948   0.71     0          0           1       0.2632         0.5000
0.9990      0.0383   0.75     0.5        1           1       0.9474         1.0000
0.5805      0.4925   0.71     0          0           0       0.3684         0.3750
0.0810      0.0692   0.76     0          0           0       0.0526         0.3125
0.3915      0.1230   0.98     0.5        0           0       0.8974         0.6563
0.0270      0.0742   0.62     0          0           0       0.2105         0.1875
0.1755      0.3667   0.77     0          0.5         1       0.7368         0.4062
0.4320      0.3790   0.68     0.5        0           1       0.2632         0.4375
0.4995      0.4347   0.63     0          0           1       0.6842         0.5938
0.6885      0.5842   0.67     0.5        0.5         1       0.4211         0.6250
0.5400      0.0838   0.71     0.5        0.5         1       0.5789         0.7187
0.1620      0.2565   0.75     0          0           1       0.4737         0.3750

ten groups of data are training samples for the network, and the last seven groups are prediction samples used to test its performance.

The numbers of input and output neurons of a BP network are determined by the problem to be solved and the way the data are represented. Given the seven predictors and the magnitude as the prediction target, we set seven input neurons and one output neuron in a BP network with a single hidden layer. The number of hidden-layer neurons is difficult to determine, and no universal formula exists to date. We therefore let the number of hidden neurons increase step by step from 3 to 20, using a for loop to train a network for each candidate size, and select the number of hidden neurons that minimizes the error; the network is then reinitialized at random and trained again. Following the general design principles of BP networks, the transfer function of the hidden layer is set to the sigmoid tangent (tan-sigmoid) function; since the outputs have been normalized to the interval [0, 1], the transfer function of the output layer is set to the sigmoid logarithmic (log-sigmoid) function. After training, another set of data is used to test the network: the network outputs are obtained with the simulation function, and the errors between the outputs and the actual measured values are checked against the requirement.

For the same samples, the LM-BP algorithm is compared with the standard BP algorithm. The number of training epochs is set to 5,000 and the training goal to 0.001. The optimal number of hidden-layer nodes is 15 for the standard BP algorithm and 16 for the LM-BP algorithm. After training, the results are shown in Figs. 2.1, 2.2, 2.3, 2.4 and Table 2.2.

Fig. 2.1 LM-BP data normalized error curve

Fig. 2.2 BP data normalized error curve
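The hidden-layer size search described above (training once per candidate size from 3 to 20 and keeping the size with the smallest error) might look like the following sketch; `train_fn` and `error_fn` are hypothetical stand-ins for the actual training and evaluation routines:

```python
import numpy as np

def select_hidden_size(train_fn, error_fn, sizes=range(3, 21), seed=0):
    """Try each candidate hidden-layer size in turn, train a network for
    it, and keep the size whose test error is smallest, as the text
    describes. `train_fn(n_hidden, seed)` returns a trained network;
    `error_fn(net)` evaluates it on held-out data."""
    best_size, best_err = None, np.inf
    for n_hidden in sizes:
        net = train_fn(n_hidden, seed)   # train a fresh network of this size
        err = error_fn(net)              # evaluate it on the test data
        if err < best_err:
            best_size, best_err = n_hidden, err
    return best_size, best_err
```

The selected size is then used for a final randomly reinitialized training run, as the text notes.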

Fig. 2.3 The training result of LM-BP (performance 0.000613787 after 7 epochs; goal 0.001)

Fig. 2.4 The training result of BP (performance 0.04159 after 1000 epochs; goal 0.001)

Table 2.2 Forecast comparison of the LM-BP algorithm with the traditional BP

Sample groups  Actual magnitude  BP      LM-BP
1              4.4               4.2249  4.3285
2              5.1               5.5672  5.5459
3              5.2               6.6404  5.5130
4              5.7               6.2090  5.9522
5              5.8               6.6728  6.2867
6              6.1               6.5243  6.6417
7              5.0               5.6208  5.1124
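From the Table 2.2 values, the accuracy of the two methods can be compared directly, for example by mean absolute error:

```python
import numpy as np

# Values transcribed from Table 2.2.
actual = np.array([4.4, 5.1, 5.2, 5.7, 5.8, 6.1, 5.0])
bp     = np.array([4.2249, 5.5672, 6.6404, 6.2090, 6.6728, 6.5243, 5.6208])
lm_bp  = np.array([4.3285, 5.5459, 5.5130, 5.9522, 6.2867, 6.6417, 5.1124])

# Mean absolute prediction error of each method over the seven test samples.
mae_bp = np.abs(bp - actual).mean()    # ≈ 0.64
mae_lm = np.abs(lm_bp - actual).mean() # ≈ 0.32
```

On these seven prediction samples the LM-BP network's mean absolute error (about 0.32 magnitude units) is roughly half that of standard BP (about 0.64), and its prediction is closer to the actual magnitude on six of the seven samples.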


2.4 Conclusion

This paper uses the LM-BP neural network to address the problems of the standard BP algorithm, namely its slow convergence and its tendency to fall into local minima. Experimental results show that the LM algorithm converges quickly and has good predictive effect and high accuracy.

Acknowledgments Institute Level Key Projects Funded by Beijing Institute of Graphic Communication (Ea201231); Funding Project for Academic Human Resources Development in Institutions of Higher Learning Under the Jurisdiction of Beijing Municipality (PHR201107145); Scientific Research Common Program of Beijing Municipal Commission of Education of China (KM201210015011).

References

1. Fei Sike Technology R&D Center (2005) Neural network theory and MATLAB7 implementation. Electronic Industry Press, Beijing, p 271
2. Han X-F, Pan C-Y, Luo C-J (2012) Application of generalized regression neural network based on genetic algorithm in earthquake prediction. North China Earthq Sci 30(1):48–49
3. Wang Y-S, Chu F-L, He Y-Y et al (2004) New and rapidly converging BP neural network algorithm. J Agr Mach 35(6):1–2
4. Zhang H-Y, Geng Z (2009) Novel interpretation for Levenberg–Marquardt algorithm. Comput Eng Appl 45(19):5–6
5. Zeng F-X, Li Q-Y (2008) Application of BP neural network-based LM algorithm to dam monitoring data processing. Hydropower Automation Dam Monit 32(5):72–74
6. Kishore Kumar R, Sandesh S, Shankar K (2007) Parametric identification of nonlinear dynamic systems using combined Levenberg–Marquardt and genetic algorithm. Int J Struct Stab Dyn 7(4):719
