Commun. Theor. Phys. 60 (2013) 189–193

Vol. 60, No. 2, August 15, 2013

Finite-Time Stability of Fractional-Order Neural Networks with Delay∗

WU Ran-Chao,1,† HEI Xin-Dong,1 and CHEN Li-Ping1,2

1 School of Mathematics, Anhui University, Hefei 230039, China
2 School of Automation, Chongqing University, Chongqing 400044, China

(Received December 12, 2012; revised manuscript received February 18, 2013)

Abstract  Finite-time stability of a class of fractional-order neural networks is investigated in this paper. By the Laplace transform, the generalized Gronwall inequality, and estimates of Mittag–Leffler functions, sufficient conditions are presented to ensure the finite-time stability of such neural models with the Caputo fractional derivatives. Furthermore, results about the asymptotic stability of fractional-order neural models are also obtained.

PACS numbers: 05.04.Jn, 05.04.Vx, 05.04.Xt

Key words: neural networks, fractional-order, finite-time stability, Gronwall inequality

1 Introduction

Fractional calculus, a generalization of integer-order differentiation and integration, can be traced back to the correspondence between Leibniz and L'Hospital in 1695. Due to the lack of application background and its complexity, it did not attract much attention for a long time. Recently, however, it has proved to be a valuable tool for modeling phenomena in various fields of engineering, physics, and economics, such as dielectric polarization, electromagnetic waves, viscoelastic systems, heat conduction, biology, and finance.[1−3] It has now gained increasing interest from researchers in many areas and has become one of the central subjects of study. Since the fractional-order derivative is nonlocal and has a weakly singular kernel, it provides an excellent instrument for describing the memory and hereditary properties of dynamical processes.

Fractional calculus has recently been incorporated into artificial neural networks, mainly because of its infinite memory. In Ref. [4] it was pointed out that fractional derivatives provide neurons with a fundamental and general computational ability that can contribute to efficient information processing, stimulus anticipation, and frequency-independent phase shifts of oscillatory neuronal firing. In Ref. [5] it was suggested that the oculomotor integrator, which converts eye velocity into eye position commands, may be of fractional order. It was demonstrated in Ref. [6] that neural network approximation taken at the fractional level results in higher rates of approximation.

It was also noted that fractional-order recurrent neural networks might be expected to play an important role in parameter estimation. Therefore, the incorporation of memory terms (a fractional derivative or integral operator) into neural network models is an important improvement, and it is necessary to study fractional-order neural networks.[7]

As is well known, practical applications depend heavily on the dynamical behaviors of neural models, such as Lyapunov stability and asymptotic stability. The stability of neural networks has therefore become one of the most active areas of research and has been widely investigated. Many stability results for integer-order neural networks have been obtained in the past few decades; see Refs. [8–10] and references therein. Recently, some important and interesting results on the stability of fractional-order artificial neural networks have also been obtained. For instance, stability and multi-stability, bifurcations, and chaos of fractional-order neural networks of Hopfield type were investigated in Ref. [7]. Chaotic behaviors in noninteger-order cellular neural networks were discussed in Ref. [11]. In Ref. [12], a fractional-order Hopfield neural model was proposed and its stability was investigated by means of an energy-like function. A fractional-order four-cell cellular neural network was proposed in Ref. [13] and its complex dynamical behaviors were investigated by means of numerical simulations. Yu et al. investigated α-stability and α-synchronization for fractional-order neural networks.[14]

Neural networks are said to be finite-time stable if the states do not exceed some bound within a prescribed fixed time interval whenever the initial states satisfy a specified bound. Classical Lyapunov stability concepts require that the systems operate over an infinite time interval, while all real neural systems operate over finite time intervals.

∗Supported by the Specialized Research Fund for the Doctoral Program of Higher Education of China under Grant No. 20093401120001, the Natural Science Foundation of Anhui Province under Grant No. 11040606M12, the Natural Science Foundation of Anhui Education Bureau under Grant No. KJ2010A035, and the 211 Project of Anhui University under Grant No. KJJQ1102
†Corresponding author, E-mail: [email protected]
© 2013 Chinese Physical Society and IOP Publishing Ltd

http://www.iop.org/EJ/journal/ctp http://ctp.itp.ac.cn


Furthermore, classical Lyapunov stability is mainly concerned with asymptotic behavior and seldom with specified bounds on the states. In many applications, however, it is necessary to keep the states within given bounds during a specific time interval; in such cases one cares more about the finite-time behavior of the system than about its asymptotic behavior. The concept of finite-time stability was therefore put forward,[15−16] dealing with systems subject to prescribed bounds over finite time intervals. Since then, finite-time stability has been intensively investigated; see, for example, Refs. [17–19]. Finite-time stability has also been studied for fractional-order systems, see, for instance, Refs. [20–22], where sufficient conditions were derived for the finite-time stability of linear fractional-order systems.

Motivated by the above discussion, the finite-time stability of fractional-order cellular neural networks is studied in this paper. By the Laplace transform, the generalized Gronwall inequality, and estimates of Mittag–Leffler functions, sufficient conditions ensuring finite-time stability are established. Furthermore, results on asymptotic stability are also derived. The main contribution is a set of testable conditions for the finite-time stability of fractional-order networks, which should be helpful in the design and application of such networks. To the best of our knowledge, results on this issue are few. Dynamical behaviors of complex networks have also been intensively investigated, see, for example, Refs. [23–25]; similar research on finite-time behavior could be carried out for complex networks as well.

The rest of the paper is organized as follows. In Sec. 2, some necessary definitions and lemmas are presented and the fractional-order neural network model is given. Sufficient criteria ensuring finite-time stability of the model are derived in Sec. 3. A numerical example is presented in Sec. 4.

2 Preliminaries

In this section some definitions and lemmas about fractional calculus are recalled.

Definition 1  The fractional integral[1−2] (Riemann–Liouville integral) $D_{t_0,t}^{-\alpha}$ with fractional order α ∈ ℝ⁺ of a function x(t) is defined as
$$D_{t_0,t}^{-\alpha} x(t) = \frac{1}{\Gamma(\alpha)} \int_{t_0}^{t} (t-\tau)^{\alpha-1} x(\tau)\, d\tau, \quad (1)$$
where Γ(·) is the gamma function, $\Gamma(\tau) = \int_0^{\infty} t^{\tau-1} e^{-t}\, dt$.

Definition 2  The Riemann–Liouville derivative[1−2] of fractional order α of a function x(t) is given as
$${}_{\mathrm{RL}}D_{t_0,t}^{\alpha} x(t) = \frac{1}{\Gamma(n-\alpha)} \frac{d^{n}}{dt^{n}} \int_{t_0}^{t} (t-\tau)^{n-\alpha-1} x(\tau)\, d\tau, \quad (2)$$
where n − 1 < α < n ∈ ℤ⁺.


Definition 3  The Caputo derivative[1−2] of fractional order α of a function x(t) is defined as
$${}_{C}D_{t_0,t}^{\alpha} x(t) = \frac{1}{\Gamma(n-\alpha)} \int_{t_0}^{t} (t-\tau)^{n-\alpha-1} x^{(n)}(\tau)\, d\tau, \quad (3)$$
where n − 1 < α < n ∈ ℤ⁺.
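As a brief numerical aside (ours, not part of the paper), the Caputo derivative of Definition 3 can be approximated by a Grünwald–Letnikov sum when the relevant initial values vanish. The following Python sketch, with all function names our own and assuming SciPy is available, checks the closed form $D^{1/2} t^{2} = \frac{\Gamma(3)}{\Gamma(2.5)} t^{3/2}$.

```python
import numpy as np
from scipy.special import gamma

def gl_derivative(x, t, alpha, h=1e-3):
    """Gruenwald-Letnikov approximation of the order-alpha derivative of x at t.

    Coincides with the Caputo derivative of Definition 3 when the relevant
    initial values vanish (here x(0) = 0), up to O(h) discretization error.
    """
    n = int(round(t / h))
    c, acc = 1.0, 0.0
    for j in range(n + 1):
        acc += c * x(t - j * h)
        c *= 1.0 - (alpha + 1.0) / (j + 1)  # c_{j+1} = c_j * (1 - (alpha+1)/(j+1))
    return acc / h**alpha

# Closed form for comparison: D^{1/2} t^2 = Gamma(3)/Gamma(2.5) * t^{3/2}
alpha, t = 0.5, 1.0
print(gl_derivative(lambda s: s**2, t, alpha))  # approximately 1.5045
print(gamma(3.0) / gamma(2.5) * t**1.5)         # 1.50451...
```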

Definition 4  The Mittag–Leffler function[1−2] is defined as
$$E_{\alpha}(z) = \sum_{k=0}^{\infty} \frac{z^{k}}{\Gamma(k\alpha+1)}, \quad (4)$$
where α > 0 and z ∈ ℂ. The two-parameter Mittag–Leffler function has the form
$$E_{\alpha,\beta}(z) = \sum_{k=0}^{\infty} \frac{z^{k}}{\Gamma(k\alpha+\beta)}, \quad (5)$$
where α > 0, β > 0, and z ∈ ℂ.
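Since the Mittag–Leffler function is used throughout the estimates below, a minimal truncated-series evaluator (our sketch, not the paper's; for large arguments dedicated algorithms should be preferred) may be useful:

```python
import numpy as np
from scipy.special import gamma

def mittag_leffler(z, alpha, beta=1.0, n_terms=80):
    """Evaluate E_{alpha,beta}(z) by truncating the series (5).

    Adequate for moderate |z|; for large arguments the truncated series
    loses accuracy and dedicated algorithms should be used instead.
    """
    k = np.arange(n_terms)
    return np.sum(np.asarray(z, dtype=complex)**k / gamma(alpha * k + beta))

# Sanity check of the special case E_{1,1}(z) = e^z noted below
print(mittag_leffler(1.0, 1.0).real)  # ~2.718281...
print(np.exp(1.0))
```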

When β = 1, one has $E_{\alpha}(z) = E_{\alpha,1}(z)$; further, $E_{1,1}(z) = e^{z}$. Moreover, the Laplace transform of the Mittag–Leffler function is
$$\mathcal{L}\{t^{\beta-1} E_{\alpha,\beta}(-\lambda t^{\alpha})\} = \frac{s^{\alpha-\beta}}{s^{\alpha}+\lambda}, \quad \Re(s) > |\lambda|^{1/\alpha}, \quad (6)$$
where t and s are the variables in the time domain and the Laplace domain, respectively, and $\mathcal{L}\{\cdot\}$ stands for the Laplace transform.

Lemma 1[1]  If 0 < α < 2, β is an arbitrary real number, μ satisfies πα/2 < μ < min{π, πα}, and $C_1$, $C_2$ are real constants, then
$$|E_{\alpha,\beta}(z)| \le C_1 (1+|z|)^{(1-\beta)/\alpha} \exp\big(\Re(z^{1/\alpha})\big) + \frac{C_2}{1+|z|}, \quad (7)$$
where |arg(z)| ≤ μ and |z| ≥ 0.

Lemma 2[26]  The following properties hold.
(i) There exist constants $M_1, M_2 \ge 1$ such that for any 0 < α < 1,
$$\|E_{\alpha,1}(A t^{\alpha})\| \le M_1 \|e^{At}\|, \quad \|E_{\alpha,\alpha}(A t^{\alpha})\| \le M_2 \|e^{At}\|, \quad (8)$$
where A denotes a matrix and ‖·‖ denotes any vector or induced matrix norm.
(ii) If α ≥ 1, then for β = 1, 2, α,
$$\|E_{\alpha,\beta}(A t^{\alpha})\| \le \|e^{At}\|. \quad (9)$$

From Ref. [26], if A is a diagonal stability matrix, then there exists a constant N > 0 such that for t ≥ 0,
$$\|E_{\alpha,\beta}(A t^{\alpha})\| \le N e^{-\omega t}, \quad 0 < \alpha < 1; \qquad \|E_{\alpha,\beta}(A t^{\alpha})\| \le e^{-\omega t}, \quad 1 \le \alpha < 2, \quad (10)$$
where −ω is the largest eigenvalue of the diagonal matrix A (so ω > 0 for a stability matrix).

Lemma 3[27]  Suppose α > 0, a(t) is a nonnegative function locally integrable on 0 ≤ t < T (some T ≤ +∞), g(t) is a nonnegative, nondecreasing continuous function defined on 0 ≤ t < T, g(t) ≤ M (constant), and u(t) is nonnegative and locally integrable on 0 ≤ t < T with
$$u(t) \le a(t) + g(t) \int_{0}^{t} (t-s)^{\alpha-1} u(s)\, ds$$
on this interval. Then
$$u(t) \le a(t) + \int_{0}^{t} \Big[ \sum_{n=1}^{\infty} \frac{(g(t)\Gamma(\alpha))^{n}}{\Gamma(n\alpha)} (t-s)^{n\alpha-1} \Big] a(s)\, ds.$$
Moreover, if a(t) is a nondecreasing function on [0, T), then
$$u(t) \le a(t) E_{\alpha,1}\big(g(t)\Gamma(\alpha) t^{\alpha}\big). \quad (11)$$
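As an illustration of (11) (ours, not part of the paper), for constant a(t) = a the integral inequality of Lemma 3 taken with equality is a Volterra integral equation whose exact solution is $a\, E_{\alpha}(g\,\Gamma(\alpha)\, t^{\alpha})$, so the bound is sharp in that case. The following Python sketch, with arbitrary test parameters of our choosing, solves the equation by a simple product-rectangle rule and compares with the closed form.

```python
import numpy as np
from scipy.special import gamma

def ml(z, a, n_terms=80):
    # truncated Mittag-Leffler series E_a(z)
    k = np.arange(n_terms)
    return np.sum(z**k / gamma(a * k + 1.0))

alpha, g, a0, h, n = 1.5, 0.5, 1.0, 0.01, 500      # arbitrary test values
t = h * np.arange(n + 1)
u = np.zeros(n + 1)
u[0] = a0
for k in range(1, n + 1):
    # product-rectangle rule for int_0^{t_k} (t_k - s)^(alpha-1) u(s) ds
    w = ((t[k] - t[:k])**alpha - (t[k] - t[1:k + 1])**alpha) / alpha
    u[k] = a0 + g * np.dot(w, u[:k])

# The two printed values should agree to within the O(h) quadrature error (~1%)
print(u[-1])
print(a0 * ml(g * gamma(alpha) * t[-1]**alpha, alpha))
```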


Based on the definitions of the fractional integral and derivative, the integer-order derivative of a function is related only to its nearby points, while the fractional derivative is related to all of the function's history. That is, the next state of a system depends not only upon its current state but also upon its history starting from the initial time. As a result, a model described by fractional-order equations possesses memory, which makes it well suited to describing the states of neurons. It should be noted that the initial conditions for fractional differential equations with Caputo derivatives take the same form as those for integer-order differential equations, which have well-understood physical meanings. Here we deal with the Caputo fractional derivative, and the notation $D^{\alpha}$ is chosen for ${}_{C}D_{0,t}^{\alpha}$.

The dynamic behavior of the fractional-order cellular neural network can be described by the following equations:

$$D^{\alpha} x_i(t) = -c_i x_i(t) + \sum_{j=1}^{n} a_{ij}(t) f_j(x_j(t)) + \sum_{j=1}^{n} b_{ij}(t) g_j(x_j(t-\tau)) + I_i(t),$$
or equivalently,
$$D^{\alpha} x(t) = -C x(t) + A(t) F(x(t)) + B(t) G(x(t-\tau)) + I(t), \quad (12)$$

where i = 1, 2, ..., n, t ≥ 0, 0 < α < 2, n is the number of units in the network, and $x(t) = (x_1(t), \ldots, x_n(t))^{T} \in \mathbb{R}^{n}$ is the state vector at time t; $F(x(t)) = (f_1(x_1(t)), \ldots, f_n(x_n(t)))^{T}$ and $G(x(t)) = (g_1(x_1(t)), \ldots, g_n(x_n(t)))^{T}$, where $f_j$ and $g_j$ denote the activation functions of the neurons; $C = \mathrm{diag}(c_i)$, $A(t) = (a_{ij}(t))$, and $B(t) = (b_{ij}(t))$; $c_i$ represents the rate with which the i-th unit resets its potential to the resting state in isolation when disconnected from the network and external inputs; $a_{ij}(t)$ and $b_{ij}(t)$ are the connection weights of the j-th neuron to the i-th neuron at times t and t − τ, respectively, where τ > 0 is the transmission delay; and $I(t) = (I_1(t), \ldots, I_n(t))^{T}$ is an external bias vector.

The initial conditions are given by vector functions $\phi_l \in C([-\tau, 0], \mathbb{R}^{n})$ with $\phi_l(0) = x^{(l)}(0) = x_l$, l = 0, ..., [α], where $C([-\tau, 0], \mathbb{R}^{n})$ denotes the Banach space of continuous n-dimensional real vector functions defined on the interval [−τ, 0] with the norm $\|\phi\| = \sup_{-\tau \le s \le 0} \|\phi(s)\|$, and [α] is the integer part of α.

Assume that x(t) and y(t) are any two solutions of (12) with different initial conditions $\phi_l$ and $\psi_l$. Let $e(t) = y(t) - x(t) = (e_1(t), \ldots, e_n(t))^{T}$ and $\theta_l = \phi_l - \psi_l$; then
$$D^{\alpha} e(t) = -C e(t) + A(t)\big(F(y(t)) - F(x(t))\big) + B(t)\big(G(y(t-\tau)) - G(x(t-\tau))\big). \quad (13)$$

Definition 5  System (12) is said to be finite-time stable with respect to given positive numbers {δ, ε, T}, ε > δ, if $\|\theta_l\| < \delta$ implies $\|e(t)\| < \varepsilon$ for t ∈ J, where J is the interval $[t_0, t_0 + T)$ and $t_0$ is the initial time of observation.

Definition 6  System (12) is said to be stable if, for any initial values $\theta_l$, there exists ε > 0 such that ‖e(t)‖ ≤ ε for all t > t₀. It is said to be asymptotically stable if, in addition to being stable, ‖e(t)‖ → 0 as t → +∞.

In order to obtain the main results, we make the following assumptions.

(A1) The functions $a_{ij}(t)$ and $b_{ij}(t)$ are continuous and bounded on ℝ⁺, and $c_i > 0$, i, j = 1, ..., n. From (A1), A(t) and B(t) are bounded; let $A = \sup_{t \ge 0} \|A(t)\|$, $B = \sup_{t \ge 0} \|B(t)\|$, and $\gamma \equiv \min_{1 \le i \le n} \{c_i\}$.

(A2) The neuron activation functions F(x) and G(x) are Lipschitz continuous, that is, there exist positive constants F and G such that
$$\|F(u) - F(v)\| \le F \|u - v\|, \quad \|G(u) - G(v)\| \le G \|u - v\|, \quad \forall u, v \in \mathbb{R}^{n}.$$

3 Finite-Time Stability

In this section, sufficient conditions for the finite-time stability of fractional-order neural networks are derived.

Theorem 1  When 1 < α < 2, if Assumptions (A1) and (A2) hold and
$$e^{-\gamma t} (1+t)\, E_{\alpha}\big( (AF + e^{\gamma\tau} BG)\, \Gamma(\alpha)\, t^{\alpha} \big) < \frac{\varepsilon}{\delta}, \quad t \in J, \quad (14)$$
then system (12) is finite-time stable.


Proof  By the Laplace transform and inverse Laplace transform, system (13) is equivalent to
$$e(t) = E_{\alpha}(-C t^{\alpha})\, \theta_0(0) + t\, E_{\alpha,2}(-C t^{\alpha})\, \theta_1(0) + \int_{0}^{t} (t-s)^{\alpha-1} E_{\alpha,\alpha}\big(-C(t-s)^{\alpha}\big) \times \big\{ A(s)[F(y(s)) - F(x(s))] + B(s)[G(y(s-\tau)) - G(x(s-\tau))] \big\}\, ds.$$
From (A1), (A2), Lemma 2, and (10), one has
$$\|e(t)\| \le (\|\theta_0\| + \|\theta_1\| t)\, e^{-\gamma t} + \int_{0}^{t} (t-s)^{\alpha-1} e^{-\gamma(t-s)} \big[ AF \|e(s)\| + BG \|e(s-\tau)\| \big]\, ds.$$

Let $u(t) = \sup_{t-\tau \le \tilde{t} \le t} \|e(\tilde{t})\|\, e^{\gamma \tilde{t}}$; then
$$\|e(s)\|\, e^{\gamma s} \le u(s), \quad \|e(s-\tau)\|\, e^{\gamma(s-\tau)} \le u(s),$$
and therefore
$$e^{\gamma t} \|e(t)\| \le (\|\theta_0\| + \|\theta_1\| t) + \int_{0}^{t} (t-s)^{\alpha-1} \big[ AF + e^{\gamma\tau} BG \big] u(s)\, ds. \quad (15)$$

Note that the right-hand side of the above inequality is nondecreasing; then
$$u(t) \le (\|\theta_0\| + \|\theta_1\| t) + \int_{0}^{t} (t-s)^{\alpha-1} \big[ AF + e^{\gamma\tau} BG \big] u(s)\, ds. \quad (16)$$

Let $a(t) = \|\theta_0\| + \|\theta_1\| t$ and $g(t) = AF + e^{\gamma\tau} BG$. From the generalized Gronwall inequality (11), it follows that when $\|\theta_l\| < \delta$,
$$u(t) \le (\|\theta_0\| + \|\theta_1\| t)\, E_{\alpha}\big( (AF + e^{\gamma\tau} BG) \Gamma(\alpha) t^{\alpha} \big) \le \delta (1+t)\, E_{\alpha}\big( (AF + e^{\gamma\tau} BG) \Gamma(\alpha) t^{\alpha} \big). \quad (17)$$
Since $\|e(t)\| \le e^{-\gamma t} u(t)$, if (14) is satisfied then ‖e(t)‖ < ε for t ∈ J, i.e., system (12) is finite-time stable.

Corollary 1  If 1 < α < 2, (A1) and (A2) hold, and
$$\gamma^{\alpha} > (AF + e^{\gamma\tau} BG)\, \Gamma(\alpha), \quad (18)$$
then system (12) is asymptotically stable.

Proof  From the above analysis and Lemma 1, one has
$$\|e(t)\| \le e^{-\gamma t} (\|\theta_0\| + \|\theta_1\| t)\, E_{\alpha}\big( (AF + e^{\gamma\tau} BG) \Gamma(\alpha) t^{\alpha} \big) \le C_1 (\|\theta_0\| + \|\theta_1\| t) \exp\Big\{ \big( [(AF + e^{\gamma\tau} BG) \Gamma(\alpha)]^{1/\alpha} - \gamma \big) t \Big\} + \frac{C_2 (\|\theta_0\| + \|\theta_1\| t)\, e^{-\gamma t}}{1 + (AF + e^{\gamma\tau} BG) \Gamma(\alpha) t^{\alpha}}.$$
Under (18) the exponent in the first term is negative, so e(t) → 0 as t → +∞. That means system (12) is asymptotically stable.

Corollary 2  When B(t) = 0, if (A1) and (A2) hold and condition (14) or (18) is satisfied, then the fractional-order neural model without delay is finite-time stable or asymptotically stable, respectively.

Remark 1  If α = 1, then from Theorem 1 and Corollary 1 one obtains finite-time and asymptotic stability results for integer-order neural models.

Remark 2  For the case 0 < α < 1, the results also hold true by similar means; that is, under similar conditions, system (12) is also finite-time stable and asymptotically stable.

[Figure] Fig. 1  State trajectory of x1 versus t.

[Figure] Fig. 2  State trajectory of x2 versus t.

4 Numerical Example

In this section, a numerical example is presented to illustrate the results. Consider the following two-state fractional-order delayed Hopfield neural model:
$$D^{\alpha} x_1(t) = -0.7 x_1(t) - 0.1 f(x_1(t)) + 0.05 f(x_2(t)) - 0.1 f(x_1(t-\tau)) + 0.2 f(x_2(t-\tau)),$$
$$D^{\alpha} x_2(t) = -0.6 x_2(t) + 0.05 f(x_1(t)) + 0.1 f(x_2(t)) + 0.2 f(x_1(t-\tau)) + 0.1 f(x_2(t-\tau)), \quad (19)$$
where α = 1.5, τ = 0.1, and the activation functions are $f_j(x) = g_j(x) = f(x) = 0.5(|x+1| - |x-1|)$, j = 1, 2. A sketch of how (19) might be integrated numerically is given below, before the matrix form of the system.
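Since f(0) = 0 and there is no external input, x ≡ 0 is a solution of (19), so a trajectory starting near the origin plays the role of the error e(t). The following Python sketch is ours, not the paper's: the short-memory Grünwald–Letnikov discretization only approximates the Caputo derivative for nonzero initial data, and the initial state is a hypothetical choice with norm below the δ = 0.1 used later in the example.

```python
import numpy as np

def gl_coeffs(alpha, n):
    # c_j = (-1)^j * binom(alpha, j), via the standard recurrence
    c = np.empty(n + 1)
    c[0] = 1.0
    for j in range(1, n + 1):
        c[j] = c[j - 1] * (1.0 - (alpha + 1.0) / j)
    return c

alpha, tau, h, T = 1.5, 0.1, 0.01, 10.0
n = int(round(T / h))
d = int(round(tau / h))                      # delay measured in steps
C = np.array([0.7, 0.6])                     # C = diag(0.7, 0.6)
A = np.array([[-0.1, 0.05], [0.05, 0.1]])
B = np.array([[-0.1, 0.2], [0.2, 0.1]])
f = lambda v: 0.5 * (np.abs(v + 1.0) - np.abs(v - 1.0))

c = gl_coeffs(alpha, n)
x = np.zeros((n + 1, 2))
x[0] = [0.05, -0.05]                         # hypothetical initial state, norm < 0.1
for k in range(1, n + 1):
    x_del = x[0] if k - 1 - d < 0 else x[k - 1 - d]  # constant history on [-tau, 0]
    rhs = -C * x[k - 1] + A @ f(x[k - 1]) + B @ f(x_del)
    # GL step: h^(-alpha) * sum_{j=0..k} c_j x_{k-j} = rhs (explicit in the
    # nonlinearity); solve for x_k using c_0 = 1
    x[k] = h**alpha * rhs - c[1:k + 1] @ x[k - 1::-1]

print(np.abs(x).max())   # expected to stay bounded and decay, cf. Figs. 1 and 2
```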

In matrix form,
$$C = \begin{pmatrix} 0.7 & 0 \\ 0 & 0.6 \end{pmatrix}, \quad A = \begin{pmatrix} -0.1 & 0.05 \\ 0.05 & 0.1 \end{pmatrix}, \quad B = \begin{pmatrix} -0.1 & 0.2 \\ 0.2 & 0.1 \end{pmatrix}.$$
Now F = G = 1, γ = 0.6, A = ‖A‖ = 0.1118, B = ‖B‖ = 0.2236, and Γ(1.5) ≈ 0.8862, so it can be verified that $(AF + e^{\gamma\tau} BG)\Gamma(1.5) \approx 0.3095$. Let δ = 0.1 and ε = 1; from the inequality
$$e^{-\gamma t} (1+t)\, E_{\alpha}\big( (AF + e^{\gamma\tau} BG) \Gamma(\alpha) t^{\alpha} \big) < \frac{\varepsilon}{\delta},$$
the estimated time of finite-time stability is T ≈ 5.8543. Further, $\gamma^{\alpha} \approx 0.4648 > 0.3095$, i.e., condition (18) is also satisfied, so the system is asymptotically stable; see Figs. 1 and 2.
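As a cross-check (ours, not part of the paper), the constants above and condition (14) can be verified in a few lines of Python; `ml` is a truncated-series evaluator of $E_\alpha$ like the one sketched in Sec. 2, adequate here because the argument stays moderate.

```python
import numpy as np
from scipy.special import gamma

def ml(z, a, n_terms=80):
    # truncated Mittag-Leffler series E_a(z)
    k = np.arange(n_terms)
    return np.sum(z**k / gamma(a * k + 1.0))

A_mat = np.array([[-0.1, 0.05], [0.05, 0.1]])
B_mat = np.array([[-0.1, 0.2], [0.2, 0.1]])
alpha, tau, gam, F, G = 1.5, 0.1, 0.6, 1.0, 1.0
delta, eps, T = 0.1, 1.0, 5.8543

A_n = np.linalg.norm(A_mat, 2)               # spectral norm: 0.1118
B_n = np.linalg.norm(B_mat, 2)               # 0.2236
g = (A_n * F + np.exp(gam * tau) * B_n * G) * gamma(alpha)
print(round(g, 4), round(gam**alpha, 4))     # 0.3095 < 0.4648, so (18) holds

# Left-hand side of (14): check it stays below eps/delta = 10 on J = [0, T)
lhs = lambda t: np.exp(-gam * t) * (1.0 + t) * ml(g * t**alpha, alpha)
print(all(lhs(t) < eps / delta for t in np.linspace(0.0, T, 2000)))   # True
```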

References
[1] I. Podlubny, Fractional Differential Equations, Academic Press, San Diego (1999).
[2] P.L. Butzer and U. Westphal, An Introduction to Fractional Calculus, World Scientific, Singapore (2000).
[3] N. Özalp and E. Demirci, Math. Comput. Model. 54 (2011) 1.
[4] B. Lundstrom, M. Higgs, W. Spain, and A. Fairhall, Nat. Neurosci. 11 (2008) 1335.
[5] T. Anastasio, Biol. Cybern. 72 (1994) 69.
[6] G. Anastassiou, Comput. Math. Appl. 64 (2012) 1655.
[7] E. Kaslik and S. Sivasundaram, Neural Networks 32 (2012) 245.
[8] Q. Liu and J. Cao, Neurocomputing 74 (2011) 3494.
[9] R.C. Wu, Nonlinear Anal. RWA 11 (2010) 562.
[10] Z. Mao, H.Y. Zhao, and X.F. Wang, Physica D 234 (2007) 11.
[11] P. Arena, L. Fortuna, and D. Porto, Phys. Rev. E 61 (2000) 776.
[12] A. Boroomand and M. Menhaj, Lecture Notes in Computer Science 5506 (2009) 883.
[13] X. Huang, Z. Zhao, Z. Wang, and Y.X. Li, Neurocomputing 94 (2012) 13.
[14] J. Yu, C. Hu, and H. Jiang, Neural Networks 35 (2012) 82.
[15] G. Kamenkov, J. Appl. Math. Mech. (PMM) 17 (1953) 529.
[16] A. Lebedev, J. Appl. Math. Mech. (PMM) 18 (1954) 75.
[17] L. Lam and L. Weiss, J. Franklin Inst. 298 (1974) 421.
[18] S. Bhat and D. Bernstein, IEEE Trans. Automat. Control 43 (1998) 678.
[19] Q. Liu, J. Cao, and G. Chen, Neural Comput. 22 (2010) 2962.
[20] L.Q. Liu and S.M. Zhong, Inter. J. Inform. Math. Sci. 6 (2010) 237.
[21] M.P. Lazarevic and A.M. Spasic, Math. Comput. Model. 49 (2009) 475.
[22] Z.X. Zhang and W. Jiang, Comput. Math. Appl. 62 (2011) 1284.
[23] M. De la Sen, Fixed Point Theory Appl. (2011) 867932.
[24] H. Ye, J.D. Gao, and Y. Ding, J. Math. Anal. Appl. 328 (2007) 1075.
[25] Y. Tang, H. Gao, W. Zou, and J. Kurths, IEEE Trans. Cybern. 43 (2013) 358.
[26] Y. Tang, Z. Wang, and J. Fang, Chaos 19 (2009) 013112.
[27] Y. Tang, Z. Wang, H. Gao, S. Swift, and J. Kurths, IEEE/ACM Trans. Comput. Biology Bioinform. 9 (2012) 1569.