Note on Superposition of Renewal Processes

Suyono 1) and J.A.M. van der Weide 2)

1) Jurusan Matematika, FMIPA, Universitas Negeri Jakarta, Jakarta
2) Department DIAM, Faculty of Electrical Engineering, Mathematics and Computer Science, Delft University of Technology, The Netherlands
e-mail: [email protected]

JURNAL MATEMATIKA DAN SAINS, AGUSTUS 2010, VOL. 15 NOMOR 2

Received 30 June 2010, accepted for publication 19 July 2010

Abstract

In this paper we discuss distributional properties of a superposition of renewal processes. First, as a special case, we discuss the probability distribution of a superposition of Poisson processes and its generalization, the sum of independent compound Poisson processes; in this case we obtain explicit expressions for the probability distributions. Second, we discuss the statistical moments of a general superposition of renewal processes, as well as the moments of the sum of independent renewal reward processes, a generalization of a superposition of renewal processes. The results are presented in the form of Laplace transforms. Finally, we present the distributions of the recurrence times of a superposition of renewal processes.

Keywords: Poisson process, Compound Poisson process, Renewal process, Renewal reward process, Laplace transform.

1. Introduction

Consider a series system consisting of n independent components. When a component of the system fails, a maintenance action is immediately carried out. We assume that the maintenance action brings the component back to a state as good as new, so we can model the number of failures of a component of the system in a time interval [0, t] as a renewal process. Since the system consists of n independent components, the number of failures of the system in [0, t] equals the sum of the numbers of failures of the individual components in [0, t]. This sum is a superposition of renewal processes if we model the number of failures of each component as a renewal process. Several authors have studied this process. Cox and Smith (1954) proved that, in general, a superposition of renewal processes is not a renewal process; in fact, the superposition of two independent renewal processes is a renewal process if and only if the two renewal processes are homogeneous Poisson processes. However, Drenick (1960) proved that the superposition of an infinite number of independent equilibrium renewal processes is a homogeneous Poisson process. Limit theorems for the superposition of renewal processes were discussed by Grigelionis (1964), and Torab and Kamen (2001) discussed an approximation of the distribution of a superposition of renewal processes.

In this paper we discuss some additional properties of a superposition of renewal processes, including the mean and higher moments. We can also generalize a superposition of renewal processes: in the series system discussed above, if we associate a maintenance cost with each maintenance action for a failed component, then the total maintenance cost of a component is a renewal reward process, and hence the total maintenance cost of the series system is the sum of independent renewal reward processes. Of course, this sum is not itself a renewal reward process; we discuss this sum in this paper as well. The paper is organized as follows. In Section 2 we discuss probability distributions of a superposition of Poisson processes (a special case of a superposition of renewal processes) and its generalization, the sum of independent compound Poisson processes, and present explicit expressions for their probability distributions. In Section 3 we discuss the statistical moments of a superposition of renewal processes and of the sum of independent renewal reward processes. Finally, in this section we also discuss the recurrence times of a superposition of renewal processes.

2. Superposition of Poisson Processes

Since a homogeneous Poisson process is a special case of a renewal process, a superposition of (homogeneous) Poisson processes can be considered as a special case of a superposition of renewal processes. In this section we discuss the probability distribution of a superposition of Poisson processes and its generalization, the sum of independent compound Poisson processes.

Consider n independent Poisson processes {N_i(t), t ≥ 0}, i = 1, 2, ..., n, with rates λ_i, respectively. You can think of N_i(t) as the number of maintenance actions (or failures) in the time interval [0, t] for the type i component of a series system with n independent components. The superposition of the Poisson processes {N_i(t), t ≥ 0} is defined as the sum

N(t) = \sum_{i=1}^{n} N_i(t).

It is well known that the process {N(t), t ≥ 0} is a Poisson process with rate λ = λ_1 + λ_2 + ... + λ_n. This means that the probability distribution of N(t) is given by

Pr(N(t) = k) = e^{-λt} (λt)^k / k!,   k = 0, 1, 2, ...,

see Ross (2000). Define

R_i(t) = \sum_{j=1}^{N_i(t)} Y_{ij},

where for every i the sequence Y_{ij}, j = 1, 2, ..., consists of nonnegative i.i.d. random variables independent of N_i(t). You may consider the random variable Y_{ij} as the maintenance cost of the type i component after the jth failure, so the random variable R_i(t) denotes the total maintenance cost of the type i component of the series system in the time interval [0, t]. The process {R_i(t), t ≥ 0} is known as a compound Poisson process. By conditioning on the number of maintenance actions we can easily show that the Laplace transform of R_i(t) is given by

E[\exp\{-s R_i(t)\}] = \exp\{-λ_i t (1 - E[\exp\{-s Y_{i1}\}])\},

from which the mean and the variance of R_i(t), respectively, are

E[R_i(t)] = λ_i t E[Y_{i1}]   and   Var[R_i(t)] = λ_i t E[Y_{i1}^2].

Now define

R(t) = \sum_{i=1}^{n} R_i(t).

We can interpret the random variable R(t) as the total maintenance cost of the series system with n independent components in the time interval [0, t]. By the independence assumptions, the Laplace transform of R(t) is given by

E[\exp\{-s R(t)\}] = \prod_{i=1}^{n} \exp\{-λ_i t (1 - E[\exp\{-s Y_{i1}\}])\}.

It follows that

E[R(t)] = \sum_{i=1}^{n} λ_i t E[Y_{i1}]   and   Var[R(t)] = \sum_{i=1}^{n} λ_i t E[Y_{i1}^2].
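The superposition law above is easy to spot-check by simulation. The following sketch (not part of the paper; the rates, horizon, and sample size are arbitrary illustrative choices) superposes three independent Poisson processes and compares the sample mean of N(t) and the empirical Pr(N(t) = 0) with the Poisson values for rate λ = λ_1 + λ_2 + λ_3.

```python
import math
import random

def count_renewals(t, draw_gap):
    """Number of arrivals in [0, t] when inter-arrival gaps come from draw_gap()."""
    count, s = 0, draw_gap()
    while s <= t:
        count += 1
        s += draw_gap()
    return count

rng = random.Random(1)
rates, t = [0.5, 1.0, 1.5], 2.0
lam = sum(rates)  # rate of the superposed process

samples = []
for _ in range(20000):
    # Superpose by summing the counts of the individual Poisson processes.
    samples.append(sum(count_renewals(t, lambda r=r: rng.expovariate(r))
                       for r in rates))

mean = sum(samples) / len(samples)
p0 = samples.count(0) / len(samples)
print(mean, lam * t)            # sample mean vs lambda * t
print(p0, math.exp(-lam * t))   # empirical Pr(N(t) = 0) vs e^{-lambda t}
```

With a fixed seed the two printed pairs agree to within Monte Carlo error.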

3. Superposition of Renewal Processes

Before discussing the superposition of renewal processes we recall the definition of a renewal process and present some results about it.

3.1 Renewal Processes

Let X_i, i ≥ 1, be a sequence of i.i.d. nonnegative random variables with common distribution function F. Let S_n = X_1 + X_2 + ... + X_n, n ≥ 1, and S_0 = 0. The counting process {N(t), t ≥ 0} defined by

N(t) = \sup\{n ≥ 0 : S_n ≤ t\} = \sum_{n=1}^{∞} 1_{\{S_n ≤ t\}}

is called a renewal process. The double Laplace transform of N(t) is

\int_0^{∞} E[e^{-vN(t)}] e^{-st} dt = (1 - F^*(s)) / (s[1 - e^{-v} F^*(s)]),   (3.1)

see Suyono (2003), where F^*(s) denotes the Laplace-Stieltjes transform of F, i.e.

F^*(s) = \int_0^{∞} e^{-st} dF(t).

Using the uniqueness theorem for power series (Bartle, 1964), we can derive the Laplace transform of the probability distribution of N(t) from (3.1):

\int_0^{∞} Pr(N(t) = k) e^{-st} dt = (1/s)[1 - F^*(s)] F^*(s)^k,   k = 0, 1, 2, ....

From (3.1) we can also derive the Laplace transforms of the moments of N(t), for example

\int_0^{∞} E[N(t)] e^{-st} dt = F^*(s) / (s[1 - F^*(s)])   (3.2)

and

\int_0^{∞} E[N^2(t)] e^{-st} dt = F^*(s)[1 + F^*(s)] / (s[1 - F^*(s)]^2).   (3.3)

The expected value of N(t) can also be represented as follows:

M(t) ≡ E[N(t)] = \sum_{n=1}^{∞} Pr(S_n ≤ t) = \sum_{n=1}^{∞} F^{(n)}(t),

where F^{(n)}(t) is the n-fold convolution of F with itself, that is,

F^{(1)} = F,   F^{(n)}(x) = \int_0^x F^{(n-1)}(x - u) dF(u).

The function M(t) is called the renewal function. If the distribution of the X_i has a density f, then the random variables S_n also have densities, given by the convolution powers of the density f:

f_{S_n}(u) = f^{(n)}(u),   n ≥ 1,

where

f^{(1)} = f,   f^{(n+1)}(x) = \int_0^x f^{(n)}(x - u) f(u) du.

It follows that

m(t) = \sum_{n=1}^{∞} f^{(n)}(t),   (3.4)

where

M(t) = \int_0^t m(u) du.

Recurrence Times of a Renewal Process

Let t > 0 be fixed. The forward recurrence time (FRT) is defined by

W(t) = S_{N(t)+1} - t,

and the backward recurrence time (BRT) by

W_{-1}(t) = t - S_{N(t)}.

The distribution function of the FRT is given by

Pr(W(t) ≤ w) = F(t + w) - \int_0^t [1 - F(t + w - u)] dM(u),   (3.5)

see Grimmett and Stirzaker (1997). If \bar{G}(w, t) denotes the tail probability or survival function of W(t), i.e. \bar{G}(w, t) = 1 - G(w, t) = Pr(W(t) > w), then it follows from (3.5) that

\bar{G}(w, t) = \bar{F}(w + t) + \int_0^t \bar{F}(w + t - u) dM(u).

In addition, if F has a density f, then

f_{W(t)}(w) = f(w + t) + \int_0^t m(u) f(w + t - u) du.

The following proposition concerns the expected value of a function of W(t).

Proposition 3.1 For a nonnegative bounded Borel function φ,

E[φ(W(t))] = \int_0^{∞} φ(y) [ f(y + t) + \int_0^t m(s) f(y + t - s) ds ] dy.

Proof: Let φ be a nonnegative bounded Borel function. Then

E[φ(W(t))] = E[φ(S_{N(t)+1} - t)]
= \sum_{n=0}^{∞} E[φ(S_{N(t)+1} - t) 1_{\{N(t) = n\}}]
= \sum_{n=0}^{∞} E[φ(S_{n+1} - t) 1_{\{S_n ≤ t ≤ S_{n+1}\}}]
= \int_0^{∞} φ(y) f(y + t) dy + \sum_{n=1}^{∞} E[φ(S_{n+1} - t) 1_{\{S_n ≤ t ≤ S_{n+1}\}}].

Since for n ≥ 1

E[φ(S_{n+1} - t) 1_{\{S_n ≤ t ≤ S_{n+1}\}}]
= \int_0^t [ \int_{t-s}^{∞} φ(s + x - t) f^{(n)}(s) f(x) dx ] ds
= \int_0^t [ \int_0^{∞} φ(y) f^{(n)}(s) f(y + t - s) dy ] ds
= \int_0^{∞} φ(y) [ \int_0^t f^{(n)}(s) f(y + t - s) ds ] dy,

it follows by (3.4) that

E[φ(W(t))] = \int_0^{∞} φ(y) [ f(y + t) + \int_0^t m(s) f(y + t - s) ds ] dy.  ■

The tail probability of the BRT is given by

Pr(W_{-1}(t) > w) = 1 - F(t) + \int_0^{t-w} [1 - F(t - u)] dM(u)  if w ≤ t,  and  Pr(W_{-1}(t) > w) = 0  if w > t,   (3.6)

see also Grimmett and Stirzaker (1997). So the distribution of the BRT W_{-1}(t) is concentrated on [0, t] and has an atom at t with mass

Pr(W_{-1}(t) = t) = \bar{F}(t).

It follows from (3.6) that

Pr(W_{-1}(t) ≤ x) = \int_0^x \bar{F}(u) m(t - u) du,   0 < x < t.   (3.7)
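In the memoryless case the FRT distribution has a simple closed form against which a simulation can be checked: for exponential inter-arrival times with rate λ the renewal process is Poisson and W(t) is again Exp(λ), which is what (3.5) reduces to. The sketch below (illustrative, not from the paper; λ, t, w, and the seed are arbitrary) verifies this numerically.

```python
import math
import random

def forward_recurrence_time(t, draw_gap):
    """Sample W(t): overshoot of the first renewal epoch S_{N(t)+1} past t."""
    s = draw_gap()
    while s <= t:
        s += draw_gap()
    return s - t

rng = random.Random(7)
lam, t, w, n = 2.0, 3.0, 0.5, 20000
hits = sum(forward_recurrence_time(t, lambda: rng.expovariate(lam)) > w
           for _ in range(n))
print(hits / n, math.exp(-lam * w))  # empirical Pr(W(t) > w) vs e^{-lam w}
```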

Similar to that of W(t), we have the following property of W_{-1}(t).

Proposition 3.2 For any nonnegative Borel function φ with φ(t) = 0,

E[φ(W_{-1}(t))] = \int_0^t φ(u) m(t - u) \bar{F}(u) du.

Proof: Note that

E[φ(W_{-1}(t))] = \sum_{n=1}^{∞} E[φ(t - S_n) 1_{\{S_n ≤ t ≤ S_n + X_{n+1}\}}].

Since

E[φ(t - S_n) 1_{\{S_n ≤ t ≤ S_n + X_{n+1}\}}]
= \int_0^t [ \int_{t-s}^{∞} φ(t - s) f^{(n)}(s) f(x) dx ] ds
= \int_0^t φ(t - s) f^{(n)}(s) \bar{F}(t - s) ds
= \int_0^t φ(y) f^{(n)}(t - y) \bar{F}(y) dy,

the result follows by summing over n ≥ 1 and formula (3.4).  ■

The following lemma gives a relation between the probability laws of the FRT and the BRT.

Lemma 3.1 For x ∈ [0, t),

Pr(W_{-1}(t) > x) = Pr(W(t - x) > x) = \bar{G}(x; t - x).

Proof: Follows directly from the identity of events

\{W_{-1}(t) > x\} = \{W(t - x) > x\}.

It follows from (3.7) and Lemma 3.1 that

(∂/∂x) \bar{G}(x; t - x) = -\bar{F}(x) m(t - x),   x < t.   (3.8)

Since

\{W_{-1}(t) > x, W(t) > w\} = \{W(t - x) > x + w\},

we have the following lemma.

Lemma 3.2 The joint probability distribution of the BRT and the FRT (W_{-1}(t), W(t)) is given by

Pr(W_{-1}(t) > x, W(t) > w) = \bar{G}(w + x; t - x).   (3.9)
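Lemma 3.1 can be checked by Monte Carlo on a non-Poisson renewal process, where the identity is not trivial. The sketch below (illustrative; the Gamma(2, 1) gaps, grid point x, horizon t, and seed are arbitrary choices) estimates both sides of Pr(W_{-1}(t) > x) = Pr(W(t - x) > x).

```python
import random

def last_and_next_epoch(t, draw_gap):
    """Return (S_{N(t)}, S_{N(t)+1}) for one simulated renewal path."""
    prev, s = 0.0, draw_gap()
    while s <= t:
        prev, s = s, s + draw_gap()
    return prev, s

rng = random.Random(3)
gap = lambda: rng.expovariate(1.0) + rng.expovariate(1.0)  # Gamma(2, 1) gaps
t, x, n = 4.0, 0.7, 20000

# Left side: backward recurrence time at t exceeds x.
brt_tail = sum(t - last_and_next_epoch(t, gap)[0] > x for _ in range(n)) / n
# Right side: forward recurrence time at t - x exceeds x.
frt_tail = sum(last_and_next_epoch(t - x, gap)[1] - (t - x) > x
               for _ in range(n)) / n
print(brt_tail, frt_tail)  # the two tail probabilities should nearly agree
```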

3.2 The Statistical Moments of a Superposition of Renewal Processes

Consider n independent renewal processes {N_i(t), t ≥ 0}, i = 1, 2, ..., n, with renewal distributions F_i, respectively. As before, we may interpret N_i(t) as the number of maintenance actions for the type i component of a series system in the time interval [0, t]. The superposition of the n renewal processes is defined as the sum

N(t) = \sum_{i=1}^{n} N_i(t).   (3.10)

The process {N(t), t ≥ 0} is in general not a renewal process, and the probabilistic law of a superposition of renewal processes is in general unknown. In the following we discuss the statistical moments of this process. First we consider the mean of N(t) defined in (3.10). By the linearity of the expected value we have

E[N(t)] = \sum_{i=1}^{n} E[N_i(t)].   (3.11)

To calculate E[N_i(t)], we first use (3.2) to obtain

\int_0^{∞} E[N_i(t)] e^{-st} dt = F_i^*(s) / (s[1 - F_i^*(s)])   (3.12)

and then invert this Laplace transform to get E[N_i(t)]. For the second moment of N(t), from (3.10) and the independence assumptions we get

E[N^2(t)] = \sum_{i=1}^{n} E[N_i^2(t)] + 2 \sum_{i<j} E[N_i(t)] E[N_j(t)].   (3.13)

Similarly, to calculate E[N_i^2(t)], we first use (3.3) to obtain

\int_0^{∞} E[N_i^2(t)] e^{-st} dt = F_i^*(s)[1 + F_i^*(s)] / (s[1 - F_i^*(s)]^2)

and then invert this Laplace transform to get E[N_i^2(t)]. If these Laplace transforms are rational functions of s, we can invert them analytically; otherwise we have to invert them numerically. For numerical inversion of Laplace transforms, see Abate and Whitt (1992). We can also calculate the higher moments of a superposition of renewal processes, because the Laplace transforms of the higher moments of a renewal process can be derived from (3.1).

Similar to the definition of a compound Poisson process, define

R_i(t) = \sum_{j=1}^{N_i(t)} Y_{ij},

where the N_i(t) are renewal processes and, for every i, the sequence Y_{ij}, j = 1, 2, ..., consists of i.i.d. nonnegative random variables independent of N_i(t). The process {R_i(t), t ≥ 0} is called a renewal reward process. The double Laplace transform of R_i(t) is given by

\int_0^{∞} E[\exp\{-v R_i(t)\}] e^{-st} dt = (1 - F_i^*(s)) / (s[1 - F_i^*(s) E[\exp\{-v Y_{i1}\}]]),   (3.14)

where Y_{i1} is the first reward in the renewal reward process {R_i(t), t ≥ 0}, see Suyono (2003). From (3.14) we can derive the Laplace transforms of the moments of R_i(t).
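As an alternative to Laplace inversion, the component means can also be obtained by solving the renewal equation M(t) = F(t) + \int_0^t M(t - u) dF(u) on a grid and summing over the components, since the mean of the superposition is the sum of the component means. The sketch below (illustrative, not the paper's method; the grid size and the two exponential components are arbitrary choices, picked because then M_i(t) = λ_i t exactly) demonstrates the idea.

```python
import math

def renewal_function(F, t, steps):
    """Approximate M(t) by discretizing the renewal equation on a uniform grid."""
    h = t / steps
    Fv = [F(k * h) for k in range(steps + 1)]
    dF = [Fv[j] - Fv[j - 1] for j in range(1, steps + 1)]
    M = [0.0] * (steps + 1)
    for k in range(1, steps + 1):
        acc = Fv[k]
        for j in range(1, k + 1):
            # Mass of F on ((j-1)h, jh] times M evaluated at t_k - jh.
            acc += dF[j - 1] * M[k - j]
        M[k] = acc
    return M[steps]

# Two exponential components: each renewal function is exactly lam * t,
# so the superposed mean should be (1.0 + 0.5) * t.
F1 = lambda x: 1.0 - math.exp(-1.0 * x)
F2 = lambda x: 1.0 - math.exp(-0.5 * x)

t = 2.0
mean_total = renewal_function(F1, t, 1000) + renewal_function(F2, t, 1000)
print(mean_total)  # close to (1.0 + 0.5) * 2.0 = 3.0
```

The discretization error is of order of the grid step, so refining the grid improves the agreement.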

For example, the Laplace transform of E[R_i(t)] is given by

\int_0^{∞} E[R_i(t)] e^{-st} dt = E[Y_{i1}] F_i^*(s) / (s[1 - F_i^*(s)])   (3.15)

and the Laplace transform of E[R_i^2(t)] is given by

\int_0^{∞} E[R_i^2(t)] e^{-st} dt = (F_i^*(s) / (s[1 - F_i^*(s)])) [ E[Y_{i1}^2] + 2 E[Y_{i1}]^2 F_i^*(s) / (1 - F_i^*(s)) ].   (3.16)

Define

R(t) = \sum_{i=1}^{n} R_i(t).   (3.17)

Then, as for a superposition of renewal processes, we can calculate the moments of R(t).

Example 3.1 Consider a series system with two independent components: type I and type II. Let N1(t) and N2(t) be the numbers of maintenance actions of the type I and the type II component, respectively, in the time interval [0, t]. We model the processes {N1(t), t ≥ 0} and {N2(t), t ≥ 0} as renewal processes. Suppose that the inter-arrival times of N1(t) have a gamma distribution with shape parameter 2 and rate parameter 1, with density f_1(x) = x e^{-x}, x > 0, and that the inter-arrival times of N2(t) have a gamma distribution with shape parameter 3 and rate parameter 2, with density f_2(x) = 4 x^2 e^{-2x}, x > 0. Let {N(t), t ≥ 0} be the superposition of N1(t) and N2(t), i.e. N(t) = N1(t) + N2(t).

First we calculate the mean of N(t). The Laplace-Stieltjes transforms of the distribution functions of the inter-arrival times of the processes N1(t) and N2(t) are, respectively,

F_1^*(s) = (1/(s+1))^2   and   F_2^*(s) = (2/(s+2))^3.   (3.18)

Substituting (3.18) into (3.12) with i = 1 and 2 we obtain

\int_0^{∞} E[N_1(t)] e^{-st} dt = 1 / ((2+s) s^2)

and

\int_0^{∞} E[N_2(t)] e^{-st} dt = 8 / ((12 + 6s + s^2) s^2).

Inverting these transforms we obtain

E[N_1(t)] = (1/2) t - 1/4 + (1/4) e^{-2t}

and

E[N_2(t)] = (2/3) t - 1/3 + (1/9) e^{-3t} (3 \cos(\sqrt{3} t) + \sqrt{3} \sin(\sqrt{3} t)).

The expected value of N(t) equals the sum of E[N1(t)] and E[N2(t)]. For the second moments, using (3.3) we obtain

\int_0^{∞} E[N_1^2(t)] e^{-st} dt = (2 + 2s + s^2) / ((2+s)^2 s^3)

and

\int_0^{∞} E[N_2^2(t)] e^{-st} dt = 8 (16 + 12s + 6s^2 + s^3) / ((12 + 6s + s^2)^2 s^3).

Inverting these transforms we obtain

E[N_1^2(t)] = (1/8)(2t^2 + 1) - (1/8)(2t + 1) e^{-2t}

and

E[N_2^2(t)] = (1/27)(12t^2 - 6t + 5) - (1/27) e^{-3t} (5 \cos(\sqrt{3} t) + 3\sqrt{3} \sin(\sqrt{3} t) + 8\sqrt{3}\, t \sin(\sqrt{3} t)).

Then (3.13), with n = 2, gives the second moment of N(t).

Similarly we can calculate the first and second moments of R(t) defined in (3.17), but first we have to specify the distributions of Y11 and Y21. Suppose Y11 is uniformly distributed on the interval [0, 1] and Y21 is exponentially distributed with parameter 2. It follows that

E[Y_{11}] = 1/2,   E[Y_{11}^2] = 1/3,   and   E[Y_{21}] = E[Y_{21}^2] = 1/2.

Now using (3.15) we obtain

\int_0^{∞} E[R_1(t)] e^{-st} dt = 1 / (2(2+s) s^2)

and

\int_0^{∞} E[R_2(t)] e^{-st} dt = 4 / ((12 + 6s + s^2) s^2).

Inverting these transforms we obtain

E[R_1(t)] = (1/4) t - 1/8 + (1/8) e^{-2t}

and

E[R_2(t)] = (1/3) t - 1/6 + (1/18) e^{-3t} (3 \cos(\sqrt{3} t) + \sqrt{3} \sin(\sqrt{3} t)).

The expected value of R(t) equals the sum of E[R1(t)] and E[R2(t)]. To calculate the second moments of R(t), we first use (3.16) to obtain

\int_0^{∞} E[R_1^2(t)] e^{-st} dt = (3 + 4s + 2s^2) / (6(2+s)^2 s^3)

and

\int_0^{∞} E[R_2^2(t)] e^{-st} dt = 4 (8 + 12s + 6s^2 + s^3) / ((12 + 6s + s^2)^2 s^3).

Inverting these transforms we obtain

E[R_1^2(t)] = (1/96)(6t^2 + 4t + 1) - (1/96)(6t + 1) e^{-2t}

and

E[R_2^2(t)] = (1/54)(6t^2 - 12t + 7) + (1/6)(2t - 1) + (1/27)(\cos(\sqrt{3} t) - 2\sqrt{3}\, t \sin(\sqrt{3} t)) e^{-3t}.

Then, to obtain E[R^2(t)], we use the identity

E[R^2(t)] = E[R_1^2(t)] + E[R_2^2(t)] + 2 E[R_1(t)] E[R_2(t)].
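The first moments in Example 3.1 can be spot-checked by simulation. The sketch below (illustrative; the horizon, sample size, and seed are arbitrary choices) simulates the type I renewal reward process, drawing Gamma(2, 1) gaps as sums of two Exp(1) variables and U[0, 1] rewards, and compares the sample means with the closed forms for E[N1(t)] and E[R1(t)].

```python
import math
import random

def renewal_reward_path(t, draw_gap, draw_reward):
    """Return (N(t), R(t)) for one simulated renewal reward path."""
    count, reward, s = 0, 0.0, draw_gap()
    while s <= t:
        count += 1
        reward += draw_reward()
        s += draw_gap()
    return count, reward

rng = random.Random(11)
gap = lambda: rng.expovariate(1.0) + rng.expovariate(1.0)  # Gamma(2, 1) gaps
t, n = 3.0, 20000

counts, rewards = 0.0, 0.0
for _ in range(n):
    c, r = renewal_reward_path(t, gap, rng.random)
    counts += c
    rewards += r

mean_n = counts / n
mean_r = rewards / n
print(mean_n, t / 2 - 0.25 + math.exp(-2 * t) / 4)   # vs E[N1(t)]
print(mean_r, t / 4 - 0.125 + math.exp(-2 * t) / 8)  # vs E[R1(t)]
```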

3.3 Forward and Backward Recurrence Times of a Superposition of Renewal Processes

Consider the superposition of renewal processes N(t) defined in (3.10). For fixed t > 0, the FRT of the superposition process N(t) is defined by

W(t) = \min_i \{W^{(i)}(t)\} = \min_i \{S_{N_i(t)+1}\} - t,

where W^{(i)}(t) denotes the FRT of the ith renewal process N_i(t) as defined in Subsection 3.1. The tail probability (survival function) of W(t) is given by

\bar{G}(w, t, n) = Pr(W(t) > w) = Pr( \bigcap_{i=1}^{n} \{W^{(i)}(t) > w\} ) = \prod_{i=1}^{n} \bar{G}_i(w, t).

The BRT of N(t) is defined by

W_{-1}(t) = \min_i \{W^{(i)}_{-1}(t)\} = t - \max_i \{S_{N_i(t)}\},

where W^{(i)}_{-1}(t) denotes the BRT of the ith renewal process N_i(t) as defined in Subsection 3.1. The backward recurrence time W_{-1}(t) takes values in [0, t] and has an atom at t,

Pr(W_{-1}(t) = t) = \prod_{i=1}^{n} \bar{F}_i(t).

It follows from Lemma 3.1 that for x < t

\bar{G}_{-1}(x; t, n) = Pr(W_{-1}(t) > x) = \prod_{i=1}^{n} \bar{G}_i(x; t - x) = \bar{G}(x; t - x, n)   (3.19)

and from (3.8) that

(∂/∂x) \bar{G}(x; t - x, n) = -\bar{G}(x; t - x, n) \sum_{i=1}^{n} \bar{F}_i(x) m_i(t - x) / \bar{G}_i(x; t - x).   (3.20)

We continue with conditional distributions of the superposition process N(t) given that there was a failure at time t. First, we calculate the conditional probability that the failure is associated with process i. The event that there is a failure at time t associated with process i can be expressed as

\bigcap_{j ≠ i} \{W^{(j)}_{-1}(t) > W^{(i)}_{-1}(t)\} \cap \{W_{-1}(t) ≤ ε_n\},

where ε_n ↓ 0. For every ε_n > 0,

Pr(W_{-1}(t) ≤ ε_n) = 1 - \prod_{i=1}^{n} \bar{G}_i(ε_n; t - ε_n)

and

Pr(W^{(j)}_{-1}(t) > W^{(i)}_{-1}(t), j ≠ i, W_{-1}(t) ≤ ε_n) = \int_0^{ε_n} ( \prod_{j ≠ i} \bar{G}_j(x; t - x) ) \bar{F}_i(x) m_i(t - x) dx.

So the conditional tail probability Pr(W^{(j)}_{-1}(t) > W^{(i)}_{-1}(t), j ≠ i | W_{-1}(t) = 0+) that the failure at time t is associated with process i, given that there is a failure at time t, is given by

\lim_{n→∞} Pr(W^{(j)}_{-1}(t) > W^{(i)}_{-1}(t), j ≠ i | W_{-1}(t) ≤ ε_n)
= \lim_{n→∞} ( \prod_{j ≠ i} \bar{G}_j(ε_n; t - ε_n) ) \bar{F}_i(ε_n) m_i(t - ε_n) / [ (∂/∂x)(1 - \prod_{k=1}^{n} \bar{G}_k(x; t - x)) |_{x = ε_n} ]
= m_i(t) / \sum_{k=1}^{n} m_k(t).

The following proposition concerns the conditional tail distribution of W(t) given W_{-1}(t).

Proposition 3.3 Pr(W(t) > w | W_{-1}(t)) = g(W_{-1}(t)), where

g(a) = [ \bar{G}(w + a; t - a, n) / \bar{G}(a; t - a, n) ] \cdot [ \sum_{i=1}^{n} m_i(t - a) \bar{F}_i(w + a) / \bar{G}_i(w + a; t - a) ] / [ \sum_{i=1}^{n} m_i(t - a) \bar{F}_i(a) / \bar{G}_i(a; t - a) ].

Proof: Find a Borel function g: R → R such that for all a ∈ R

Pr(W(t) > w, W_{-1}(t) ≥ a) = E[g(W_{-1}(t)) 1_{\{W_{-1}(t) ≥ a\}}].

Now

\{W(t) > w, W_{-1}(t) ≥ a\} = \bigcap_{i=1}^{n} \{W^{(i)}(t) > w, W^{(i)}_{-1}(t) ≥ a\},

so by (3.9) we get

Pr(W(t) > w, W_{-1}(t) ≥ a) = \prod_{i=1}^{n} \bar{G}_i(w + a; t - a) = \bar{G}(w + a; t - a, n).

Furthermore, by (3.19) and (3.20),

E[g(W_{-1}(t)) 1_{\{W_{-1}(t) ≥ a\}}] = g(t) \prod_{i=1}^{n} \bar{F}_i(t) + \sum_{j=1}^{n} \int_a^t g(x) [ \bar{F}_j(x) m_j(t - x) / \bar{G}_j(x; t - x) ] \bar{G}(x; t - x, n) dx.

So the function g has to satisfy the identity

\bar{G}(w + a; t - a, n) = g(t) \prod_{i=1}^{n} \bar{F}_i(t) + \sum_{j=1}^{n} \int_a^t g(x) [ \bar{F}_j(x) m_j(t - x) / \bar{G}_j(x; t - x) ] \bar{G}(x; t - x, n) dx

for all a. It follows, by differentiating with respect to a, that

\bar{G}(w + a; t - a, n) \sum_{j=1}^{n} \bar{F}_j(w + a) m_j(t - a) / \bar{G}_j(w + a; t - a) = g(a) \bar{G}(a; t - a, n) \sum_{j=1}^{n} \bar{F}_j(a) m_j(t - a) / \bar{G}_j(a; t - a).

Hence

g(a) = [ \bar{G}(w + a; t - a, n) / \bar{G}(a; t - a, n) ] \cdot [ \sum_{i=1}^{n} m_i(t - a) \bar{F}_i(w + a) / \bar{G}_i(w + a; t - a) ] / [ \sum_{i=1}^{n} m_i(t - a) \bar{F}_i(a) / \bar{G}_i(a; t - a) ].  ■

It follows from Proposition 3.3 that

Pr(W(t) > w | W_{-1}(t) = 0+)
= \lim_{ε↓0} Pr(W(t) > w | W_{-1}(t) ≤ ε)
= \lim_{ε↓0} Pr(W(t) > w, W_{-1}(t) ≤ ε) / Pr(W_{-1}(t) ≤ ε)
= \lim_{ε↓0} E[Pr(W(t) > w | W_{-1}(t)) 1_{[0,ε]}(W_{-1}(t))] / Pr(W_{-1}(t) ≤ ε)
= \lim_{ε↓0} E[g(W_{-1}(t)) 1_{[0,ε]}(W_{-1}(t))] / Pr(W_{-1}(t) ≤ ε)
= \lim_{ε↓0} \int_0^{ε} g(u) dF_{W_{-1}(t)}(u) / \int_0^{ε} dF_{W_{-1}(t)}(u)
= g(0)
= \bar{G}(w; t, n) [ \sum_{i=1}^{n} m_i(t) \bar{F}_i(w) / \bar{G}_i(w; t) ] / \sum_{i=1}^{n} m_i(t)
= \sum_{i=1}^{n} [ \bar{F}_i(w) / \bar{G}_i(w; t) ] \bar{G}(w; t, n) P_i(t),

where

P_i(t) = m_i(t) / \sum_{j=1}^{n} m_j(t),   i = 1, ..., n.

Here P_i(t) can be interpreted as the probability that a failure occurring at time t is associated with process i.
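In the Poisson case the weights P_i(t) are easy to sanity-check, because the renewal densities are constant, m_i(t) = λ_i, so P_i(t) = λ_i / (λ_1 + ... + λ_n) for every t. The sketch below (illustrative; the rates, window width δ, and seed are arbitrary choices) estimates m_i(t)δ as the probability of a renewal of process i in (t, t + δ] and compares the resulting weight with λ_1/(λ_1 + λ_2).

```python
import random

def has_renewal_in(a, b, draw_gap):
    """True if the renewal process (started at 0) has an epoch in (a, b]."""
    s = draw_gap()
    while s <= a:
        s += draw_gap()
    return s <= b

rng = random.Random(5)
rates, t, delta, n = [1.0, 3.0], 2.0, 0.01, 40000

# Estimate m_i(t) * delta as Pr(renewal of process i in (t, t + delta]).
est = []
for lam in rates:
    hits = sum(has_renewal_in(t, t + delta, lambda: rng.expovariate(lam))
               for _ in range(n))
    est.append(hits / n)

p1 = est[0] / (est[0] + est[1])
print(p1, rates[0] / sum(rates))  # estimate of P_1(t) vs 1/4
```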

Acknowledgments

We thank the I-MHERE project of Universitas Negeri Jakarta, year 2009 (IBRD Loan No. 4789-IND & IDA Credit No. 4077-IND), which financed this research.

References

Abate, J. and W. Whitt, 1992, The Fourier-series method for inverting transforms of probability distributions, Queueing Systems, 10, 5-88.
Bartle, R.G., 1964, The Elements of Real Analysis, John Wiley, New York.
Cox, D.R. and W.L. Smith, 1954, On the superposition of renewal processes, Biometrika, 41, 91-99.
Drenick, R.F., 1960, The failure law of complex equipment, J. Soc. Indust. Appl. Math., 8, 680-690.
Grigelionis, B.I., 1964, Limit Theorems for Sums of Renewal Processes, Energy Publishing House.
Grimmett, G.R. and D.R. Stirzaker, 1997, Probability and Random Processes, Clarendon Press, Oxford.
Ross, S.M., 2000, Introduction to Probability Models, 7th ed., Academic Press, San Diego.
Suyono, 2003, Renewal Processes and Repairable Systems, Dissertation, Delft University Press, The Netherlands.
Torab, P. and E.W. Kamen, 2001, On approximate renewal models for the superposition of renewal processes, in: IEEE International Conference on Communications (ICC 2001), Helsinki, Finland, June 11-14, 2001, 9, 2901-2906.
