J. Virtamo: 38.3143 Queueing Theory

CONTINUOUS DISTRIBUTIONS

Laplace transform (Laplace-Stieltjes transform)

Definition. The Laplace transform of a non-negative random variable X ≥ 0 with the probability density function f(x) is defined as

    f^*(s) = \int_0^\infty e^{-st} f(t)\,dt = E[e^{-sX}] = \int_0^\infty e^{-st}\,dF(t)

also denoted as L_X(s).

• Mathematically it is the Laplace transform of the pdf.
• In dealing with continuous random variables the Laplace transform has the same role as the generating function has in the case of discrete random variables.
  – If X is a discrete integer-valued (≥ 0) r.v., then f^*(s) = G(e^{-s}).
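
As a quick numerical sanity check of the definition, the sketch below takes X ∼ Exp(λ) (an illustrative choice; its transform λ/(λ + s) is the n = 1 case of the Erlang transform derived later in these notes) and compares a Monte Carlo estimate of E[e^{-sX}] with the closed form. The rate, s and seed are arbitrary.

```python
# Numerical check that f*(s) = E[e^{-sX}]; for X ~ Exp(lam) the closed
# form is lam / (lam + s).
import numpy as np

rng = np.random.default_rng(1)
lam, s = 2.0, 0.7

x = rng.exponential(scale=1.0 / lam, size=1_000_000)  # samples of X ~ Exp(lam)
mc = np.exp(-s * x).mean()                            # Monte Carlo E[e^{-sX}]
exact = lam / (lam + s)                               # closed-form transform
print(mc, exact)                                      # agree to ~3 decimals
```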


Laplace transform of a sum

Let X and Y be independent random variables with L-transforms f_X^*(s) and f_Y^*(s). Then

    f_{X+Y}^*(s) = E[e^{-s(X+Y)}]
                 = E[e^{-sX} e^{-sY}]
                 = E[e^{-sX}] E[e^{-sY}]    (independence)
                 = f_X^*(s) f_Y^*(s)
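
The product rule can be checked by simulation as well. A minimal sketch, assuming two independent exponential variables with illustrative rates lam1 and lam2:

```python
# Check f*_{X+Y}(s) = f*_X(s) f*_Y(s) for independent exponentials.
import numpy as np

rng = np.random.default_rng(2)
lam1, lam2, s = 1.5, 3.0, 0.5

x = rng.exponential(1 / lam1, size=1_000_000)
y = rng.exponential(1 / lam2, size=1_000_000)

lhs = np.exp(-s * (x + y)).mean()                # E[e^{-s(X+Y)}]
rhs = (lam1 / (lam1 + s)) * (lam2 / (lam2 + s))  # f*_X(s) f*_Y(s)
print(lhs, rhs)
```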


Calculating moments with the aid of the Laplace transform

By differentiation one sees

    f^{*\prime}(s) = \frac{d}{ds} E[e^{-sX}] = E[-X e^{-sX}]

Similarly, the nth derivative is

    f^{*(n)}(s) = \frac{d^n}{ds^n} E[e^{-sX}] = E[(-X)^n e^{-sX}]

Evaluating these at s = 0 one gets

    E[X]   = -f^{*\prime}(0)
    E[X^2] = +f^{*\prime\prime}(0)
    ...
    E[X^n] = (-1)^n f^{*(n)}(0)
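
These formulas lend themselves to a symbolic check. The sketch below assumes X ∼ Exp(λ), whose transform is f*(s) = λ/(λ + s), and recovers E[X^n] = n!/λ^n by repeated differentiation; SymPy is used purely for illustration.

```python
# Moments from the transform: E[X^n] = (-1)^n f*(n)(0), for X ~ Exp(lam).
import sympy as sp

s, lam = sp.symbols('s lambda', positive=True)
f_star = lam / (lam + s)                     # Laplace transform of Exp(lam)

for n in (1, 2, 3):
    moment = (-1)**n * sp.diff(f_star, s, n).subs(s, 0)
    print(n, sp.simplify(moment))            # 1/lambda, 2/lambda**2, 6/lambda**3
```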


Laplace transform of a random sum

Consider the random sum Y = X_1 + \cdots + X_N, where the X_i are i.i.d. with the common L-transform f_X^*(s) and N ≥ 0 is an integer-valued r.v. with the generating function G_N(z).

    f_Y^*(s) = E[e^{-sY}]
             = E[E[e^{-sY} | N]]                     (outer expectation with respect to variations of N)
             = E[E[e^{-s(X_1 + \cdots + X_N)} | N]]  (in the inner expectation N is fixed)
             = E[E[e^{-sX_1}] \cdots E[e^{-sX_N}]]   (independence)
             = E[(f_X^*(s))^N]
             = G_N(f_X^*(s))                         (by the definition E[z^N] = G_N(z))
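
A numerical illustration of f_Y^*(s) = G_N(f_X^*(s)), assuming N ∼ Poisson(ν), for which G_N(z) = e^{ν(z-1)}, and X_i ∼ Exp(λ); both distributional choices are illustrative, not part of the derivation above.

```python
# Random sum Y = X_1 + ... + X_N with N ~ Poisson(nu), X_i ~ Exp(lam).
import numpy as np

rng = np.random.default_rng(3)
nu, lam, s = 2.0, 1.0, 0.4
reps = 200_000

n = rng.poisson(nu, size=reps)                 # random number of terms
# sum of k i.i.d. Exp(lam) variables is Gamma(k, 1/lam); n = 0 gives Y = 0
y = rng.gamma(shape=np.maximum(n, 1), scale=1 / lam) * (n > 0)

mc = np.exp(-s * y).mean()                     # E[e^{-sY}]
exact = np.exp(nu * (lam / (lam + s) - 1))     # G_N(f*_X(s))
print(mc, exact)
```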


Laplace transform and the method of collective marks

We give for the Laplace transform f^*(s) = E[e^{-sX}], X ≥ 0, the following

Interpretation: Think of X as representing the length of an interval. Let this interval be subject to a Poissonian marking process with intensity s. Then the Laplace transform f^*(s) is the probability that there are no marks in the interval.

    P{X has no marks} = E[P{X has no marks | X}]    (total probability)
                      = E[P{the number of events in the interval X is 0 | X}]
                      = E[e^{-sX}] = f^*(s)

[Figure: an interval of length X marked by a Poisson process of intensity s]

    P{there are n events in the interval X | X} = \frac{(sX)^n}{n!} e^{-sX}

    P{the number of events in the interval X is 0 | X} = e^{-sX}
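
The interpretation can be acted out directly: draw an interval length, mark it with a Poisson-distributed number of points of intensity s, and count the unmarked intervals. A minimal sketch, assuming X ∼ Exp(λ):

```python
# Collective marks: the fraction of unmarked intervals estimates f*(s).
import numpy as np

rng = np.random.default_rng(4)
lam, s = 1.0, 2.0

x = rng.exponential(1 / lam, size=500_000)   # interval lengths X ~ Exp(lam)
marks = rng.poisson(s * x)                   # number of marks in each interval
print((marks == 0).mean(), lam / (lam + s))  # both ~ f*(s)
```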


Method of collective marks (continued)

Example: Laplace transform of a random sum

    Y = X_1 + \cdots + X_N,    where X_1 ∼ X_2 ∼ \cdots ∼ X_N with common L-transform f_X^*(s),
                               and N is a r.v. with generating function G_N(z)

[Figure: the interval Y divided into subintervals X_1, X_2, ..., X_N, marked by a Poisson process of intensity s]

    f_Y^*(s) = P{none of the subintervals of Y is marked} = G_N(f_X^*(s))

Here f_X^*(s) is the probability that a single subinterval has no marks, and G_N(f_X^*(s)) the probability that none of the subintervals is marked.


Uniform distribution X ∼ U(a, b)

The pdf of X is constant in the interval (a, b):

    f(x) = \begin{cases} \frac{1}{b-a}, & a < x < b \\ 0, & \text{otherwise} \end{cases}

Memoryless property of the exponential distribution

Assume that the duration of a call is X ∼ Exp(λ), so that P{X > x} = e^{-\lambda x}, and that the call has already lasted the time t. Then

    P{X > t + x | X > t} = \frac{P{X > t + x}}{P{X > t}} = \frac{e^{-\lambda(t+x)}}{e^{-\lambda t}} = e^{-\lambda x} = P{X > x}

    P{X > t + x | X > t} = P{X > x}

• The distribution of the remaining duration of the call does not at all depend on the time the call has already lasted.
• It has the same Exp(λ) distribution as the total duration of the call.

[Figure: the tail exp(-λt) and the conditional tail exp(-λ(t-u)) after an elapsed time u coincide in shape]
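
A minimal empirical check of the memoryless property, with illustrative values for λ, t and x:

```python
# Memorylessness: P{X > t+x | X > t} should equal P{X > x} = e^{-lam x}.
import numpy as np

rng = np.random.default_rng(5)
lam, t, x = 2.0, 1.0, 0.5

samples = rng.exponential(1 / lam, size=2_000_000)
survivors = samples[samples > t]          # calls that lasted beyond t
cond = (survivors > t + x).mean()         # empirical P{X > t+x | X > t}
print(cond, np.exp(-lam * x))             # both ~ e^{-lam x}
```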


Example of the use of the memoryless property

A queueing system has two servers. The service times are assumed to be exponentially distributed (with the same parameter). Upon arrival of a customer (○) both servers are occupied (×), but there are no other waiting customers.

The question: what is the probability that the customer (○) will be the last to depart from the system?

The next event in the system is that one of the customers (×) being served departs and the customer (○) enters the freed server. By the memoryless property, from that point on the (remaining) service times of the customer (○) and the other customer (×) are identically (exponentially) distributed. The situation is completely symmetric, and consequently the probability that the customer (○) is the last one to depart is 1/2.
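
The symmetry argument can be confirmed by simulation. In the sketch below the remaining service times at the arrival instant are drawn as fresh Exp(µ) variables, which is exactly what the memoryless property licenses; µ and the replication count are illustrative.

```python
# Two-server example: how often is the arriving customer the last to leave?
import numpy as np

rng = np.random.default_rng(6)
mu, reps = 1.0, 200_000

r1, r2 = rng.exponential(1 / mu, (2, reps))  # remaining times of the two in service
svc = rng.exponential(1 / mu, reps)          # service time of the new customer

start = np.minimum(r1, r2)                   # customer enters when one departs
last = start + svc > np.maximum(r1, r2)      # still in system after the other?
print(last.mean())                           # ~ 0.5
```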


The ending probability of an exponentially distributed interval

Assume that a call with Exp(λ) distributed duration has lasted the time t. What is the probability that it will end in an infinitesimal interval of length h?

    P{X ≤ t + h | X > t} = P{X ≤ h}    (memoryless)
                         = 1 - e^{-\lambda h}
                         = 1 - (1 - \lambda h + \tfrac{1}{2}(\lambda h)^2 - \cdots)
                         = \lambda h + o(h)

The ending probability per time unit = λ    (constant!)
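
Numerically, the ending probability per time unit (1 - e^{-λh})/h indeed settles at λ as h shrinks:

```python
# The ending rate (1 - e^{-lam h}) / h tends to the constant lam as h -> 0.
import numpy as np

lam = 2.0
for h in (0.1, 0.01, 0.001):
    print(h, (1 - np.exp(-lam * h)) / h)   # -> 2.0
```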


The minimum and maximum of exponentially distributed random variables

Let X_1 ∼ \cdots ∼ X_n ∼ Exp(λ)   (i.i.d.)

The tail distribution of the minimum is

    P{min(X_1, ..., X_n) > x} = P{X_1 > x} \cdots P{X_n > x}    (independence)
                              = (e^{-\lambda x})^n = e^{-n\lambda x}

The minimum obeys the distribution Exp(nλ): there are n parallel processes, each of which ends with intensity λ independent of the others, so the ending intensity of the minimum is nλ.

The cdf of the maximum is

    P{max(X_1, ..., X_n) ≤ x} = (1 - e^{-\lambda x})^n

The expectation can be deduced by inspecting the figure: until the first of the n variables ends, the elapsed time is ∼ Exp(nλ); from then until the second ends, ∼ Exp((n-1)λ); and so on, the last interval being ∼ Exp(λ). Hence

    E[max(X_1, ..., X_n)] = \frac{1}{n\lambda} + \frac{1}{(n-1)\lambda} + \cdots + \frac{1}{\lambda}

[Figure: the successive endings of X_1, ..., X_5 on a time axis; the gaps between them are distributed Exp(nλ), Exp((n-1)λ), Exp((n-2)λ), ..., Exp(λ)]
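
Both results are easy to verify by simulation; the values of n, λ and the sample size below are illustrative.

```python
# Min of n i.i.d. Exp(lam) is Exp(n lam); E[max] is the harmonic sum.
import numpy as np

rng = np.random.default_rng(7)
n, lam = 5, 1.0
x = rng.exponential(1 / lam, size=(500_000, n))

print(x.min(axis=1).mean(), 1 / (n * lam))             # mean of min vs 1/(n lam)
harmonic = sum(1 / (k * lam) for k in range(1, n + 1))
print(x.max(axis=1).mean(), harmonic)                  # mean of max vs harmonic sum
```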


Erlang distribution X ∼ Erlang(n, λ)

Also denoted Erlang-n(λ).

X is the sum of n independent random variables with the distribution Exp(λ):

    X = X_1 + \cdots + X_n,    X_i ∼ Exp(λ)   (i.i.d.)

The Laplace transform is

    f^*(s) = \left(\frac{\lambda}{\lambda + s}\right)^n

By inverse transform (or by recursively convolving the density function) one obtains the pdf of the sum X:

    f(x) = \frac{(\lambda x)^{n-1}}{(n-1)!}\, \lambda e^{-\lambda x},    x ≥ 0
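
A quick consistency check of the sum representation, comparing the empirical mean and variance of X_1 + ··· + X_n with the values n/λ and n/λ² given in the gamma-distribution section below:

```python
# The sum of n Exp(lam) variables has the Erlang(n, lam) moments.
import numpy as np

rng = np.random.default_rng(8)
n, lam = 4, 2.0
x = rng.exponential(1 / lam, size=(500_000, n)).sum(axis=1)

print(x.mean(), n / lam)      # E[X] = n / lam
print(x.var(), n / lam**2)    # V[X] = n / lam^2
```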


Erlang distribution (continued): gamma distribution

The formula for the pdf of the Erlang distribution can be generalized, from the integer parameter n, to arbitrary real numbers p by replacing the factorial (n - 1)! by the gamma function Γ(p):

    f(x) = \frac{(\lambda x)^{p-1}}{\Gamma(p)}\, \lambda e^{-\lambda x}    (Gamma(p, λ) distribution)

The gamma function Γ(p) is defined by

    \Gamma(p) = \int_0^\infty e^{-u} u^{p-1}\,du

By partial integration it is easy to see that when p is an integer then, indeed, Γ(p) = (p - 1)!.

The expectation and variance are n times those of the Exp(λ) distribution:

    E[X] = \frac{n}{\lambda},    V[X] = \frac{n}{\lambda^2}

[Figure: Erlang(n, λ) pdfs for n = 1, ..., 5]


Erlang distribution (continued)

Example. The system consists of two servers. Customers arrive with Exp(λ) distributed interarrival times. Customers are alternately sent to servers 1 and 2. The interarrival time distribution of customers arriving at a given server is Erlang(2, λ).

[Figure: an ∼Exp(λ) arrival stream split alternately between two servers, each of which sees ∼Erlang(2, λ) interarrival times]

Proposition. Let N_t, the number of events in an interval of length t, obey the Poisson distribution: N_t ∼ Poisson(λt). Then the time T_n from an arbitrary event to the nth event thereafter obeys the distribution Erlang(n, λ).

[Figure: events 1, 2, 3, ..., n on a time axis; N_t events occur in the interval of length t spanned by T_n]

Proof.

    F_{T_n}(t) = P{T_n ≤ t} = P{N_t ≥ n} = \sum_{i=n}^{\infty} P{N_t = i} = \sum_{i=n}^{\infty} \frac{(\lambda t)^i}{i!} e^{-\lambda t}

    f_{T_n}(t) = \frac{d}{dt} F_{T_n}(t)
               = \sum_{i=n}^{\infty} \frac{i\lambda (\lambda t)^{i-1}}{i!} e^{-\lambda t} - \sum_{i=n}^{\infty} \frac{(\lambda t)^i}{i!} \lambda e^{-\lambda t}
               = \sum_{i=n}^{\infty} \frac{(\lambda t)^{i-1}}{(i-1)!} \lambda e^{-\lambda t} - \sum_{i=n}^{\infty} \frac{(\lambda t)^i}{i!} \lambda e^{-\lambda t}
               = \frac{(\lambda t)^{n-1}}{(n-1)!}\, \lambda e^{-\lambda t}
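
The proof rests on the identity P{T_n ≤ t} = P{N_t ≥ n}, which can be checked directly against library distributions (SciPy is used here for illustration; Erlang(n, λ) is the gamma distribution with integer shape n):

```python
# Check P{T_n <= t} = P{N_t >= n} for a Poisson(lam) event stream.
import numpy as np
from scipy import stats

lam, n, t = 1.5, 3, 2.0
lhs = stats.gamma.cdf(t, a=n, scale=1 / lam)   # Erlang(n, lam) cdf at t
rhs = 1 - stats.poisson.cdf(n - 1, lam * t)    # P{N_t >= n}
print(lhs, rhs)                                # identical
```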


Normal distribution X ∼ N(µ, σ²)

The pdf of a normally distributed random variable X with parameters µ and σ² is

    f(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{1}{2}(x-\mu)^2/\sigma^2}

Parameters µ and σ² are the expectation and variance of the distribution:

    E[X] = µ,    V[X] = σ²

Proposition: If X ∼ N(µ, σ²), then Y = αX + β ∼ N(αµ + β, α²σ²).

Proof:

    F_Y(y) = P{Y ≤ y} = P{X ≤ \frac{y-\beta}{\alpha}} = F_X\left(\frac{y-\beta}{\alpha}\right)
           = \int_{-\infty}^{(y-\beta)/\alpha} \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{1}{2}(x-\mu)^2/\sigma^2}\,dx
           = \int_{-\infty}^{y} \frac{1}{\sqrt{2\pi}\,(\alpha\sigma)}\, e^{-\frac{1}{2}(z-(\alpha\mu+\beta))^2/(\alpha\sigma)^2}\,dz    (substitution z = αx + β)

Corollary:

    Z = \frac{X - \mu}{\sigma} ∼ N(0, 1)    (α = 1/σ, β = -µ/σ)

Denote the cdf of a N(0,1) random variable by Φ(x). Then

    F_X(x) = P{X ≤ x} = P{Z ≤ \frac{x-\mu}{\sigma}} = \Phi\left(\frac{x-\mu}{\sigma}\right)


Multivariate Gaussian (normal) distribution

Let X_1, ..., X_n be a set of Gaussian (i.e. normally distributed) random variables with expectations µ_1, ..., µ_n and covariance matrix

    \Gamma = \begin{pmatrix} \sigma_{11}^2 & \cdots & \sigma_{1n}^2 \\ \vdots & \ddots & \vdots \\ \sigma_{n1}^2 & \cdots & \sigma_{nn}^2 \end{pmatrix},    \sigma_{ij}^2 = \mathrm{Cov}[X_i, X_j]    (\sigma_{ii}^2 = V[X_i])

Denote X = (X_1, ..., X_n)^T. The probability density function of the random vector X is

    f(x) = \frac{1}{\sqrt{(2\pi)^n |\Gamma|}}\, e^{-\frac{1}{2}(x-\mu)^T \Gamma^{-1} (x-\mu)}

where |Γ| is the determinant of the covariance matrix.

By a change of variables one sees easily that the pdf of the random vector Z = Γ^{-1/2}(X - µ) is

    (2\pi)^{-n/2} \exp(-\tfrac{1}{2} z^T z) = \frac{1}{\sqrt{2\pi}} e^{-z_1^2/2} \cdots \frac{1}{\sqrt{2\pi}} e^{-z_n^2/2}

Thus the components of the vector Z are independent N(0,1) distributed random variables. Conversely, X = µ + Γ^{1/2} Z, by means of which one can generate values for X in simulations.
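
A sketch of this generation recipe. Here a Cholesky factor L with L Lᵀ = Γ stands in for the symmetric square root Γ^{1/2}; any factor with that property yields the same distribution. The values of µ and Γ are illustrative.

```python
# Generate multivariate Gaussian samples as X = mu + L Z, with L L^T = Gamma.
import numpy as np

rng = np.random.default_rng(9)
mu = np.array([1.0, -2.0])
Gamma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])            # covariance matrix

L = np.linalg.cholesky(Gamma)             # L @ L.T == Gamma
z = rng.standard_normal((100_000, 2))     # independent N(0,1) components
x = mu + z @ L.T                          # samples of X

print(np.cov(x, rowvar=False))            # ~ Gamma
print(x.mean(axis=0))                     # ~ mu
```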