Probability in the Engineering and Informational Sciences, 18, 2004, 473–484. Printed in the U.S.A.

COMPOUND RANDOM VARIABLES

EROL PEKÖZ
School of Management
Boston University
Boston, MA 02215
E-mail: [email protected]

SHELDON M. ROSS
Epstein Department of Industrial and Systems Engineering
University of Southern California
Los Angeles, CA 90089
E-mail: [email protected]

We give a probabilistic proof of an identity concerning the expectation of an arbitrary function of a compound random variable and then use this identity to obtain recursive formulas for the probability mass function of compound random variables when the compounding distribution is Poisson, binomial, negative binomial, hypergeometric, logarithmic, or negative hypergeometric. We then show how to use simulation to efficiently estimate both the probability that a positive compound random variable is greater than a specified constant and the expected amount by which it exceeds that constant.

1. INTRODUCTION AND SUMMARY

Let $X_1, X_2, \ldots$ be a sequence of independent and identically distributed (i.i.d.) positive random variables that are independent of the nonnegative integer-valued random variable $N$. The random variable $S_N = \sum_{i=1}^{N} X_i$ is called a compound random variable. In Section 2, we give a simple probabilistic proof of an identity concerning the expected value of a function of a compound random variable; when the $X_i$ are positive integer-valued, an identity concerning the probability mass function of $S_N$ is obtained as a corollary. In Section 3, we use the latter identity to provide new derivations of the recursive formulas for the probability mass function of $S_N$ when $X_1$ is a positive integer-valued random variable and $N$ has a variety of possible distributions. For other derivations of the applications of Section 3, the reader should see the references. Sections 4 and 5 are concerned with finding efficient simulation techniques to estimate

$$p = P\{S \le c\} \qquad \text{and} \qquad \theta = E[(S - c)^+],$$

where $c$ is a specified constant and the $X_i$ need not be discrete. Because

$$E[(S - c)^+] = E[S - c \mid S > c](1 - p)$$

and

$$E[N]E[X] - c = E[S - c] = E[S - c \mid S > c](1 - p) + E[S - c \mid S \le c]\,p,$$

it follows that estimating $p$ and $\theta$ will also give us estimates of $E[S - c \mid S > c]$ and $E[c - S \mid S \le c]$. Although our major interest is in the case in which the $X_i$ are positive, in Section 6 we show how an effective simulation can be performed when this restriction is removed.

2. THE COMPOUND IDENTITY

Consider the compound random variable

$$S_N = \sum_{i=1}^{N} X_i.$$

Let $M$ be independent of $X_1, X_2, \ldots$ and such that

$$P\{M = n\} = \frac{n P\{N = n\}}{E[N]}, \qquad n \ge 0.$$

The random variable $M$ is called the size-biased version of $N$. (If the interarrival times of a renewal process were distributed according to $N$, then the length of the renewal interval containing a fixed point would be distributed according to $M$.)

Theorem 2.1 (The Compound Identity): For any function $h$,

$$E[S_N h(S_N)] = E[N]\, E[X_1 h(S_M)].$$


Proof:

$$
\begin{aligned}
E[S_N h(S_N)] &= E\left[\sum_{i=1}^{N} X_i h(S_N)\right] \\
&= \sum_{n=0}^{\infty} \sum_{i=1}^{n} E[X_i h(S_N) \mid N = n]\, P\{N = n\} \\
&= \sum_{n=0}^{\infty} \sum_{i=1}^{n} E[X_i h(S_n)]\, P\{N = n\} \\
&= \sum_{n=0}^{\infty} n E[X_1 h(S_n)]\, P\{N = n\} \\
&= E[N] \sum_{n=0}^{\infty} E[X_1 h(S_n)]\, P\{M = n\} \\
&= E[N]\, E[X_1 h(S_M)]. \qquad \blacksquare
\end{aligned}
$$
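As a numerical sanity check, the identity is easy to verify by simulation. The following sketch is ours and purely illustrative: it takes $N$ Poisson, for which the size-biased version satisfies $M - 1 \stackrel{d}{=} N$ (see Subsection 3.1), $X_i$ uniform on $(0,1)$, and $h(x) = x^2$; the two printed averages should agree up to Monte Carlo error.

```python
import math
import random

# Monte Carlo check of E[S_N h(S_N)] = E[N] E[X_1 h(S_M)].
# The choices below are ours, made only for illustration:
# N ~ Poisson(lam), so M - 1 ~ Poisson(lam); X_i ~ Uniform(0,1); h(x) = x^2.
lam, runs = 3.0, 200_000
h = lambda x: x * x

def poisson(lam):
    """Inverse-transform Poisson generator (adequate for small lam)."""
    u, n, term = random.random(), 0, math.exp(-lam)
    cdf = term
    while u > cdf:
        n += 1
        term *= lam / n
        cdf += term
    return n

lhs = rhs = 0.0
for _ in range(runs):
    s = sum(random.random() for _ in range(poisson(lam)))
    lhs += s * h(s)                      # S_N h(S_N)
    m = poisson(lam) + 1                 # size-biased Poisson
    xs = [random.random() for _ in range(m)]
    rhs += lam * xs[0] * h(sum(xs))      # E[N] X_1 h(S_M), with E[N] = lam
print(lhs / runs, rhs / runs)            # the two averages should agree
```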

Corollary 2.1: If $X_1$ is a positive integer-valued random variable with $a_i = P\{X_1 = i\}$, then

$$P\{S_N = k\} = \frac{1}{k}\, E[N] \sum_{i=1}^{k} i a_i P\{S_{M-1} = k - i\}.$$

Proof: For an event $A$, let $I(A)$ equal 1 if $A$ occurs and 0 otherwise. Then, with $h(x) = I(x = k)$, the compound identity yields

$$
\begin{aligned}
P\{S_N = k\} &= \frac{1}{k}\, E[S_N I(S_N = k)] \\
&= \frac{1}{k}\, E[N]\, E[X_1 I(S_M = k)] \\
&= \frac{1}{k}\, E[N] \sum_i E[X_1 I(S_M = k) \mid X_1 = i]\, a_i \\
&= \frac{1}{k}\, E[N] \sum_i i P\{S_M = k \mid X_1 = i\}\, a_i \\
&= \frac{1}{k}\, E[N] \sum_i i P\{S_{M-1} = k - i\}\, a_i. \qquad \blacksquare
\end{aligned}
$$

3. SPECIAL CASES

Suppose that $X_1$ is a positive integer-valued random variable with $a_i = P\{X_1 = i\}$.


3.1. Poisson Case

If

$$P\{N = n\} = \frac{e^{-\lambda} \lambda^n}{n!}, \qquad n \ge 0,$$

then

$$P\{M - 1 = n\} = P\{N = n\}, \qquad n \ge 0.$$

Therefore, the corollary yields the well-known recursion

$$P\{S_N = k\} = \frac{\lambda}{k} \sum_{i=1}^{k} i a_i P\{S_N = k - i\}.$$
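For concreteness, here is a minimal Python sketch of this recursion (the function name and interface are illustrative, not from the original); it starts from $P\{S_N = 0\} = P\{N = 0\} = e^{-\lambda}$ and runs in $O(k_{\max}^2)$ time:

```python
import math

def compound_poisson_pmf(lam, a, kmax):
    """P{S_N = k}, k = 0..kmax, for N ~ Poisson(lam) and a[i] = P{X_1 = i},
    i = 1..len(a)-1 (a[0] is unused since the X_i are positive).
    Implements P{S_N = k} = (lam/k) * sum_i i*a[i]*P{S_N = k-i}."""
    P = [0.0] * (kmax + 1)
    P[0] = math.exp(-lam)          # S_N = 0 if and only if N = 0
    for k in range(1, kmax + 1):
        P[k] = (lam / k) * sum(i * a[i] * P[k - i]
                               for i in range(1, min(k, len(a) - 1) + 1))
    return P

# Example: X_1 equally likely to be 1 or 2, lambda = 10.
# pmf = compound_poisson_pmf(10.0, [0.0, 0.5, 0.5], 40)
```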

3.2. Negative Binomial Case

For a fixed value of $p$, we say that $N$ is an NB($r$) random variable if

$$P\{N = n\} = \binom{n + r - 1}{n} p^r (1 - p)^n, \qquad n \ge 0.$$

Such a random variable can be thought of as the number of failures that occur before a total of $r$ successes have been amassed when each trial is independently a success with probability $p$. If $M$ is the size-biased version of an NB($r$) random variable $N$, then

$$P\{M - 1 = n\} = \frac{n + 1}{r(1 - p)/p} \binom{n + r}{n + 1} p^r (1 - p)^{n+1} = \binom{n + r}{n} p^{r+1} (1 - p)^n;$$

that is, $M - 1$ is an NB($r + 1$) random variable. Now, for $N$ an NB($r$) random variable, let $P_r(k) = P\{S_N = k\}$. The corollary now yields the recursion

$$P_r(k) = \frac{r(1 - p)}{kp} \sum_{i=1}^{k} i a_i P_{r+1}(k - i).$$

For instance, starting with $P_r(0) = p^r$,


the recursion yields

$$
\begin{aligned}
P_r(1) &= \frac{r(1 - p)}{p}\, a_1 P_{r+1}(0) = r p^r (1 - p) a_1, \\
P_r(2) &= \frac{r(1 - p)}{2p} \left[a_1 P_{r+1}(1) + 2 a_2 P_{r+1}(0)\right] \\
&= \frac{r(1 - p)}{2p} \left[a_1^2 (r + 1) p^{r+1} (1 - p) + 2 a_2 p^{r+1}\right], \\
P_r(3) &= \frac{r(1 - p)}{3p} \left[a_1 P_{r+1}(2) + 2 a_2 P_{r+1}(1) + 3 a_3 P_{r+1}(0)\right],
\end{aligned}
$$

and so on.
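Because the recursion expresses $P_r$ in terms of $P_{r+1}$, evaluating $P_r(0), \ldots, P_r(k_{\max})$ requires $P_{r+d}$ only up to argument $k_{\max} - d$. A Python sketch under our own naming:

```python
def compound_negbin_pmf(r, p, a, kmax):
    """P{S_N = k}, k = 0..kmax, for N ~ NB(r) with success probability p
    and a[i] = P{X_1 = i}.  Uses the recursion
    P_r(k) = (r(1-p)/(kp)) * sum_i i*a[i]*P_{r+1}(k-i), with P_r(0) = p^r.
    (Function name and interface are ours.)"""
    # P[d][k] holds P_{r+d}(k); depth d is only needed up to kmax.
    P = [[0.0] * (kmax + 1) for _ in range(kmax + 2)]
    for d in range(kmax + 1, -1, -1):
        P[d][0] = p ** (r + d)                # S_N = 0 iff N = 0
        for k in range(1, kmax + 1 - d):
            s = sum(i * a[i] * P[d + 1][k - i]
                    for i in range(1, min(k, len(a) - 1) + 1))
            P[d][k] = (r + d) * (1 - p) / (k * p) * s
    return P[0]
```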

3.3. Binomial Case

If $N$ is a binomial random variable with parameters $r$ and $p$, then

$$P\{M - 1 = n\} = \frac{n + 1}{rp} \binom{r}{n + 1} p^{n+1} (1 - p)^{r-n-1} = \binom{r - 1}{n} p^n (1 - p)^{r-1-n}, \qquad 0 \le n \le r - 1;$$

that is, $M - 1$ is a binomial random variable with parameters $r - 1$ and $p$. For a fixed $p$, let $P_r(k) = P\{S_N = k\}$. The corollary then yields the recursion

$$P_r(k) = \frac{rp}{k} \sum_{i=1}^{k} i a_i P_{r-1}(k - i).$$
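Here the recursion runs downward in $r$, so it can be computed iteratively from $P_0(k) = I\{k = 0\}$ and $P_m(0) = (1 - p)^m$. A sketch (ours, with illustrative names):

```python
def compound_binomial_pmf(r, p, a, kmax):
    """P{S_N = k}, k = 0..kmax, for N ~ Binomial(r, p) and a[i] = P{X_1 = i},
    via P_m(k) = (mp/k) * sum_i i*a[i]*P_{m-1}(k-i) for m = 1, ..., r."""
    prev = [1.0] + [0.0] * kmax              # P_0: N = 0 forces S_N = 0
    for m in range(1, r + 1):
        cur = [(1 - p) ** m] + [0.0] * kmax  # P_m(0) = P{N = 0} = (1-p)^m
        for k in range(1, kmax + 1):
            cur[k] = (m * p / k) * sum(i * a[i] * prev[k - i]
                                       for i in range(1, min(k, len(a) - 1) + 1))
        prev = cur
    return prev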

3.4. Hypergeometric Case

Let $N = N(w, r)$ be a hypergeometric random variable having the distribution of the number of white balls chosen when a random sample of $r$ is taken from a set of $w$ white and $b$ blue balls; that is,

$$P\{N = n\} = \frac{\binom{w}{n} \binom{b}{r - n}}{\binom{w + b}{r}}.$$


Then, it is straightforward to check that

$$P\{M - 1 = n\} = \frac{\binom{w - 1}{n} \binom{b}{r - 1 - n}}{\binom{w - 1 + b}{r - 1}};$$

that is, $M - 1$ has the same distribution as $N$, with the modification that $w$ becomes $w - 1$ and $r$ becomes $r - 1$. Letting $P_{w,r}(k) = P\{S_{N(w,r)} = k\}$, we obtain

$$P_{w,r}(k) = \frac{rw}{k(w + b)} \sum_{i=1}^{k} i a_i P_{w-1,\,r-1}(k - i).$$

This yields

$$P_{w,r}(1) = \frac{rw}{w + b}\, a_1 P_{w-1,\,r-1}(0) = \frac{rw}{w + b}\, a_1 \frac{\binom{b}{r - 1}}{\binom{w + b - 1}{r - 1}},$$

and so on. (We are using the convention that $\binom{n}{k} = 0$ if either $k < 0$ or $k > n$.)
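Since each application of the recursion replaces $(w, r)$ by $(w - 1, r - 1)$, a memoized implementation is natural; the base case is $P_{w,r}(0) = P\{N(w, r) = 0\} = \binom{b}{r}/\binom{w+b}{r}$. A sketch under our own naming, assuming $r \le w + b$:

```python
from functools import lru_cache
from math import comb

def compound_hypergeom_pmf(w, b, r, a, kmax):
    """P{S_N = k}, k = 0..kmax, for N = N(w, r), the number of white balls
    in a sample of r from w white and b blue balls, via
    P_{w,r}(k) = (rw/(k(w+b))) * sum_i i*a[i]*P_{w-1,r-1}(k-i).
    (Function name and interface are ours.)  b stays fixed throughout."""

    @lru_cache(maxsize=None)
    def P(w, r, k):
        if w == 0 or r == 0:          # no white ball can be sampled: N = 0
            return 1.0 if k == 0 else 0.0
        if k == 0:                    # S_N = 0 iff N = 0 (an all-blue sample)
            return comb(b, r) / comb(w + b, r)
        s = sum(i * a[i] * P(w - 1, r - 1, k - i)
                for i in range(1, min(k, len(a) - 1) + 1))
        return r * w / (k * (w + b)) * s

    return [P(w, r, k) for k in range(kmax + 1)]
```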

3.5. The Logarithmic Count Distribution

Suppose that for $0 < \beta < 1$,

$$P\{N = n\} = C\, \frac{\beta^n}{n}, \qquad n = 1, 2, \ldots,$$

where $C = -1/\ln(1 - \beta)$. Then,

$$P\{M - 1 = n\} = \beta^n (1 - \beta), \qquad n \ge 0;$$

that is, $M - 1$ has the negative binomial distribution of Subsection 3.2 with $r = 1$ and $p = 1 - \beta$. Thus, the recursion of Subsection 3.2 and the corollary yield the probabilities $P\{S_N = k\}$.
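Concretely, combining the corollary (with $E[N] = C\beta/(1 - \beta)$) with the NB(1) recursion gives the compound probabilities directly; the sketch below reuses the `compound_negbin_pmf` sketch given after Subsection 3.2 (both names are ours):

```python
from math import log

def compound_logseries_pmf(beta, a, kmax):
    """P{S_N = k} for the logarithmic count distribution, via the corollary:
    P{S_N = k} = (E[N]/k) * sum_i i*a[i]*P{S_{M-1} = k-i}, where S_{M-1}
    is compound NB(1) with p = 1 - beta.  (Interface is ours.)"""
    C = -1.0 / log(1.0 - beta)
    EN = C * beta / (1.0 - beta)
    Q = compound_negbin_pmf(1, 1.0 - beta, a, kmax)   # P{S_{M-1} = j}
    P = [0.0] * (kmax + 1)        # N >= 1 and X_1 >= 1, so P{S_N = 0} = 0
    for k in range(1, kmax + 1):
        P[k] = (EN / k) * sum(i * a[i] * Q[k - i]
                              for i in range(1, min(k, len(a) - 1) + 1))
    return P
```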

3.6. The Negative Hypergeometric Distribution

Suppose that $N$ has the distribution of the number of blue balls chosen before a total of $r$ white balls have been amassed when balls are randomly removed from an urn containing $w$ white and $b$ blue balls; that is,

$$P\{N = n\} = \frac{\binom{b}{n} \binom{w}{r - 1}}{\binom{w + b}{n + r - 1}} \cdot \frac{w - r + 1}{w + b - n - r + 1}.$$

Using $E[N] = rb/(w + 1)$, we obtain

$$P\{M - 1 = n\} = \frac{\binom{b - 1}{n} \binom{w + 1}{r + 1 - n}}{\binom{w + b}{r + 1}};$$

that is, $M - 1$ has a hypergeometric distribution, implying that the probabilities $P\{S_{M-1} = j\}$ can be obtained from the recursion of Subsection 3.4. Applying the corollary then gives the probabilities $P\{S_N = k\}$.

4. ESTIMATING $P\{S \le c\}$

The raw simulation approach to estimating $p = P\{S \le c\}$ would first generate the value of $N$, say $N = n$, then generate the values of $X_1, \ldots, X_n$ and use them to determine the value of the raw simulation estimator:

$$I = \begin{cases} 1 & \text{if } \sum_{i=1}^{N} X_i \le c \\ 0 & \text{otherwise.} \end{cases} \tag{1}$$

The average value of $I$ over many such runs would then be the estimator of $p$. We can improve upon the preceding by a conditional expectation approach that starts by generating the values of the $X_i$ in sequence, stopping when the sum of the generated values exceeds $c$. Let $M$ denote the number that are needed; that is,

$$M = \min\left\{n : \sum_{i=1}^{n} X_i > c\right\}.$$

If the generated value of $M$ is $m$, then we use $P\{N < m\}$ as the estimate of $p$ from this run. To see that this results in an estimator having a smaller variance than the raw simulation estimator $I$, note that because the $X_i$ are positive, $I = 1 \iff N < M$. Hence,

$$E[I \mid M] = P\{N < M \mid M\}. \tag{2}$$

Now,

$$P\{N < M \mid M = m\} = P\{N < m \mid M = m\} = P\{N < m\},$$

where the final equality used the independence of $N$ and $M$. Consequently, if the value of $M$ obtained from the simulation is $M = m$, then the value of $E[I \mid M]$ obtained is $P\{N < m\}$.
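A minimal Python sketch of this conditional expectation estimator (names and interface are ours; `N_cdf(j)` is assumed to return $P\{N \le j\}$):

```python
def cond_expectation_estimate(sample_x, N_cdf, c, runs=10_000):
    """Technique 2: generate X_1, X_2, ... until their sum exceeds c,
    let M be the number needed, and score P{N < M} = N_cdf(M - 1)."""
    total = 0.0
    for _ in range(runs):
        s, m = 0.0, 0
        while s <= c:               # stop as soon as the sum exceeds c
            s += sample_x()
            m += 1
        total += N_cdf(m - 1)       # E[I | M = m] = P{N < m}
    return total / runs

# Example matching the experiment of Subsection 4.2 below:
# X ~ Uniform(0,1), N ~ Poisson(10).
# import random, math
# def pois_cdf(j, lam=10.0):
#     term = math.exp(-lam); cdf = term
#     for n in range(1, j + 1):
#         term *= lam / n; cdf += term
#     return cdf
# print(cond_expectation_estimate(random.random, pois_cdf, c=7.0))
```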


The preceding conditional expectation estimator can be further improved by using a control variable. Let $\mu = E[X_i]$, and define the zero-mean random variable

$$Y = \sum_{i=1}^{M} (X_i - \mu). \tag{3}$$

Because $Y$ and the conditional expectation estimator $P\{N < M \mid M\}$ are (strongly) negatively correlated, $Y$ should make an effective control variable.
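The controlled estimator can be implemented by fitting the control coefficient from the simulated pairs themselves; note that $E[Y] = 0$ follows from Wald's equation. A sketch (ours, with illustrative names):

```python
def control_variate_estimate(sample_x, N_cdf, mu, c, runs=10_000):
    """Technique 3: the conditional expectation estimator with the control
    variable Y = sum_{i=1}^{M} (X_i - mu), mu = E[X].  The coefficient
    multiplying Y is estimated from the same runs."""
    zs, ys = [], []
    for _ in range(runs):
        s, m = 0.0, 0
        while s <= c:
            s += sample_x()
            m += 1
        zs.append(N_cdf(m - 1))        # conditional expectation estimator
        ys.append(s - m * mu)          # control variable; E[Y] = 0
    zbar = sum(zs) / runs
    ybar = sum(ys) / runs
    cov = sum((z - zbar) * (y - ybar) for z, y in zip(zs, ys)) / (runs - 1)
    var = sum((y - ybar) ** 2 for y in ys) / (runs - 1)
    return zbar - (cov / var) * ybar   # controlled estimate of p
```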

4.1. Improving the Conditional Expectation Estimator

Let $M$ be defined as earlier and write

$$P\{S \le c\} = \sum_j P\{M > j\}\, P\{N = j\}.$$

The conditional expectation estimator is obtained from the preceding by generating $M$ and using $I\{M > j\}$ as the estimator of $P\{M > j\}$. We now show how to obtain a more efficient simulation estimator of $P\{M > j\}$. Let $F$ denote the distribution function of the $X_i$ and write

$$P\{M > j\} = P\{M > j \mid X_1 \le c\}\, F(c).$$

If we now simulate $X_1$ conditional on the event that it is less than or equal to $c$, then for this value of $X_1$, the estimator $P\{M > j \mid X_1\} F(c)$ is an unbiased estimator of $P\{M > j\}$ having a smaller variance than $I\{M > j\}$. Let $x_1 \le c$ be the generated value. For $j > 1$, we have

$$P\{M > j \mid X_1 = x_1\}\, F(c) = P\{M > j \mid X_1 = x_1,\, X_2 \le c - x_1\}\, F(c - x_1)\, F(c).$$

Hence, generating $X_2$ conditional on the event that $X_2 \le c - x_1$ gives, when this generated value is $x_2$, the estimate

$$P\{M > j \mid X_1 = x_1,\, X_2 = x_2\}\, F(c - x_1)\, F(c).$$

By continuing in this manner, it follows that we can obtain, for any desired value $n$, estimates of $P\{M > j\}$, $j = 1, \ldots, n$. We can then obtain estimators of the probabilities $P\{M > j\}$, $j > n$, by switching to an ordinary simulation. With $e_j$ denoting the estimator of $P\{M > j\}$, we obtain their values as follows:

1. $e_0 = 1$, $s = 0$.
2. $I = 1$.
3. $e_I = F(c - s)\, e_{I-1}$.
4. Generate $X$ conditional on $X \le c - s$. Let its value be $X = x$.
5. $s \leftarrow s + x$, $I \leftarrow I + 1$.
6. If $I \le n$, go to 3.
7. Generate $X_1, \ldots$ until their sum exceeds $c - s$. Let $R$ denote the number needed; that is, $R = \min\{k : X_1 + \cdots + X_k > c - s\}$.
8. $e_{n+k} = e_n I\{R > k\}$, $k \ge 1$.

The estimator of $P\{S \le c\}$ from this run is

$$\text{EST} = \sum_j e_j P\{N = j\} \tag{4}$$

and its average over many runs is the overall estimate.
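One run of this eight-step scheme in Python (all names are ours; `F` is the distribution function of the $X_i$, `sample_cond(t)` draws $X$ conditional on $X \le t$, and the sum over $j$ is truncated at `jmax`):

```python
def technique4_run(F, sample_x, sample_cond, N_pmf, c, n, jmax):
    """One run of the estimator (4): e[j] estimates P{M > j}, and the
    run returns sum_j e[j] * P{N = j}, truncated at jmax (jmax >= n)."""
    e = [0.0] * (jmax + 1)
    e[0], s = 1.0, 0.0                   # step 1; M >= 1, so e_0 = 1
    for I in range(1, n + 1):            # steps 2-6
        e[I] = F(c - s) * e[I - 1]
        s += sample_cond(c - s)
    t, R = 0.0, 0                        # step 7: ordinary simulation
    while t <= c - s:
        t += sample_x()
        R += 1
    for k in range(1, jmax - n + 1):     # step 8
        e[n + k] = e[n] if R > k else 0.0
    return sum(e[j] * N_pmf(j) for j in range(jmax + 1))

# Uniform(0,1) X's make the conditional generation a simple scaling:
# F = lambda t: min(max(t, 0.0), 1.0)
# sample_cond = lambda t: random.random() * min(t, 1.0)
```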

4.2. A Simulation Experiment

In this subsection, we give the numerical results of a simulation study done to evaluate the performance of techniques 1–4. We let the $X_i$ be independent and identically distributed (i.i.d.) uniform $(0,1)$ random variables and let $N$ be Poisson with mean 10. Table 1 summarizes the means and standard deviations of the estimators for different values of $c$. Ten thousand replications were done for each value of $c$ to estimate $P(\sum_{i=1}^{N} X_i \le c)$. Technique 1 is the raw simulation method; technique 2 is the conditional expectation method; technique 3 is the conditional expectation method along with the control variable (3); technique 4 uses the estimator (4). The raw estimator (technique 1), as expected, performs poorly, and the other estimators perform much better.

Next, we let the $X_i$ be i.i.d. exponential random variables with mean 1 and, again, let $N$ be Poisson with mean 10. Table 2 summarizes the means and standard deviations of the estimators for different values of $c$. Ten thousand replications were done for each value of $c$ to estimate $P(\sum_{i=1}^{N} X_i \le c)$.

Table 1. Means and Standard Deviations of the Estimators for Different Values of c

c    Statistic   Technique 1    Technique 2      Technique 3      Technique 4
5    Mean        0.5293         0.6333901        0.6332048        0.6332126
     SD          0.4991657      0.1828919        0.06064626       0.1654034
7    Mean        0.8575         0.9096527        0.909443         0.9094954
     SD          0.3495797      0.08442904       0.04381291       0.07792921
10   Mean        0.9924         0.9960367        0.9960445        0.9960576
     SD          0.08685041     0.007884455      0.006071804      0.007213068
15   Mean        0.99999        0.9999976        0.9999977        0.9999977
     SD          0.01           0.00001479114    0.00001409331    0.00001230383


Table 2. Means and Standard Deviations of the Estimators for Different Values of c

c    Statistic   Technique 1    Technique 2      Technique 4
15   Mean        0.8638         0.9052721        0.9054672
     SD          0.343018       0.1489983        0.1284144
20   Mean        0.9739         0.983253         0.9832735
     SD          0.1594407      0.05296109       0.04602155
25   Mean        0.9976         0.9977583        0.9977649
     SD          0.04893342     0.01600277       0.01322325
30   Mean        0.9996         0.999749         0.9997479
     SD          0.019997       0.003714619      0.003133148

Thus, based on this small experiment, it appears that the reduction in variance effected by technique 4 over technique 2 is not worth the additional time that it takes to do a simulation run. Moreover, technique 3, which does not require much more time than either technique 1 or technique 2, usually gives an even smaller variance than technique 4.

5. ESTIMATING $\theta = E[(S - c)^+]$

Start by letting $S_j = \sum_{i=1}^{j} X_i$ and note that

$$\theta = E\left[\sum_j (S_j - c)^+ P\{N = j\}\right].$$

To estimate $\theta$, follow the procedure of (2) and generate the sequence $X_1, \ldots$, stopping at $M = \min\{j : S_j > c\}$. Let $A = S_M - c$ and use the estimator

$$
\begin{aligned}
E\left[\sum_j (S_j - c)^+ P\{N = j\} \,\Big|\, M, A\right] &= \sum_{j \ge M} \left(A + (j - M)E[X]\right) P\{N = j\} \\
&= (A - M E[X]) \sum_{j \ge M} P\{N = j\} + E[X]\left(E[N] - \sum_{j < M} j P\{N = j\}\right);
\end{aligned}
$$


that is, if the generated values of $M$ and $A$ are $m$ and $a$, then the estimate of $\theta$ from that run is

$$(a - m E[X])\, P\{N \ge m\} + E[X]\left(E[N] - \sum_{j < m} j P\{N = j\}\right).$$
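In code, a run generates $(m, a)$ exactly as in technique 2 and scores the displayed quantity; the sketch below is ours (the names and the truncation level `jmax` are illustrative, and `N_pmf(j)` is assumed to return $P\{N = j\}$):

```python
def theta_estimate(sample_x, N_pmf, EX, EN, c, runs=10_000, jmax=1000):
    """Estimates theta = E[(S - c)^+]: generate X's until S_M > c, set
    A = S_M - c, and score
    (A - M*EX) * P{N >= M} + EX * (EN - sum_{j<M} j*P{N = j})."""
    total = 0.0
    for _ in range(runs):
        s, m = 0.0, 0
        while s <= c:
            s += sample_x()
            m += 1
        a = s - c
        tail = sum(N_pmf(j) for j in range(m, jmax + 1))  # P{N >= m}, truncated
        head = sum(j * N_pmf(j) for j in range(m))        # sum_{j<m} j P{N = j}
        total += (a - m * EX) * tail + EX * (EN - head)
    return total / runs
```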

6. WHEN THE $X_i$ ARE UNCONSTRAINED IN SIGN

When the $X_i$ are not required to be positive, our previous methods no longer apply. We now present an approach for the general case. To estimate $p$, note that for a specified integer $r$,

$$P\{S \le c\} = \sum_{j=0}^{r} P\{S_j \le c\}\, P\{N = j\} + P\{S \le c \mid N > r\}\, P\{N > r\}.$$

Our approach is to choose a value $r$ and generate the value of $N$ conditional on it exceeding $r$; if this generated value is $g$, then simulate the values of $S_1, \ldots, S_r$ and $S_g$. The estimate of $p$ from this run is

$$\hat{p} = \sum_{j=0}^{r} I(S_j \le c)\, P\{N = j\} + I(S_g \le c)\, P\{N > r\}.$$

The larger the value of $r$ chosen, the smaller the variance of this estimator. (When $r = 0$, it reduces to the raw simulation estimator.) Similarly, we can estimate $\theta$ by using

$$\theta = \sum_{j=0}^{r} E[(S_j - c)^+]\, P\{N = j\} + E[(S - c)^+ \mid N > r]\, P\{N > r\}.$$

Hence, using the same data generated to estimate $p$, the estimate of $\theta$ is

$$\hat{\theta} = \sum_{j=0}^{r} (S_j - c)^+\, P\{N = j\} + (S_g - c)^+\, P\{N > r\}.$$
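A sketch of one run of both estimators (ours; `sample_N_gt_r()` is assumed to draw $N$ conditional on $N > r$, e.g., by rejection):

```python
def general_sign_run(sample_x, sample_N_gt_r, N_pmf, c, r):
    """One run of the Section 6 estimators for X_i of arbitrary sign.
    Returns the pair (p_hat, theta_hat) from this run."""
    g = sample_N_gt_r()                      # N conditioned on N > r
    S = [0.0]                                # partial sums S_0, ..., S_g
    for _ in range(g):
        S.append(S[-1] + sample_x())
    p_gt_r = 1.0 - sum(N_pmf(j) for j in range(r + 1))   # P{N > r}
    p_hat = sum((S[j] <= c) * N_pmf(j) for j in range(r + 1)) \
        + (S[g] <= c) * p_gt_r
    theta_hat = sum(max(S[j] - c, 0.0) * N_pmf(j) for j in range(r + 1)) \
        + max(S[g] - c, 0.0) * p_gt_r
    return p_hat, theta_hat
```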

Acknowledgment

This research was supported by the National Science Foundation grant ECS-0224779 with the University of California.

References

1. Chan, B. (1982). Recursive formulas for discrete distributions. Insurance: Mathematics and Economics 1(4): 241–243.
2. Panjer, H.H. (1981). Recursive evaluation of a family of compound distributions. ASTIN Bulletin 12: 22–26.
3. Panjer, H.H. & Willmot, G.E. (1982). Recursions for compound distributions. ASTIN Bulletin 13: 1–11.
4. Ross, S. (2002). Simulation, 3rd ed. San Diego, CA: Academic Press.
5. Schröter, K.J. (1990). On a family of counting distributions and recursions for related compound distributions. Scandinavian Actuarial Journal 3/4: 161–175.
6. Sundt, B. (1992). On some extensions of Panjer's class of counting distributions. ASTIN Bulletin 22: 61–80.
7. Sundt, B. & Jewell, W.S. (1981). Further results on recursive evaluation of compound distributions. ASTIN Bulletin 12: 27–39.
8. Willmot, G.E. (1993). On recursive evaluation of mixed Poisson probabilities and related quantities. Scandinavian Actuarial Journal 2: 114–133.
9. Willmot, G.E. & Panjer, H.H. (1987). Difference equation approaches in evaluation of compound distributions. Insurance: Mathematics and Economics 6: 43–56.