Normal Random Variables and Probability

An Undergraduate Introduction to Financial Mathematics

J. Robert Buchanan

2010

Discrete vs. Continuous Random Variables

Think about the probability of selecting X from the interval [0, 1] when

- X ∈ {0, 1}
- X ∈ {k/10 : k = 0, 1, . . . , 10}
- X ∈ {k/n : k = 0, 1, . . . , n}

Question: what happens to the last probability as n → ∞?

Continuous Random Variables

Definition
A random variable X has a continuous distribution (with probability distribution function, or probability density function, f) if there exists a non-negative function f : R → R such that for any interval [a, b],

P(a ≤ X ≤ b) = ∫_a^b f(x) dx.

The function f must, in addition to satisfying f(x) ≥ 0, have the following property:

∫_{−∞}^{∞} f(x) dx = 1.

Area Under the PDF

[Figure: the graph of a PDF f(x); the shaded area under the curve between x = a and x = b equals P(a ≤ X ≤ b).]

Uniformly Distributed Continuous Random Variables

Definition
A continuous random variable X is uniformly distributed in the interval [a, b] (with b > a) if the probability that X belongs to any subinterval of [a, b] is equal to the length of the subinterval divided by b − a.

Question: Assuming the PDF vanishes outside of [a, b] and is constant on [a, b], what is the PDF?

Example

Example
Random variable X is uniformly distributed in the interval [−5, 5]. Find the probability that −1 ≤ X ≤ 2.

P(−1 ≤ X ≤ 2) = (2 − (−1))/(5 − (−5)) = 3/10
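For readers who want to experiment, the same probability can be reproduced with SciPy's uniform distribution; a minimal sketch, assuming SciPy is available (SciPy parametrizes the uniform by loc = a and scale = b − a):

    # Check P(-1 <= X <= 2) for X uniform on [-5, 5]; the answer above is 3/10.
    from scipy.stats import uniform

    X = uniform(loc=-5, scale=10)   # uniform on [loc, loc + scale] = [-5, 5]
    print(X.cdf(2) - X.cdf(-1))     # 0.3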

Expected Value

Definition
The expected value or mean of a continuous random variable X with probability density function f(x) is

E[X] = ∫_{−∞}^{∞} x f(x) dx.

Example

Example
Find the expected value of X if X is a uniformly distributed random variable on the interval [−10, 80].

E[X] = ∫_{−∞}^{∞} x f(x) dx
     = ∫_{−10}^{80} (x/90) dx
     = x²/180 |_{−10}^{80}
     = 6400/180 − 100/180
     = 35

Expected Value of a Function

Definition
The expected value of a function g of a continuously distributed random variable X which has probability distribution function f is defined as

E[g(X)] = ∫_{−∞}^{∞} g(x) f(x) dx,

provided the improper integral converges absolutely, i.e., E[g(X)] is defined if and only if

∫_{−∞}^{∞} |g(x)| f(x) dx < ∞.

Example

Example
Find the expected value of X² if X is continuously distributed on [0, ∞) with probability density function f(x) = e^{−x}.

E[X²] = ∫_0^∞ x² e^{−x} dx
      = lim_{M→∞} ∫_0^M x² e^{−x} dx
      = lim_{M→∞} [−(x² + 2x + 2) e^{−x}]_0^M
      = lim_{M→∞} (2 − (M² + 2M + 2) e^{−M})
      = 2
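The improper integral can also be confirmed numerically, e.g. with SciPy's quadrature routine; a quick sketch, assuming SciPy is available:

    # E[X^2] for the density f(x) = e^{-x} on [0, inf); the limit above gives 2.
    import numpy as np
    from scipy.integrate import quad

    value, error = quad(lambda x: x**2 * np.exp(-x), 0, np.inf)
    print(value)   # 2.0, up to quadrature error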

Joint and Marginal Distributions

Definition
A joint probability distribution for a pair of random variables, X and Y, is a non-negative function f(x, y) for which

∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x, y) dx dy = 1.

Definition
If X and Y are continuous random variables with joint distribution f(x, y) then the marginal distribution for X is defined as the function

f_X(x) = ∫_{−∞}^{∞} f(x, y) dy.

Remark: a similar definition may be stated for the marginal distribution for Y.

Independence of Jointly Distributed RVs

Definition
Two continuous random variables X and Y are independent if and only if the joint probability distribution function factors into the product of the marginal distributions of X and Y. In other words, X and Y are independent if and only if f(x, y) = f_X(x) f_Y(y) for all real numbers x and y.

Example (1 of 2)

Example
Consider the jointly distributed random variables (X, Y) ∈ [0, ∞) × [−2, 2] whose distribution is the function f(x, y) = 1/(4e^x). Find the mean of X + Y.

Example (2 of 2)

E[X + Y] = ∫_0^∞ ∫_{−2}^{2} (x + y) (1/(4e^x)) dy dx
         = ∫_0^∞ (1/4) e^{−x} ∫_{−2}^{2} (x + y) dy dx
         = ∫_0^∞ (1/4) e^{−x} (4x) dx
         = ∫_0^∞ x e^{−x} dx
         = lim_{M→∞} ∫_0^M x e^{−x} dx
         = lim_{M→∞} (1 − M e^{−M} − e^{−M})
         = 1
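A numerical double integral gives the same mean; a sketch assuming SciPy is available (note that dblquad integrates over the inner variable y first):

    # E[X + Y] for f(x, y) = 1/(4 e^x) on [0, inf) x [-2, 2]; the answer is 1.
    import numpy as np
    from scipy.integrate import dblquad

    mean, error = dblquad(lambda y, x: (x + y) * np.exp(-x) / 4,
                          0, np.inf,      # x limits
                          lambda x: -2,   # lower y limit
                          lambda x: 2)    # upper y limit
    print(mean)   # 1.0, up to quadrature error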

Properties of the Expected Value

Theorem
If X1, X2, . . . , Xk are continuous random variables with joint probability distribution f(x1, x2, . . . , xk) then

E[X1 + X2 + · · · + Xk] = E[X1] + E[X2] + · · · + E[Xk].

Theorem
Let X1, X2, . . . , Xk be pairwise independent random variables with joint distribution f(x1, x2, . . . , xk), then

E[X1 X2 · · · Xk] = E[X1] E[X2] · · · E[Xk].

Variance and Standard Deviation

Definition
If X is a continuously distributed random variable with probability density function f(x), the variance of X is defined as

Var(X) = E[(X − µ)²] = ∫_{−∞}^{∞} (x − µ)² f(x) dx,

where µ = E[X]. The standard deviation of X is σ(X) = √Var(X).

Theorem
Let X be a random variable with probability distribution f and mean µ, then Var(X) = E[X²] − µ².

Example

Example
Suppose X is continuously distributed on [0, ∞) with probability density function f(x) = e^{−x}. Find Var(X).

Var(X) = E[X²] − (E[X])²
       = ∫_0^∞ x² e^{−x} dx − (∫_0^∞ x e^{−x} dx)²
       = 2 − (1)²
       = 1

Properties of Variance

Theorem
Let X be a continuous random variable with probability distribution f(x) and let a, b ∈ R, then

Var(aX + b) = a² Var(X).

Theorem
Let X1, X2, . . . , Xk be pairwise independent continuous random variables with joint probability distribution f(x1, x2, . . . , xk), then

Var(X1 + X2 + · · · + Xk) = Var(X1) + Var(X2) + · · · + Var(Xk).

Normal Random Variable

Assumption: any characteristic of an object subject to a large number of independently acting forces typically takes on a normal distribution.

We will develop the normal probability density function from the probability function for the binomial random variable:

P(X = x) = n!/(x!(n − x)!) p^x (1 − p)^{n−x}

Overview of Derivation

Thought Experiment: Imagine standing at the origin of the number line and at each tick of a clock taking a step to the left or the right. In the long run where will you stand?

Assumptions:
1. n steps/ticks,
2. the random walk takes place during the time interval [0, t], which implies a “tick” lasts ∆t = t/n,
3. on each tick move a distance ∆x > 0,
4. n(∆x)² = 2kt,
5. the probability of moving left/right is 1/2,
6. all steps are independent.

Take a Few Steps

Suppose r out of n steps (0 ≤ r ≤ n) have been to the right.

Question: Where are you?

(r − (n − r))∆x = (2r − n)∆x = m∆x

Question: What is the probability of standing there?

P(X = m∆x) = P(X = (2r − n)∆x)
           = C(n, r) (1/2)^r (1/2)^{n−r}
           = n!/(r!(n − r)!) (1/2)^n
           = n!/[((n + m)/2)! ((n − m)/2)!] (1/2)^n
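Since the last expression is just a binomial probability, it can be evaluated exactly. A small check using only the Python standard library (the values n = 10, r = 7 are arbitrary illustrative choices):

    # P(X = m*dx) with n steps, r of them to the right, and m = 2r - n.
    from math import comb

    n, r = 10, 7
    m = 2 * r - n
    print(m, comb(n, r) / 2**n)   # 4 0.1171875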

Bernoulli Steps

Each step is a Bernoulli experiment with outcomes ∆x and −∆x, each with probability 1/2.

Questions:

What is the expected value of a single step?

E[X] = (1/2)(∆x) + (1/2)(−∆x) = 0

What is the variance in the outcomes?

Var(X) = E[X²] − (E[X])² = (∆x)²

The Sum of Bernoulli Steps

Questions: after n steps,

What is the expected value of where you stand?

E[Σ_{i=1}^n Xi] = n E[X] = 0

What is the variance in final position?

Var(Σ_{i=1}^n Xi) = n Var(X) = n(∆x)²

Stirling’s Formula

n! ≈ √(2π) e^{−n} n^{n+1/2}

Replace all the factorials with Stirling’s Formula:

n!/[((n + m)/2)! ((n − m)/2)!] (1/2)^n
  ≈ √(2π) e^{−n} n^{n+1/2} (1/2)^n / [√(2π) e^{−(n+m)/2} ((n + m)/2)^{(n+m+1)/2} · √(2π) e^{−(n−m)/2} ((n − m)/2)^{(n−m+1)/2}]
  = (2/√(2nπ)) (1 − m²/n²)^{−(n+1)/2} (1 + m/n)^{−m/2} (1 − m/n)^{m/2}
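Stirling’s Formula is already quite accurate for modest n, as a quick computation suggests (standard library only):

    # Compare n! with sqrt(2*pi) * e^{-n} * n^{n + 1/2} for a few n.
    from math import exp, factorial, pi, sqrt

    for n in (5, 10, 20):
        stirling = sqrt(2 * pi) * exp(-n) * n**(n + 0.5)
        print(n, factorial(n), stirling / factorial(n))   # ratio -> 1 as n grows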

Further Simplification

Since m = x/∆x and n = t/∆t,

(2/√(2nπ)) (1 − m²/n²)^{−(n+1)/2} (1 + m/n)^{−m/2} (1 − m/n)^{m/2}
  = (2√(∆t)/√(2πt)) [1 − (x∆t/(t∆x))²]^{−(1+t/∆t)/2} (1 + x∆t/(t∆x))^{−x/(2∆x)} (1 − x∆t/(t∆x))^{x/(2∆x)}
  = (∆x/√(kπt)) [1 − (x∆x/(2kt))²]^{−(kt/(∆x)² + 1/2)} (1 + x∆x/(2kt))^{−x/(2∆x)} (1 − x∆x/(2kt))^{x/(2∆x)},

since (∆x)² = 2k∆t.

Passing to the Limit

As we take the limit with ∆x → 0, the probability of standing at exactly one, specific location becomes 0. Instead we must change our thinking and ask for

P((m − 1)∆x < X < (m + 1)∆x) ≈ 2(∆x) f(x, t).

f(x, t) = lim_{∆x→0} (1/(2√(kπt))) [1 − (x∆x/(2kt))²]^{−(kt/(∆x)² + 1/2)} (1 + x∆x/(2kt))^{−x/(2∆x)} (1 − x∆x/(2kt))^{x/(2∆x)}
        = (1/(2√(kπt))) e^{x²/(4kt)} e^{−x²/(4kt)} e^{−x²/(4kt)}
        = (1/(2√(kπt))) e^{−x²/(4kt)}

Is f(x, t) a PDF?

Suppose ∫_{−∞}^{∞} (1/(2√(kπt))) e^{−x²/(4kt)} dx = S, then

S² = ∫_{−∞}^{∞} (1/(2√(kπt))) e^{−x²/(4kt)} dx · ∫_{−∞}^{∞} (1/(2√(kπt))) e^{−y²/(4kt)} dy
   = (1/(4kπt)) ∫_{−∞}^{∞} ∫_{−∞}^{∞} e^{−(x²+y²)/(4kt)} dx dy
   = (1/(4kπt)) ∫_0^{2π} ∫_0^∞ r e^{−r²/(4kt)} dr dθ
   = 1.

Since S > 0, it follows that S = 1, so f(·, t) is a PDF.
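The normalization can also be checked numerically for particular parameter values; a sketch assuming SciPy is available (k = 0.5 and t = 2.0 are arbitrary illustrative choices):

    # Verify that f(x, t) integrates to 1 over the real line.
    import numpy as np
    from scipy.integrate import quad

    k, t = 0.5, 2.0
    f = lambda x: np.exp(-x**2 / (4 * k * t)) / (2 * np.sqrt(k * np.pi * t))
    total, error = quad(f, -np.inf, np.inf)
    print(total)   # 1.0, up to quadrature error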

Surface Plot

The graph of the PDF resembles:

[Figure: surface plot of z = f(x, t) over the (x, t) plane; the ridge centered at x = 0 spreads and flattens as t increases.]

The Bell Curve

For a fixed value of t, the graph of the PDF resembles:

[Figure: the graph of y = f(x, t) for fixed t, a bell-shaped curve centered at x = 0.]

Expected Value and Variance

If X is a continuously distributed random variable with PDF

f(x, t) = (1/(2√(kπt))) e^{−x²/(4kt)}

then

E[X] = ∫_{−∞}^{∞} (x/(2√(kπt))) e^{−x²/(4kt)} dx = 0

and

Var(X) = ∫_{−∞}^{∞} (x²/(2√(kπt))) e^{−x²/(4kt)} dx − (E[X])² = 2kt,

and thus 2kt = σ², and we express the PDF for a normally distributed random variable with mean µ and variance σ² as

f(x) = (1/(σ√(2π))) e^{−(x−µ)²/(2σ²)}.

Standard Normal Distribution

When µ = 0 and σ = 1 the PDF

f(x) = (1/√(2π)) e^{−x²/2}

is called the standard normal distribution.

The cumulative distribution function φ(x) is defined as

φ(x) = P(X < x) = ∫_{−∞}^{x} (1/√(2π)) e^{−t²/2} dt.

Change of Variable

Theorem If X is a normally distributed random variable with expected value µ and variance σ 2 , then Z = (X − µ)/σ is normally distributed with an expected value of zero and a variance of one.

Central Limit Theorem (1 of 2)

Suppose the random variables X1, X2, . . . , Xn

1. are pairwise independent but not necessarily identically distributed,
2. have means µ1, µ2, . . . , µn and variances σ1², σ2², . . . , σn²,

and we define a new random variable Yn as

Yn = Σ_{i=1}^n (Xi − µi) / √(Σ_{i=1}^n σi²).

A Central Limit Theorem due to Liapounov implies that Yn has approximately the standard normal distribution.

Central Limit Theorem (2 of 2)

Theorem
Suppose that the infinite collection {Xi}_{i=1}^∞ of random variables are pairwise independent and that for each i ∈ N we have E[|Xi − µi|³] < ∞. If in addition,

lim_{n→∞} Σ_{i=1}^n E[|Xi − µi|³] / (Σ_{i=1}^n σi²)^{3/2} = 0,

then for any x ∈ R,

lim_{n→∞} P(Yn ≤ x) = φ(x),

where the random variable Yn is defined as above.
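A simulation makes the theorem concrete. In this sketch (assuming NumPy is available) the Xi are independent uniform random variables on [0, i], so they are not identically distributed, yet Yn behaves like a standard normal:

    # Simulate Y_n for X_i ~ Uniform[0, i], i = 1..n, and inspect its moments.
    import numpy as np

    rng = np.random.default_rng(0)
    n, trials = 50, 100_000
    widths = np.arange(1.0, n + 1)       # X_i ~ Uniform[0, i]
    mus = widths / 2                     # E[X_i] = i/2
    variances = widths**2 / 12           # Var(X_i) = i^2/12

    X = rng.uniform(0, widths, size=(trials, n))
    Y = (X - mus).sum(axis=1) / np.sqrt(variances.sum())
    print(Y.mean(), Y.std())             # approximately 0 and 1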

Example (1 of 2)

Example
Suppose the annual snowfall in Millersville, PA has mean 14.6 inches and standard deviation 3.2 inches and is normally distributed. Snowfall amounts in different years are independent. What is the probability that the sum of the snowfall amounts in the next two years will exceed 30 inches?

Example (2 of 2)

Solution: If X1 and X2 are independent random variables representing the snowfall received in Millersville, PA in two successive years, then X1 + X2 is the snowfall of two years. The random variable X1 + X2 has mean µ = 2(14.6) = 29.2 inches and variance σ² = (3.2)² + (3.2)².

P(X1 + X2 > 30) = P(Z > (30 − 29.2)/√((3.2)² + (3.2)²))
                = 1 − P(Z ≤ 0.176777)
                = 1 − φ(0.176777)
                = 0.429842
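The arithmetic can be reproduced with SciPy's normal distribution (assuming SciPy is available; sf is the survival function 1 − φ):

    # Two-year snowfall: mean 2 * 14.6, variance 3.2^2 + 3.2^2.
    from scipy.stats import norm

    mu = 2 * 14.6
    sigma = (3.2**2 + 3.2**2) ** 0.5
    print(norm.sf((30 - mu) / sigma))   # 0.429842...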

Lognormal Random Variables

Definition
A random variable X is a lognormal random variable with parameters µ and σ if ln X is a normally distributed random variable with mean µ and variance σ².

Remarks:
- The parameter µ is sometimes called the drift.
- The parameter σ is sometimes called the volatility.

Lognormal PDF (1 of 2)

Suppose X is lognormal; then Y = ln X is normal and

P(X < x) = P(Y < ln x) = ∫_{−∞}^{ln x} (1/(σ√(2π))) e^{−(t−µ)²/(2σ²)} dt.

If we let u = e^t and du = e^t dt, then

P(Y < ln x) = ∫_0^x (1/(σ√(2π))) (1/u) e^{−(ln u−µ)²/(2σ²)} du.

Lognormal PDF (2 of 2)

f(x) = (1/(σ√(2π) x)) e^{−(ln x−µ)²/(2σ²)}

[Figure: the graph of the lognormal PDF y = f(x), which is supported on x > 0 and skewed to the right.]

Mean and Variance of a Lognormal RV

Lemma
If X is a lognormal random variable with parameters µ and σ then

E[X] = e^{µ+σ²/2},

Var(X) = e^{2µ+σ²} (e^{σ²} − 1).
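The lemma agrees with SciPy's lognormal distribution, which uses the parametrization s = σ and scale = e^µ; a sketch assuming SciPy is available (µ = 0.1 and σ = 0.4 are arbitrary illustrative values):

    # Compare SciPy's lognormal moments with the formulas in the lemma.
    import numpy as np
    from scipy.stats import lognorm

    mu, sigma = 0.1, 0.4
    X = lognorm(s=sigma, scale=np.exp(mu))
    print(X.mean(), np.exp(mu + sigma**2 / 2))                        # equal
    print(X.var(), np.exp(2 * mu + sigma**2) * (np.exp(sigma**2) - 1))  # equal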

Derivation of Mean of a Lognormal RV

E[X] = ∫_0^∞ x (1/(xσ√(2π))) e^{−(ln x−µ)²/(2σ²)} dx
     = ∫_{−∞}^{∞} (1/(σ√(2π))) e^t e^{−(t−µ)²/(2σ²)} dt
     = e^{µ+σ²/2} ∫_{−∞}^{∞} (1/(σ√(2π))) e^{−(t−(µ+σ²))²/(2σ²)} dt
     = e^{µ+σ²/2}

Derivation of Variance of a Lognormal RV

Var(X) = E[X²] − (E[X])²
       = ∫_0^∞ x² (1/(xσ√(2π))) e^{−(ln x−µ)²/(2σ²)} dx − (e^{µ+σ²/2})²
       = ∫_{−∞}^{∞} (1/(σ√(2π))) e^{2t} e^{−(t−µ)²/(2σ²)} dt − e^{2µ+σ²}
       = e^{2(µ+σ²)} ∫_{−∞}^{∞} (1/(σ√(2π))) e^{−(t−(µ+2σ²))²/(2σ²)} dt − e^{2µ+σ²}
       = e^{2µ+σ²} (e^{σ²} − 1)

Lognormal RVs and Security Prices

Observation: Let S(0) denote the price of a security at some starting time arbitrarily chosen to be t = 0. For n ≥ 1, let S(n) denote the price of the security on day n. The random variable X (n) = S(n)/S(n − 1) for n ≥ 1 is lognormally distributed, i.e., ln X (n) = ln S(n) − ln S(n − 1) is normally distributed.

Closing Prices of Sony (SNE) Stock

Closing prices of Sony Corporation stock:

[Figure: time series of daily closing prices, roughly $40–$55, from October through the following July.]

Lognormal Behavior of Sony (SNE) Stock

Lognormal behavior of closing prices:

[Figure: histogram of the daily log-returns ln S(n) − ln S(n − 1), ranging roughly from −0.04 to 0.06.]

The fitted parameters are µ = 0.000416686 and σ = 0.0160606.

Example (1 of 2)

Example
What is the probability that the closing price of Sony Corporation stock will be higher today than yesterday?

P(S(n)/S(n − 1) > 1) = P(ln(S(n)/S(n − 1)) > ln 1)   (the ratio is lognormal; its logarithm is normal)
                     = P(X > 0)
                     = P(Z > (0 − 0.000416686)/0.0160606)
                     = 1 − P(Z ≤ −0.0259446)
                     = 1 − φ(−0.0259446)
                     = 0.510349

Example (2 of 2)

Example
What is the probability that tomorrow's closing price will be higher than yesterday's closing price?

P(S(n + 1)/S(n − 1) > 1) = P((S(n + 1)/S(n)) · (S(n)/S(n − 1)) > 1)
                         = P(ln(S(n + 1)/S(n)) + ln(S(n)/S(n − 1)) > 0)
                         = P(X1 + X2 > 0)
                         = P(Z > (0 − 2(0.000416686))/√(0.0160606² + 0.0160606²))
                         = 1 − P(Z ≤ −0.0366912)
                         = 1 − φ(−0.0366912)
                         = 0.514634

where X1 and X2 are the two independent daily log-returns.
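Both probabilities follow from the fitted µ and σ above; with SciPy (assuming it is available) they can be reproduced in a few lines:

    # One-day and two-day probabilities for the fitted daily log-returns.
    from scipy.stats import norm

    mu, sigma = 0.000416686, 0.0160606
    print(norm.sf((0 - mu) / sigma))                      # 0.510349...
    print(norm.sf((0 - 2 * mu) / (2 * sigma**2) ** 0.5))  # 0.514634...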

Properties of Expected Value and Variance

If an item is worth K but can only be sold for X, a rational investor would sell only if X ≥ K. The payoff of the sale can be expressed as

(X − K)+ = X − K  if X ≥ K,
           0      if X < K.

Payoff When X is Normal

Corollary
If X is a normal random variable with mean µ and variance σ² and K is a constant, then

E[(X − K)+] = (σ/√(2π)) e^{−(µ−K)²/(2σ²)} + (µ − K) φ((µ − K)/σ),

Var((X − K)+) = ((µ − K)² + σ²) φ((µ − K)/σ) + ((µ − K)σ/√(2π)) e^{−(µ−K)²/(2σ²)}
                − ((σ/√(2π)) e^{−(µ−K)²/(2σ²)} + (µ − K) φ((µ − K)/σ))².
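A Monte Carlo experiment is a reasonable way to sanity-check the mean formula; a sketch assuming NumPy and SciPy are available (the values µ = 1, σ = 2, K = 0.5 are arbitrary):

    # Compare the simulated mean payoff E[(X - K)^+] with the corollary.
    import numpy as np
    from scipy.stats import norm

    mu, sigma, K = 1.0, 2.0, 0.5
    rng = np.random.default_rng(1)
    payoff = np.maximum(rng.normal(mu, sigma, 1_000_000) - K, 0)

    formula = (sigma / np.sqrt(2 * np.pi)) * np.exp(-(mu - K)**2 / (2 * sigma**2)) \
              + (mu - K) * norm.cdf((mu - K) / sigma)
    print(payoff.mean(), formula)   # should agree to a few decimal places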

Payoff When X is Lognormal

Corollary
If X is a lognormally distributed random variable with parameters µ and σ and K > 0 is a constant, then

E[(X − K)+] = e^{µ+σ²/2} φ((µ − ln K)/σ + σ) − K φ((µ − ln K)/σ),

Var((X − K)+) = e^{2(µ+σ²)} φ(w + 2σ) + K² φ(w) − 2K e^{µ+σ²/2} φ(w + σ)
                − (e^{µ+σ²/2} φ(w + σ) − K φ(w))²,

where w = (µ − ln K)/σ.
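The lognormal mean-payoff formula can be checked the same way; again a sketch with arbitrary illustrative parameters (µ = 0, σ = 0.3, K = 1):

    # Compare the simulated mean payoff for lognormal X with the corollary.
    import numpy as np
    from scipy.stats import norm

    mu, sigma, K = 0.0, 0.3, 1.0
    rng = np.random.default_rng(2)
    payoff = np.maximum(np.exp(rng.normal(mu, sigma, 1_000_000)) - K, 0)

    w = (mu - np.log(K)) / sigma
    formula = np.exp(mu + sigma**2 / 2) * norm.cdf(w + sigma) - K * norm.cdf(w)
    print(payoff.mean(), formula)   # should agree to a few decimal places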
