UCLA STAT 110A: Applied Probability & Statistics for Engineers

Chapter 6: Point Estimation

Instructor: Ivo Dinov, Asst. Prof. in Statistics and Neurology
Teaching Assistant: Neda Farzinnia, UCLA Statistics

University of California, Los Angeles, Spring 2004
http://www.stat.ucla.edu/~dinov/

6.1 General Concepts of Point Estimation

Point Estimator

A point estimate of a parameter θ is a single number that can be regarded as a sensible value for θ. A point estimate is obtained by selecting a suitable statistic and computing its value from the given sample data; the statistic itself is called the point estimator of θ.
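Not in the original slides: a minimal Python sketch with made-up numbers, showing how a point estimate is produced by computing a statistic from data.

```python
import numpy as np

# Hypothetical sample of n = 8 measurements (illustrative values only).
x = np.array([24.1, 25.6, 23.9, 26.2, 25.0, 24.7, 25.3, 24.4])

# Select a suitable statistic (here the sample mean) and compute its
# value from the sample; the resulting single number is the point
# estimate of mu.
mu_hat = x.mean()
print(f"point estimate of mu: {mu_hat:.3f}")
```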


Unbiased Estimator

A point estimator θ̂ is said to be an unbiased estimator of θ if E(θ̂) = θ for every possible value of θ. If θ̂ is not unbiased, the difference E(θ̂) − θ is called the bias of θ̂.


[Figure: The pdf's of a biased estimator θ̂₁ and an unbiased estimator θ̂₂ for a parameter θ; the distance from θ to the center of the pdf of θ̂₁ is the bias of θ̂₁.]


Unbiased Estimator

When X is a binomial rv with parameters n and p, the sample proportion p̂ = X/n is an unbiased estimator of p.
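A quick simulation sketch (not from the slides; n = 50, p = 0.3, and the replication count are assumed values) illustrating that p̂ = X/n averages out to p:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, reps = 50, 0.3, 100_000          # assumed values for illustration

# Draw many binomial X's and form the sample proportion p_hat = X / n.
p_hat = rng.binomial(n, p, size=reps) / n

# Averaging p_hat over many replications approximates E(p_hat), which
# should be close to the true p, since p_hat is unbiased.
print(f"average p_hat: {p_hat.mean():.4f}  (true p = {p})")
```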

Principle of Unbiased Estimation

When choosing among several different estimators of θ, select one that is unbiased.

Unbiased Estimator

Let X₁, X₂, …, Xₙ be a random sample from a distribution with mean µ and variance σ². Then the estimator

$$\hat{\sigma}^2 = S^2 = \frac{\sum_{i=1}^{n}\left(X_i - \bar{X}\right)^2}{n-1}$$

is an unbiased estimator of σ².
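The effect of the n − 1 divisor can be checked by simulation; a minimal sketch (the normal model, sample size, and variance are assumptions chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma2, reps = 10, 4.0, 100_000     # assumed sample size and variance

# Repeatedly draw samples from N(0, sigma^2) and compute the variance
# estimate with both divisors.
x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
s2_unbiased = x.var(axis=1, ddof=1)    # divisor n - 1
s2_biased = x.var(axis=1, ddof=0)      # divisor n

# E(S^2) matches sigma^2 only with the n - 1 divisor.
print(f"average S^2 with n-1: {s2_unbiased.mean():.3f}  (target {sigma2})")
print(f"average S^2 with n  : {s2_biased.mean():.3f}  (biased low)")
```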

Unbiased Estimator

If X₁, X₂, …, Xₙ is a random sample from a distribution with mean µ, then X̄ is an unbiased estimator of µ. If in addition the distribution is continuous and symmetric, then the sample median X̃ and any trimmed mean are also unbiased estimators of µ.
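A simulation sketch of this claim (the symmetric N(5, 1) model, sample size, and 10% trimming are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
mu, n, reps = 5.0, 20, 50_000          # assumed symmetric N(mu, 1) samples

x = rng.normal(mu, 1.0, size=(reps, n))
xbar = x.mean(axis=1)                  # sample mean
xmed = np.median(x, axis=1)            # sample median
xs = np.sort(x, axis=1)
xtr = xs[:, 2:-2].mean(axis=1)         # 10% trimmed mean: drop 2 per end

# For a symmetric distribution all three averages should sit near mu.
for name, est in [("mean", xbar), ("median", xmed), ("trimmed", xtr)]:
    print(f"{name:8s} average estimate: {est.mean():.4f}")
```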

Principle of Minimum Variance Unbiased Estimation

Among all estimators of θ that are unbiased, choose the one that has minimum variance. The resulting θ̂ is called the minimum variance unbiased estimator (MVUE) of θ.


[Figure: Graphs of the pdf's of two different unbiased estimators θ̂₁ and θ̂₂ of θ.]

MVUE for a Normal Distribution

Let X₁, X₂, …, Xₙ be a random sample from a normal distribution with parameters µ and σ. Then the estimator µ̂ = X̄ is the MVUE for µ.

[Figure: A biased estimator θ̂₁ whose pdf is tightly concentrated near θ can be preferable to the MVUE θ̂₂ with a much more spread-out pdf.]
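The MVUE fact for normal data can be seen numerically; a minimal sketch (standard normal data, assumed n and replication count) comparing the spread of X̄ and the sample median, both unbiased for µ:

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 25, 100_000                  # assumed settings

x = rng.normal(0.0, 1.0, size=(reps, n))
var_mean = x.mean(axis=1).var()
var_median = np.median(x, axis=1).var()

# Both estimators are unbiased for mu, but X-bar has the smaller
# variance (it is the MVUE); for normal data and large n the ratio
# Var(median) / Var(mean) approaches pi/2.
print(f"Var(X-bar):  {var_mean:.5f}")
print(f"Var(median): {var_median:.5f}")
```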

The Estimator for µ (X̄, X̃, X̄_e, X̄_tr(10))

1. If the random sample comes from a normal distribution, then X̄ is the best estimator, since it has minimum variance among all unbiased estimators.

2. If the random sample comes from a Cauchy distribution, then X̃ is good (the MVUE is not known); X̄ and X̄_e are quite bad.

The Estimator for µ (X̄, X̃, X̄_e, X̄_tr(10))

3. If the underlying distribution is uniform, the best estimator is X̄_e (the midrange); this estimator is greatly influenced by outlying observations, but the lack of tails makes such observations impossible.


The Estimator for µ (X̄, X̃, X̄_e, X̄_tr(10))

4. The trimmed mean X̄_tr(10) works reasonably well in all three situations but is not the best for any of them.
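The four candidates can be compared across the three sampling situations by simulation; a sketch (all parameters assumed, and spread measured by the median absolute deviation from the true center, since the Cauchy mean has no finite variance):

```python
import numpy as np

rng = np.random.default_rng(4)
n, reps = 20, 20_000                   # assumed settings

def estimates(x):
    """Compute the four candidate estimates of the center, row-wise."""
    xs = np.sort(x, axis=1)
    return {
        "mean":     x.mean(axis=1),
        "median":   np.median(x, axis=1),
        "midrange": (xs[:, 0] + xs[:, -1]) / 2,
        "trimmed":  xs[:, 2:-2].mean(axis=1),   # 10% trim: drop 2 per end
    }

samples = {
    "normal":  rng.normal(0.0, 1.0, size=(reps, n)),
    "cauchy":  rng.standard_cauchy(size=(reps, n)),
    "uniform": rng.uniform(-1.0, 1.0, size=(reps, n)),
}

# The true center is 0 in every case; smaller spread = better estimator.
for dist, x in samples.items():
    spread = {k: np.median(np.abs(v)) for k, v in estimates(x).items()}
    best = min(spread, key=spread.get)
    print(dist, {k: round(s, 4) for k, s in spread.items()}, "-> best:", best)
```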

Standard Error

The standard error of an estimator θ̂ is its standard deviation σ_θ̂ = √V(θ̂). If the standard error itself involves unknown parameters whose values can be estimated, substituting these estimates into σ_θ̂ yields the estimated standard error of the estimator, denoted σ̂_θ̂ or s_θ̂.
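For example, the standard error of X̄ is σ/√n; since σ is unknown, substituting the sample standard deviation s gives the estimated standard error s/√n. A sketch with made-up data:

```python
import numpy as np

# Hypothetical observations (illustrative values only).
x = np.array([8.1, 7.4, 9.0, 8.6, 7.9, 8.3, 8.8, 7.6])
n = x.size

# sigma is unknown, so substitute the sample standard deviation s into
# sigma / sqrt(n), yielding the estimated standard error of X-bar.
s = x.std(ddof=1)
se_hat = s / np.sqrt(n)
print(f"x-bar = {x.mean():.3f}, estimated standard error = {se_hat:.3f}")
```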

6.2 Methods of Point Estimation

Moments

Let X₁, X₂, …, Xₙ be a random sample from a pmf or pdf f(x). For k = 1, 2, …, the kth population moment, or kth moment of the distribution f(x), is E(Xᵏ). The kth sample moment is

$$\frac{1}{n}\sum_{i=1}^{n} X_i^k.$$

Moment Estimators

Let X₁, X₂, …, Xₙ be a random sample from a distribution with pmf or pdf f(x; θ₁, …, θ_m), where θ₁, …, θ_m are parameters whose values are unknown. Then the moment estimators θ̂₁, …, θ̂_m are obtained by equating the first m sample moments to the corresponding first m population moments and solving for θ₁, …, θ_m.
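A worked sketch (the exponential model is an assumed example, not from the slides): if X ~ Exponential(λ), the first population moment is E(X) = 1/λ, so setting 1/λ = X̄ gives the moment estimator λ̂ = 1/X̄.

```python
import numpy as np

rng = np.random.default_rng(5)
lam_true, n = 2.0, 1_000               # assumed true rate and sample size

x = rng.exponential(scale=1.0 / lam_true, size=n)

# First population moment: E(X) = 1 / lambda.
# First sample moment:     (1/n) * sum(x_i) = x-bar.
# Equate and solve:        lambda-hat = 1 / x-bar.
lam_mom = 1.0 / x.mean()
print(f"moment estimate of lambda: {lam_mom:.3f}  (true {lam_true})")
```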

Likelihood Function

Let X₁, X₂, …, Xₙ have joint pmf or pdf f(x₁, …, xₙ; θ₁, …, θ_m), where the parameters θ₁, …, θ_m have unknown values. When x₁, …, xₙ are the observed sample values and f is regarded as a function of θ₁, …, θ_m, it is called the likelihood function.


Maximum Likelihood Estimators

The maximum likelihood estimates (mle's) θ̂₁, …, θ̂_m are those values of the θᵢ's that maximize the likelihood function, so that

f(x₁, …, xₙ; θ̂₁, …, θ̂_m) ≥ f(x₁, …, xₙ; θ₁, …, θ_m)  for all θ₁, …, θ_m.

When the Xᵢ's are substituted in place of the xᵢ's, the maximum likelihood estimators result.

The Invariance Principle

Let θ̂₁, …, θ̂_m be the mle's of the parameters θ₁, …, θ_m. Then the mle of any function h(θ₁, …, θ_m) of these parameters is the function h(θ̂₁, …, θ̂_m) of the mle's.
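A numeric sketch of both definitions (the exponential model and all settings are assumptions for illustration): maximize the log-likelihood over a grid of λ values and compare with the closed-form mle λ̂ = 1/x̄; by the invariance principle the mle of the mean h(λ) = 1/λ is then h(λ̂) = x̄.

```python
import numpy as np

rng = np.random.default_rng(6)
lam_true, n = 1.5, 500                 # assumed settings

x = rng.exponential(scale=1.0 / lam_true, size=n)

# Exponential log-likelihood: L(lambda) = n*log(lambda) - lambda*sum(x).
lams = np.linspace(0.01, 5.0, 100_000)
loglik = n * np.log(lams) - lams * x.sum()
lam_mle = lams[np.argmax(loglik)]      # grid maximizer of the likelihood

print(f"grid mle:        {lam_mle:.4f}")
print(f"closed-form mle: {1.0 / x.mean():.4f}")  # lambda-hat = 1 / x-bar
# Invariance: the mle of h(lambda) = 1/lambda (the mean) is 1/lambda-hat.
print(f"mle of the mean: {1.0 / lam_mle:.4f}  vs x-bar = {x.mean():.4f}")
```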

Desirable Property of the Maximum Likelihood Estimate

Under very general conditions on the joint distribution of the sample, when the sample size n is large, the maximum likelihood estimator of any parameter θ is approximately unbiased [E(θ̂) ≈ θ] and has variance that is nearly as small as can be achieved by any estimator. That is, the mle θ̂ is approximately the MVUE of θ.

(Log)Likelihood Function

- Suppose we have a sample {X₁, …, Xₙ} IID from a distribution D(θ) with probability density function p = p(x | θ). Then the joint density p({X₁, …, Xₙ} | θ) is a function of the (unknown) parameter θ.
- Likelihood function: l(θ | {X₁, …, Xₙ}) = p({X₁, …, Xₙ} | θ).
- Log-likelihood: L(θ | {X₁, …, Xₙ}) = log_e l(θ | {X₁, …, Xₙ}).
- Maximum-likelihood estimation (MLE): suppose {X₁, …, Xₙ} IID N(µ, σ²) with µ unknown. We estimate it by MLE(µ) = µ̂ = argmax_µ L(µ | {X₁, …, Xₙ}).

(Log)Likelihood Function

Suppose {X₁, …, Xₙ} IID N(µ, σ²), with µ unknown. We estimate it by MLE(µ) = µ̂ = argmax_µ L(µ | {X₁, …, Xₙ}):

$$L(\mu) = \log \prod_{i=1}^{n} \frac{e^{-(x_i-\mu)^2/(2\sigma^2)}}{\sqrt{2\pi\sigma^2}} = -\frac{n}{2}\log\left(2\pi\sigma^2\right) - \frac{\sum_{i=1}^{n}(x_i-\mu)^2}{2\sigma^2}$$

$$0 = L'(\hat{\mu}) = \frac{\sum_{i=1}^{n} 2(x_i-\hat{\mu})}{2\sigma^2} \;\Longleftrightarrow\; 0 = \sum_{i=1}^{n}(x_i-\hat{\mu}) \;\Longleftrightarrow\; \hat{\mu} = \frac{\sum_{i=1}^{n} x_i}{n}.$$

Similarly one can show that

$$\mathrm{MLE}(\sigma) = \hat{\sigma} = \sqrt{\frac{\sum_{i=1}^{n}\left(x_i-\hat{\mu}\right)^2}{n}}.$$
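A sketch checking the derivation numerically (simulated data with assumed µ, σ, and n):

```python
import numpy as np

rng = np.random.default_rng(7)
mu_true, sigma_true, n = 3.0, 2.0, 1_000   # assumed settings

x = rng.normal(mu_true, sigma_true, size=n)

# Closed-form mle's from the derivation above.
mu_hat = x.mean()                      # mu-hat = sum(x_i) / n
sig2_hat = ((x - mu_hat) ** 2).mean()  # sigma^2 mle uses divisor n

# Grid check: the log-likelihood in mu (sigma held fixed) peaks at x-bar.
mus = np.linspace(x.min(), x.max(), 2_001)
loglik = -((x[None, :] - mus[:, None]) ** 2).sum(axis=1) / (2 * sigma_true**2)
print(f"grid argmax of L(mu): {mus[np.argmax(loglik)]:.3f}  (x-bar = {mu_hat:.3f})")
print(f"sigma^2 mle: {sig2_hat:.3f}  (true sigma^2 = {sigma_true**2})")
```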

(Log)Likelihood Function

Suppose {X₁, …, Xₙ} IID Poisson(λ), with λ unknown. Estimate λ by MLE(λ) = λ̂ = argmax_λ L(λ | {X₁, …, Xₙ}):

$$L(\lambda) = \log \prod_{i=1}^{n} \frac{e^{-\lambda}\lambda^{x_i}}{x_i!} = \log\!\left(\frac{e^{-n\lambda}\,\lambda^{\sum_{i=1}^{n} x_i}}{\prod_{i=1}^{n} x_i!}\right)$$

$$0 = L'(\hat{\lambda}) = \frac{\partial}{\partial\lambda}\left(-n\lambda + \log(\lambda)\sum_{i=1}^{n} x_i\right) = -n + \frac{1}{\hat{\lambda}}\sum_{i=1}^{n} x_i \;\Longleftrightarrow\; \hat{\lambda} = \frac{\sum_{i=1}^{n} x_i}{n}.$$
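The Poisson result can be checked the same way (assumed λ and n):

```python
import numpy as np

rng = np.random.default_rng(8)
lam_true, n = 4.0, 1_000               # assumed settings

x = rng.poisson(lam_true, size=n)

# Log-likelihood up to a constant: -n*lambda + log(lambda) * sum(x_i).
lams = np.linspace(0.1, 10.0, 100_000)
loglik = -n * lams + np.log(lams) * x.sum()
lam_mle = lams[np.argmax(loglik)]      # grid maximizer

print(f"grid mle: {lam_mle:.4f}  (x-bar = {x.mean():.4f})")  # lambda-hat = x-bar
```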
