Ch. 11 Logistic Regression. The Model. Interpretation of the Parameters. Parameter Estimation. Inference. Model Checking

Author: Geoffrey Hunter
Ch. 11 Logistic Regression

• The Model

• Interpretation of the Parameters

• Parameter Estimation

• Inference

• Model Checking


Generalized Linear Models

• Many data sets do not satisfy the normality and constant variance assumptions. In the past, data analysts treated this problem with clever transformations of the response variable, chosen to make it behave more like a normal random variable with constant variance.

• However, a more direct approach is available if a reasonable model can be constructed. The parameters of the model can then be estimated by maximum likelihood.


11.1 The Logistic Regression Model

• Data: Responses (yi's) are categorical variables with 2 levels (coded as 1 and 0). Predictor values (xi's) can be categorical and/or numeric.

• For i = 1, 2, . . . , n,
P(yi = 1) = πi
P(yi = 0) = 1 − πi
where the πi are probabilities (values between 0 and 1).

• Such data may come in the form of a two-way contingency table (see Section 11.2.1).

• We are interested in modelling πi in terms of xi: πi = π(xi).

Logistic Regression

Another possibility: yi is a Binomial random variable, whose proportion parameter p depends on the predictor variable xi:

yi ∼ Bin(ni, p(xi))
E[yi | xi] = ni p(xi)
V(yi | xi) = ni p(xi)(1 − p(xi))

If p(x) is not constant, then the variance cannot be constant.
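This can be checked numerically. A small sketch in Python (the slides use R), with hypothetical group sizes and success probabilities:

```python
# Binomial mean and variance for two hypothetical (n, p) pairs,
# illustrating that a non-constant p forces a non-constant variance.
def binom_mean_var(n, p):
    return n * p, n * p * (1 - p)

m0, v0 = binom_mean_var(10, 0.2)   # group with p(x) = 0.2
m1, v1 = binom_mean_var(10, 0.7)   # group with p(x) = 0.7
# v0 = 1.6 and v1 = 2.1: different p, different variance
```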


The Link Function

There is a function g(z), called the link function, which satisfies
g(π(x)) = β0 + β1x.
We will consider 2 possible link functions:

identity: g(p) = p
logit: g(p) = log(p/(1 − p))

Others exist. The logit is preferred.
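The two links above can be sketched in Python (the slides use R); `inv_logit` is the inverse of the logit, included here for later use:

```python
import math

# The two link functions considered on the slides.
def identity_link(p):
    return p

def logit(p):
    # log odds: maps a probability in (0, 1) onto the whole real line
    return math.log(p / (1 - p))

def inv_logit(eta):
    # inverse of the logit: maps any real number back to a probability
    return 1 / (1 + math.exp(-eta))
```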


11.2 Interpretation of the Logit Link

• Odds:
π(x)/(1 − π(x))
This is the ratio of the probability of occurrence of an event to the probability of it not occurring. Odds can take on any value between 0 and infinity.

e.g. Suppose π(x) = .75. Then the odds of the event is .75/.25 = 3.

e.g. If the odds of an event is 2, then the probability of the event is 2/(2 + 1) = 2/3. i.e.
π(x) = odds(x)/(odds(x) + 1).
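The two worked examples above can be verified directly; a short Python sketch (function names are ours, chosen for illustration):

```python
# Converting between probabilities and odds, as in the slide's examples.
def odds(p):
    return p / (1 - p)

def prob_from_odds(o):
    return o / (o + 1)

# pi(x) = .75 gives odds 3; odds 2 gives probability 2/3
```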

Interpretation of the Logit Link

• Log Odds:
log(π(x)/(1 − π(x)))

This is the logit link function. Its main advantage is that it can take values on the entire real line.

• Thus, we can equate it to β0 + β1x for any given β0 and β1, and x does not have to be restricted in any way. We will always obtain valid log odds.

Using the identity link, by contrast, requires restricting x so that valid probabilities are obtained.
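The restriction can be seen with a quick sketch in Python, using hypothetical identity-link coefficients:

```python
# Under the identity link, p(x) = b0 + b1*x; for some x the result is
# not a valid probability. Coefficients here are hypothetical.
b0, b1 = 0.2, 0.3
p = [b0 + b1 * x for x in (0, 1, 2, 3)]
# p is roughly [0.2, 0.5, 0.8, 1.1] -- the last value exceeds 1
```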


Interpretation of the Logit Link

• Consider
log(π(x)/(1 − π(x))) = β0 + β1x.

• Then
β1 = log(π(x + 1)/(1 − π(x + 1))) − log(π(x)/(1 − π(x))).

• Thus, β1 is the log of the odds ratio for the occurrence of the event at x + 1 to the occurrence of the event at x.
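Equivalently, a unit increase in x multiplies the odds by e^β1. A numerical check in Python, with hypothetical coefficients:

```python
import math

# For logit(pi(x)) = b0 + b1*x, the odds ratio between x + 1 and x
# equals exp(b1). Coefficients below are hypothetical.
b0, b1 = -0.5, 0.8

def pi(x):
    return 1 / (1 + math.exp(-(b0 + b1 * x)))

def odds(p):
    return p / (1 - p)

ratio = odds(pi(1.0)) / odds(pi(0.0))   # equals exp(b1)
```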


11.3 Parameter Estimation

• Data:
x1 x2 · · · xm
n1 n2 · · · nm
y1 y2 · · · ym

• Likelihood:
∏_{i=1}^{m} P(Yi = yi) = ∏_{i=1}^{m} C(ni, yi) p(xi)^{yi} (1 − p(xi))^{ni − yi}
where C(ni, yi) denotes the binomial coefficient.


Example

◦ xi = 0 for i = 1, 2, 3, 4, 5; xi = 1 for i = 6, 7, 8, 9, 10.

◦ Identity link function: p(xi) = β0 + β1xi.

◦ Log Likelihood:

ℓ(β0, β1) = Σ_{i=1}^{10} log C(ni, yi)
    + Σ_{i=1}^{5} yi log(β0) + Σ_{i=1}^{5} (ni − yi) log(1 − β0)
    + Σ_{i=6}^{10} yi log(β0 + β1) + Σ_{i=6}^{10} (ni − yi) log(1 − β0 − β1)
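This log likelihood can be coded directly. A minimal sketch in Python (the slides use R), with hypothetical counts n and successes y:

```python
import math

# Log likelihood of the example under the identity link: the first five
# groups have x = 0 (p = b0), the last five have x = 1 (p = b0 + b1).
def loglik(b0, b1, n, y):
    ll = 0.0
    for i in range(10):
        p = b0 if i < 5 else b0 + b1            # identity link
        ll += math.log(math.comb(n[i], y[i]))   # binomial coefficient term
        ll += y[i] * math.log(p) + (n[i] - y[i]) * math.log(1 - p)
    return ll

n = [6] * 10                          # hypothetical group sizes
y = [2, 2, 2, 2, 2, 4, 4, 4, 4, 4]    # hypothetical success counts
# the pooled proportions (1/3 and 2/3 here) maximize this function
```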

Example (Cont’d)

◦ Differentiating the log likelihood with respect to β0 and β1 and setting the derivatives to 0 gives the estimators:

β̂0 = Σ_{i=1}^{5} yi / Σ_{i=1}^{5} ni

and

β̂1 = Σ_{i=6}^{10} yi / Σ_{i=6}^{10} ni − β̂0
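A quick numerical check of these estimators in Python, using hypothetical counts (the example's actual data are not reproduced on the slides):

```python
# Closed-form identity-link estimators: pooled proportion at x = 0,
# and the difference in pooled proportions. Counts are hypothetical.
n = [6, 7, 5, 8, 6, 9, 6, 7, 8, 5]   # group sizes, x = 0 then x = 1
y = [2, 3, 1, 3, 2, 7, 4, 5, 6, 4]   # successes in each group

b0_hat = sum(y[:5]) / sum(n[:5])              # pooled proportion at x = 0
b1_hat = sum(y[5:]) / sum(n[5:]) - b0_hat     # difference in proportions
```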


Example (Cont’d)

• Fitting the model with glm():

◦ Enter the data into a dataframe:
> x <- ...
> n <- ...
> y <- ...
> ...

◦ Computing the estimates directly:
> sum(y[6:10])/sum(n[6:10])
[1] 0.7096774
> sum(y[6:10])/sum(n[6:10]) - .3225806
[1] 0.3870968


Example (Cont’d)

◦ You can use the predict function to compute fitted values:
> predict(nxy.glm, newdata = data.frame(x = c(0, 1), n = c(10, 15)))
        1         2
0.3225806 0.7096774
. . . a bit boring in this case, since there are only 2 possibilities . . .


Example (Cont’d)

• Using the logit link:

◦ log(p/(1 − p)) = β0 + β1x
or
p = e^(β0 + β1x) / (1 + e^(β0 + β1x))

◦ The above expression can be plugged into the likelihood function.

◦ Again, the likelihood can be maximized with respect to β0 and β1 to get parameter estimates. Formulas can be obtained without difficulty in this case, and this is left as an exercise.
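One way to see the shape of the answer (a sketch, not the full exercise): for this two-group design the logit-link MLEs are the logits of the two pooled proportions. Checked in Python against the fitted proportions quoted on the slides:

```python
import math

# Two-group logit-link MLEs: b0_hat is the logit of the fitted
# proportion at x = 0; b1_hat is the difference in logits.
def logit(p):
    return math.log(p / (1 - p))

p0, p1 = 0.3225806, 0.7096774     # fitted proportions from the slides
b0_hat = logit(p0)                # about -0.742, the glm() intercept
b1_hat = logit(p1) - logit(p0)    # about 1.636, the glm() slope
```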

Example (Cont’d)

◦ Fitting the model with glm():
> nxy.glm <- glm(...)
> summary(nxy.glm)
Coefficients:
            Estimate Std. Error z value Pr(>|z|)
(Intercept)  -0.7419     0.3842  -1.931  0.05348 .
x             1.6358     0.5515   2.966  0.00302 **

◦ The fitted model (estimated log odds):

log(p̂/(1 − p̂)) = −.742 + 1.64x

Note you have to apply the inverse of the logit transformation to get back probabilities.
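For example, inverting the logit at x = 0 and x = 1 recovers the two fitted probabilities; a Python check using the coefficients from the summary:

```python
import math

# Inverse logit applied to the fitted line -0.7419 + 1.6358*x
# recovers the fitted probabilities for the two groups.
def inv_logit(eta):
    return 1 / (1 + math.exp(-eta))

p_x0 = inv_logit(-0.7419)             # group with x = 0, about 0.323
p_x1 = inv_logit(-0.7419 + 1.6358)    # group with x = 1, about 0.710
```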

Beetle Example

• In an experiment (Bliss, 1935), some beetles were exposed to gaseous carbon disulphide at various concentrations for 5 hours. The numbers killed at each dose were recorded.

◦ beetles.txt contains the following lines:

dose    n.insects  n.killed
1.6907  59          6
1.7242  60         13
1.7552  62         18
1.7842  56         28
1.8113  63         52
1.8369  59         53
1.8610  62         61
1.8839  60         60
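The pattern in the data is clear even before fitting: the observed kill proportions rise with dose. A Python transcription of the table (variable names ours):

```python
# Bliss (1935) beetle data as (dose, n.insects, n.killed) rows;
# the observed kill proportion increases with dose, which a
# logistic model can capture.
beetles = [
    (1.6907, 59, 6),  (1.7242, 60, 13), (1.7552, 62, 18), (1.7842, 56, 28),
    (1.8113, 63, 52), (1.8369, 59, 53), (1.8610, 62, 61), (1.8839, 60, 60),
]
props = [killed / n for _, n, killed in beetles]
```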

Example (Cont’d)

◦ Read in the data and fit the model:
> beetles <- read.table("beetles.txt", header = TRUE)
> beetles.glm <- glm(...)
> summary(beetles.glm)
Coefficients:
            Estimate Std. Error z value Pr(>|z|)
(Intercept)  -60.717      5.181  -11.72
