
Question:

Consider the linear model with n observations, y ~ N(Xβ, σ²Iₙ); since we have many predictors, β is a vector.

(a) Write down the log-likelihood ℓ(β; y) of the model and the score U(β) = ∂ℓ/∂β.
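
For reference, a minimal sketch of where part (a) leads, assuming σ² is known and β is the only parameter of interest:

ℓ(β; y) = −(n/2) log(2πσ²) − (1/(2σ²)) (y − Xβ)ᵀ(y − Xβ)

U(β) = ∂ℓ/∂β = (1/σ²) Xᵀ(y − Xβ)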

(b) Under certain regularity conditions it can be shown that the expected score is zero, and thus that the Fisher information I(β), the variance of the score, is


I(β) = Var[U(β)] = E[(∂ℓ/∂β)(∂ℓ/∂β)ᵀ].


Show that I(β)⁻¹ is the same as the variance of the MLE β̂ of β, and thus that the information does not depend on β.
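
A sketch of the computation behind part (b), assuming X has full column rank so that XᵀX is invertible: since E[y] = Xβ,

E[U(β)] = (1/σ²) Xᵀ(E[y] − Xβ) = 0

I(β) = Var[U(β)] = (1/σ⁴) Xᵀ Var(y) X = XᵀX/σ²

and the MLE β̂ = (XᵀX)⁻¹Xᵀy has

Var(β̂) = (XᵀX)⁻¹ Xᵀ (σ²Iₙ) X (XᵀX)⁻¹ = σ²(XᵀX)⁻¹ = I(β)⁻¹

which depends on X and σ² but not on β.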

(c) The same regularity conditions imply that

I(β) = −E[∂²ℓ/(∂β ∂βᵀ)].

That is, the information is also the negative expected Hessian of the log-likelihood. Verify that this is true by computing the negative Hessian and showing that it equals the inverse of the variance of β̂.
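
A sketch of part (c), under the same full-rank assumption: differentiating the score once more gives

∂²ℓ/(∂β ∂βᵀ) = −XᵀX/σ²

which is non-random in this model, so the negative Hessian is XᵀX/σ² = I(β) = [Var(β̂)]⁻¹, matching part (b).

The short numerical check below is not part of the original question; it simulates an arbitrary design matrix (the sizes n = 50, p = 3 and σ = 2 are illustrative choices, not taken from the problem) and confirms that the negative Hessian and the variance of β̂ are inverses of each other.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma = 50, 3, 2.0          # illustrative sizes, not from the question
X = rng.normal(size=(n, p))       # arbitrary full-rank design matrix

# Negative Hessian of the log-likelihood: -d²ℓ/(dβ dβᵀ) = XᵀX / σ²
neg_hessian = X.T @ X / sigma**2

# Sampling variance of the MLE β̂ = (XᵀX)⁻¹Xᵀy:  σ²(XᵀX)⁻¹
var_beta_hat = sigma**2 * np.linalg.inv(X.T @ X)

# The two matrices should be inverses of each other (up to rounding error).
print(np.allclose(neg_hessian @ var_beta_hat, np.eye(p)))  # expected: True
```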

