Question:

Suppose that the full model is \(y_{i}=\beta_{0}+\beta_{1} x_{i 1}+\beta_{2} x_{i 2}+\varepsilon_{i}, i=1,2, \ldots, n\), where \(x_{i 1}\) and \(x_{i 2}\) have been coded so that \(S_{11}=S_{22}=1\). We will also consider fitting a subset model, say \(y_{i}=\beta_{0}+\beta_{1} x_{i 1}+\varepsilon_{i}\).
a. Let \(\hat{\beta}_{1}^{*}\) be the least-squares estimate of \(\beta_{1}\) from the full model. Show that \(\operatorname{Var}\left(\hat{\beta}_{1}^{*}\right)=\sigma^{2} /\left(1-r_{12}^{2}\right)\), where \(r_{12}\) is the correlation between \(x_{1}\) and \(x_{2}\).
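
A sketch of the argument for part (a), using the centered-and-scaled form of the model so that \(X'X\) reduces to the correlation matrix (here \(S_{12}=r_{12}\) because \(S_{11}=S_{22}=1\)):

\[
X'X=\begin{pmatrix}1 & r_{12}\\ r_{12} & 1\end{pmatrix},
\qquad
(X'X)^{-1}=\frac{1}{1-r_{12}^{2}}\begin{pmatrix}1 & -r_{12}\\ -r_{12} & 1\end{pmatrix}.
\]

Since \(\operatorname{Var}(\hat{\boldsymbol{\beta}}^{*})=\sigma^{2}(X'X)^{-1}\), the \((1,1)\) element gives \(\operatorname{Var}\left(\hat{\beta}_{1}^{*}\right)=\sigma^{2}/\left(1-r_{12}^{2}\right)\).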
b. Let \(\hat{\beta}_{1}\) be the least-squares estimate of \(\beta_{1}\) from the subset model. Show that \(\operatorname{Var}\left(\hat{\beta}_{1}\right)=\sigma^{2}\). Is \(\beta_{1}\) estimated more precisely from the subset model or from the full model?
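
For part (b), a sketch along the same lines: in the subset model the single-regressor least-squares variance is

\[
\operatorname{Var}\left(\hat{\beta}_{1}\right)=\frac{\sigma^{2}}{S_{11}}=\sigma^{2},
\]

since \(S_{11}=1\). Because \(1/\left(1-r_{12}^{2}\right)\geq 1\), the subset estimator is never less precise, and is strictly more precise whenever \(r_{12}\neq 0\).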
c. Show that \(E\left(\hat{\beta}_{1}\right)=\beta_{1}+r_{12} \beta_{2}\). Under what circumstances is \(\hat{\beta}_{1}\) an unbiased estimator of \(\beta_{1}\) ?
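
For part (c), a sketch: in the subset model \(\hat{\beta}_{1}=S_{1y}/S_{11}=S_{1y}\), and taking expectations under the full model,

\[
E\left(\hat{\beta}_{1}\right)=\beta_{1}S_{11}+\beta_{2}S_{12}=\beta_{1}+r_{12}\beta_{2}.
\]

Hence \(\hat{\beta}_{1}\) is unbiased exactly when \(r_{12}=0\) (the regressors are orthogonal) or \(\beta_{2}=0\) (the omitted regressor is irrelevant).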
d. Find the mean square error for the subset estimator \(\hat{\beta}_{1}\). Compare \(\operatorname{MSE}\left(\hat{\beta}_{1}\right)\) with \(\operatorname{Var}\left(\hat{\beta}_{1}^{*}\right)\). Under what circumstances is \(\hat{\beta}_{1}\) a preferable estimator, with respect to MSE?
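
For part (d), a sketch building on parts (b) and (c):

\[
\operatorname{MSE}\left(\hat{\beta}_{1}\right)=\operatorname{Var}\left(\hat{\beta}_{1}\right)+\left[\operatorname{bias}\left(\hat{\beta}_{1}\right)\right]^{2}=\sigma^{2}+r_{12}^{2}\beta_{2}^{2}.
\]

The subset estimator is preferable when \(\operatorname{MSE}\left(\hat{\beta}_{1}\right)<\operatorname{Var}\left(\hat{\beta}_{1}^{*}\right)\):

\[
\sigma^{2}+r_{12}^{2}\beta_{2}^{2}<\frac{\sigma^{2}}{1-r_{12}^{2}}
\;\Longleftrightarrow\;
r_{12}^{2}\beta_{2}^{2}<\frac{\sigma^{2}r_{12}^{2}}{1-r_{12}^{2}}
\;\Longleftrightarrow\;
\beta_{2}^{2}<\frac{\sigma^{2}}{1-r_{12}^{2}}=\operatorname{Var}\left(\hat{\beta}_{2}^{*}\right),
\]

that is, when \(\left|\beta_{2}\right|\) is smaller than the standard error of its full-model estimate, so the omitted term contributes more variance than bias.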
You may find it helpful to reread Section 10.1.2.

Related Book:

Introduction to Linear Regression Analysis, 6th Edition. Douglas C. Montgomery, Elizabeth A. Peck, G. Geoffrey Vining. ISBN: 9781119578727.
