Question:

In Section 9.5.3, we described how a generalized least squares (GLS) estimator for \(\alpha\) and \(\beta_{0}\) in the regression model \(y_{t}=\alpha+\beta_{0} x_{t}+e_{t}\), with \(\operatorname{AR}(1)\) errors \(e_{t}=\rho e_{t-1}+v_{t}\) and known \(\rho\), can be computed by applying OLS to the transformed model \(y_{t}^{*}=\alpha^{*}+\beta_{0} x_{t}^{*}+v_{t}\), where \(y_{t}^{*}=y_{t}-\rho y_{t-1}\), \(\alpha^{*}=\alpha(1-\rho)\), and \(x_{t}^{*}=x_{t}-\rho x_{t-1}\). In large samples, the GLS estimator is minimum variance because the \(v_{t}\) are homoskedastic and not autocorrelated. However, \(x_{t}^{*}\) and \(y_{t}^{*}\) can only be found for \(t=2,3, \ldots, T\); one observation is lost through the transformation. To ensure the GLS estimator is minimum variance in small samples, a transformed observation for \(t=1\) has to be included. Let \(e_{1}^{*}=\sqrt{1-\rho^{2}}\, e_{1}\).
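The transformed model follows from subtracting \(\rho\) times the lagged regression equation from the original, using only the definitions above:

\[
y_{t}-\rho y_{t-1}=\alpha(1-\rho)+\beta_{0}\left(x_{t}-\rho x_{t-1}\right)+\left(e_{t}-\rho e_{t-1}\right)=\alpha^{*}+\beta_{0} x_{t}^{*}+v_{t},
\]

since \(e_{t}-\rho e_{t-1}=v_{t}\) by the AR(1) error process.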

a. Using results in Appendix 9B, show that \(\operatorname{var}\left(e_{1}^{*}\right)=\sigma_{v}^{2}\) and that \(e_{1}^{*}\) is uncorrelated with \(v_{t}\), \(t=2,3, \ldots, T\).
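A sketch of the argument for part (a), assuming the standard AR(1) stationarity result \(\operatorname{var}\left(e_{1}\right)=\sigma_{v}^{2} /\left(1-\rho^{2}\right)\) (the Appendix 9B result the question points to):

\[
\operatorname{var}\left(e_{1}^{*}\right)=\left(1-\rho^{2}\right) \operatorname{var}\left(e_{1}\right)=\left(1-\rho^{2}\right) \frac{\sigma_{v}^{2}}{1-\rho^{2}}=\sigma_{v}^{2},
\]

and, because \(e_{1}\) is a function of \(v_{1}, v_{0}, v_{-1}, \ldots\) only, while each \(v_{t}\) with \(t \geq 2\) is a later innovation uncorrelated with all earlier ones,

\[
\operatorname{cov}\left(e_{1}^{*}, v_{t}\right)=\sqrt{1-\rho^{2}}\, \operatorname{cov}\left(e_{1}, v_{t}\right)=0, \quad t=2,3, \ldots, T.
\]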

b. Explain why the result in (a) implies that OLS applied to the following transformed model will yield a minimum variance estimator:

\[
y_{t}^{*}=\alpha j_{t}+\beta_{0} x_{t}^{*}+e_{t}^{*}, \quad t=1,2, \ldots, T
\]

where \(y_{t}^{*}=y_{t}-\rho y_{t-1}\), \(j_{t}=1-\rho\), \(x_{t}^{*}=x_{t}-\rho x_{t-1}\), and \(e_{t}^{*}=e_{t}-\rho e_{t-1}=v_{t}\) for \(t=2,3, \ldots, T\), and, for \(t=1\), \(y_{1}^{*}=\sqrt{1-\rho^{2}}\, y_{1}\), \(j_{1}=\sqrt{1-\rho^{2}}\), and \(x_{1}^{*}=\sqrt{1-\rho^{2}}\, x_{1}\). This estimator, particularly when it is used iteratively with an estimate of \(\rho\), is often known as the Prais-Winsten estimator.
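The result in (a) is what makes the full-sample estimator efficient: with the \(t=1\) observation included, the transformed error vector \(\left(e_{1}^{*}, v_{2}, \ldots, v_{T}\right)\) is homoskedastic (every element has variance \(\sigma_{v}^{2}\)) and serially uncorrelated, so the Gauss-Markov conditions hold for all \(T\) transformed observations. Below is a minimal numerical sketch of the transformation, assuming a known \(\rho\) and simulated data; the function name prais_winsten_ols and all parameter values are illustrative, not from the text.

```python
import numpy as np

def prais_winsten_ols(y, x, rho):
    """OLS on the Prais-Winsten transformed model y* = alpha*j + beta0*x* + e*.

    Observations t = 2..T use the quasi-difference z_t - rho*z_{t-1};
    observation t = 1 is rescaled by sqrt(1 - rho**2) so that all
    transformed errors share the same variance sigma_v**2.
    """
    s = np.sqrt(1.0 - rho**2)
    # Transformed variables: first observation rescaled, rest quasi-differenced.
    y_star = np.concatenate(([s * y[0]], y[1:] - rho * y[:-1]))
    x_star = np.concatenate(([s * x[0]], x[1:] - rho * x[:-1]))
    # "Intercept" column j_t: sqrt(1 - rho^2) for t = 1, (1 - rho) otherwise.
    j = np.concatenate(([s], np.full(len(y) - 1, 1.0 - rho)))
    X = np.column_stack((j, x_star))
    # The first coefficient is alpha itself, because the j column already
    # carries the (1 - rho) scaling of the intercept.
    coef, *_ = np.linalg.lstsq(X, y_star, rcond=None)
    return coef  # (alpha_hat, beta0_hat)

# Simulated example with known parameters (illustrative values).
rng = np.random.default_rng(0)
T, alpha, beta0, rho, sigma_v = 200, 2.0, 0.5, 0.7, 1.0
x = rng.normal(size=T)
e = np.zeros(T)
e[0] = rng.normal(scale=sigma_v / np.sqrt(1 - rho**2))  # stationary start
for t in range(1, T):
    e[t] = rho * e[t - 1] + rng.normal(scale=sigma_v)
y = alpha + beta0 * x + e

print(prais_winsten_ols(y, x, rho))  # should be close to (2.0, 0.5)
```

In practice \(\rho\) is unknown, and the iterative version of the estimator re-estimates \(\rho\) from the residuals and repeats the transformation until the estimates converge.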


Related Book: Principles of Econometrics, 5th Edition, by R. Carter Hill, William E. Griffiths, and Guay C. Lim. ISBN: 9781118452271.
