Question 17.1: Consider the regression model without an intercept term, $Y_i = \beta_1 X_i + u_i$ (so the true value of the intercept, $\beta_0$, is zero).
a. Derive the least squares estimator of $\beta_1$ for the restricted regression model $Y_i = \beta_1 X_i + u_i$. This is called the restricted least squares estimator ($\hat{\beta}_1^{RLS}$) of $\beta_1$ because it is estimated under a restriction, which in this case is $\beta_0 = 0$.
b. Derive the asymptotic distribution of $\hat{\beta}_1^{RLS}$ under Assumptions #1 through #3 of Key Concept 17.1.
c. Show that $\hat{\beta}_1^{RLS}$ is linear [Equation (5.24)] and, under Assumptions #1 and #2 of Key Concept 17.1, conditionally unbiased [Equation (5.25)].
d. Derive the conditional variance of $\hat{\beta}_1^{RLS}$ under the Gauss–Markov conditions (Assumptions #1 through #4 of Key Concept 17.1).
e. Compare the conditional variance of $\hat{\beta}_1^{RLS}$ in (d) to the conditional variance of the OLS estimator $\hat{\beta}_1$ (from the regression including an intercept) under the Gauss–Markov conditions. Which estimator is more efficient? Use the formulas for the variances to explain why.
f. Derive the exact sampling distribution of $\hat{\beta}_1^{RLS}$ under Assumptions #1 through #5 of Key Concept 17.1.
g. Now consider the estimator $\bar{\beta}_1 = \sum_{i=1}^n Y_i \big/ \sum_{i=1}^n X_i$. Derive an expression for $\operatorname{var}(\bar{\beta}_1 \mid X_1, \dots, X_n) - \operatorname{var}(\hat{\beta}_1^{RLS} \mid X_1, \dots, X_n)$ under the Gauss–Markov conditions, and use this expression to show that $\operatorname{var}(\bar{\beta}_1 \mid X_1, \dots, X_n) \ge \operatorname{var}(\hat{\beta}_1^{RLS} \mid X_1, \dots, X_n)$.
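As a sketch of where parts (a), (d), and (g) lead (these are standard results for the no-intercept model, not taken from the exercise itself): minimizing $\sum_{i=1}^n (Y_i - b_1 X_i)^2$ over $b_1$ gives the restricted least squares estimator, and under homoskedastic errors with variance $\sigma_u^2$ the two conditional variances in question are

```latex
\hat{\beta}_1^{RLS} = \frac{\sum_{i=1}^n X_i Y_i}{\sum_{i=1}^n X_i^2},
\qquad
\operatorname{var}\!\left(\hat{\beta}_1^{RLS} \mid X_1,\dots,X_n\right)
  = \frac{\sigma_u^2}{\sum_{i=1}^n X_i^2},
\qquad
\operatorname{var}\!\left(\bar{\beta}_1 \mid X_1,\dots,X_n\right)
  = \frac{n\,\sigma_u^2}{\left(\sum_{i=1}^n X_i\right)^2}.
```

The inequality in (g) then follows because $\left(\sum_{i=1}^n X_i\right)^2 \le n \sum_{i=1}^n X_i^2$ by the Cauchy–Schwarz inequality.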
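A quick way to sanity-check the variance ranking asked for in parts (e) and (g) is a small Monte Carlo experiment. The setup below (sample size, regressor distribution, error variance) is an illustrative assumption, not part of the exercise; it simulates the no-intercept model and compares the sampling variances of the RLS estimator, the usual OLS slope estimator with an intercept, and the ratio estimator $\sum Y_i / \sum X_i$ from part (g), conditional on a fixed draw of the regressors.

```python
# Monte Carlo sketch (assumed setup): compare sampling variances of three
# slope estimators for Y_i = b1*X_i + u_i with b0 = 0, under homoskedastic
# errors (the Gauss-Markov conditions of Key Concept 17.1).
import numpy as np

rng = np.random.default_rng(0)
n, b1, reps = 50, 2.0, 20_000

# Fixed regressors, so the comparison is conditional on X_1, ..., X_n.
X = rng.uniform(1.0, 3.0, size=n)

rls, ols, ratio = [], [], []
for _ in range(reps):
    u = rng.normal(0.0, 1.0, size=n)          # sigma_u^2 = 1
    Y = b1 * X + u
    # Restricted least squares: sum(X*Y) / sum(X^2)
    rls.append(np.sum(X * Y) / np.sum(X**2))
    # OLS slope from the regression including an intercept
    Xd, Yd = X - X.mean(), Y - Y.mean()
    ols.append(np.sum(Xd * Yd) / np.sum(Xd**2))
    # Ratio estimator from part (g): sum(Y) / sum(X)
    ratio.append(np.sum(Y) / np.sum(X))

v_rls, v_ols, v_ratio = np.var(rls), np.var(ols), np.var(ratio)
print(v_rls, v_ratio, v_ols)
# Expected ordering: var(RLS) <= var(ratio) and var(RLS) < var(OLS),
# since var(RLS | X) = sigma_u^2 / sum(X_i^2) has the largest denominator.
```

The simulated variance of the RLS estimator should sit close to its theoretical value $\sigma_u^2 / \sum X_i^2$ and come in below the other two, matching the inequality the exercise asks you to prove.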