Econometrics 1st Edition Bruce Hansen - Solutions
Prove Theorem 8.1.
Verify that for $\tilde{\beta}_{\mathrm{cls}}$ defined in (8.8), $R'\tilde{\beta}_{\mathrm{cls}} = c$.
In the linear projection model $Y = \alpha + X'\beta + e$, consider the restriction $\beta = 0$. (a) Find the CLS estimator of $\alpha$ under the restriction $\beta = 0$. (b) Find an expression for the efficient minimum distance estimator of $\alpha$ under the restriction $\beta = 0$.
In the model $Y = X_1'\beta_1 + X_2'\beta_2 + e$, with $\beta_1$ and $\beta_2$ each $k \times 1$, find the CLS estimator of $\beta = (\beta_1, \beta_2)$ subject to the constraint $\beta_1 = -\beta_2$.
In the model $Y = X_1'\beta_1 + X_2'\beta_2 + e$, show directly from definition (8.3) that the CLS estimator of $\beta = (\beta_1, \beta_2)$ subject to the constraint $\beta_1 = c$ (where $c$ is some given vector) is the OLS regression of $Y - X_1'c$ on $X_2$.
In the model $Y = X_1'\beta_1 + X_2'\beta_2 + e$, show directly from definition (8.3) that the CLS estimator of $\beta = (\beta_1, \beta_2)$ subject to the constraint $\beta_2 = 0$ is the OLS regression of $Y$ on $X_1$.
As in Exercise 3.26, use the cps09mar dataset and the subsample of white male Hispanics. Estimate the regression $\log(\mathrm{wage}) = \beta_1\,\mathrm{education} + \beta_2\,\mathrm{experience} + \beta_3\,\mathrm{experience}^2/100 + \beta_4$. (a) Report the coefficient estimates and robust standard errors. (b) Let $\theta$ be the ratio of the return to one year of …
The model is $Y = X'\beta + e$ with $\mathbb{E}[e \mid X] = 0$. An econometrician is worried about the impact of some unusually large values of the regressors. The model is thus estimated on the subsample for which $|X_i| \le c$ for some fixed $c$. Let $\tilde{\beta}$ denote the OLS estimator on this subsample. It equals $\tilde{\beta} = \bigl(\sum$ …
Take the regression model $Y = X'\beta + e$, $\mathbb{E}[e \mid X] = 0$, $\mathbb{E}[e^2 \mid X = x] = \sigma^2(x)$, with $X \in \mathbb{R}^k$. Assume that $\mathbb{P}[e = 0] = 0$. Consider the infeasible estimator $\tilde{\beta} = \bigl(\sum_{i=1}^n e_i^{-2} X_i X_i'\bigr)^{-1}\bigl(\sum_{i=1}^n e_i^{-2} X_i Y_i\bigr)$. This is a WLS estimator using the weights $e_i^{-2}$. (a) Find the asymptotic distribution …
Take the projection model $Y = X'\beta + e$ with $\mathbb{E}[Xe] = 0$. For a positive function $w(x)$ let $W_i = w(X_i)$. Consider the estimator $\tilde{\beta} = \bigl(\sum_{i=1}^n W_i X_i X_i'\bigr)^{-1}\bigl(\sum_{i=1}^n W_i X_i Y_i\bigr)$. Find the probability limit (as $n \to \infty$) of $\tilde{\beta}$. Do you need to add an assumption? Is $\tilde{\beta}$ consistent for $\beta$? If not, under …
The parameter $\beta$ is defined in the model $Y = X^*\beta + e$ where $e$ is independent of $X^* \ge 0$, $\mathbb{E}[e] = 0$, $\mathbb{E}[e^2] = \sigma^2$. The observables are $(Y, X)$ where $X = X^* v$ and $v > 0$ is random scale measurement error, independent of $X^*$ and $e$. Consider the least squares estimator $\hat{\beta}$ for $\beta$. (a) Find the plim of …
The model is $Y = X\beta + e$ with $\mathbb{E}[e \mid X] = 0$ and $X \in \mathbb{R}$. Consider the estimator $\tilde{\beta} = \frac{1}{n}\sum_{i=1}^n Y_i / X_i$. Find conditions under which $\tilde{\beta}$ is consistent for $\beta$ as $n \to \infty$.
Take the model $Y = X'\beta + e$, $\mathbb{E}[e \mid X] = 0$; $Z = X'\beta\,\gamma + u$, $\mathbb{E}[u \mid X] = 0$, where $X$ is a $k$-vector and $Z$ is scalar. Your goal is to estimate the scalar parameter $\gamma$. You use a two-step estimator: estimate $\hat{\beta}$ by least squares of $Y$ on $X$; estimate $\hat{\gamma}$ by least squares of $Z$ on $X'\hat{\beta}$. (a) Show that $\hat{\gamma}$ …
Take the model $Y = X'\beta + e$, $\mathbb{E}[e \mid X] = 0$, $\mathbb{E}[e^2 \mid X] = Z'\gamma$, where $Z$ is a (vector) function of $X$. The sample is $i = 1, \dots, n$ with i.i.d. observations. Assume that $Z'\gamma > 0$ for all $Z$. Suppose you want to forecast $Y_{n+1}$ given $X_{n+1} = x$ and $Z_{n+1} = z$ for an out-of-sample observation $n+1$. Describe …
The variables $\{Y_i, X_i, W_i\}$ are a random sample. The parameter $\beta$ is estimated by minimizing the criterion function $S(\beta) = \sum_{i=1}^n W_i (Y_i - X_i'\beta)^2$; that is, $\hat{\beta} = \operatorname{argmin}_\beta S(\beta)$. (a) Find an explicit expression for $\hat{\beta}$. (b) What population parameter $\beta$ is $\hat{\beta}$ estimating? Be explicit about any …
Take the model $Y = X'\beta + e$ with $\mathbb{E}[Xe] = 0$ and suppose you have observations $i = 1, \dots, 2n$. (The number of observations is $2n$.) You randomly split the sample in half (each has $n$ observations), calculate $\hat{\beta}_1$ by least squares on the first sample, and $\hat{\beta}_2$ by least squares on the second sample.
Suppose an economic model suggests $m(x) = \mathbb{E}[Y \mid X = x] = \beta_0 + \beta_1 x + \beta_2 x^2$ where $X \in \mathbb{R}$. You have a random sample $(Y_i, X_i)$, $i = 1, \dots, n$. (a) Describe how to estimate $m(x)$ at a given value $x$. (b) Describe (be specific) an appropriate confidence interval for $m(x)$.
An economist reports a set of parameter estimates, including the coefficient estimates $\hat{\beta}_1 = 1.0$, $\hat{\beta}_2 = 0.8$, and standard errors $s(\hat{\beta}_1) = 0.07$ and $s(\hat{\beta}_2) = 0.07$. The author writes "The estimates show that $\beta_1$ is larger than $\beta_2$." (a) Write down the formula for an asymptotic 95% …
From an i.i.d. sample (Yi ,Xi ) of size n you randomly take half the observations. You estimate a least squares regression of Y on X using only this sub-sample. Is the estimated slope coefficient b¯ consistent for the population projection coefficient? Explain your reasoning.
Take the linear model $Y = X\beta + e$ with $\mathbb{E}[e \mid X] = 0$ and $X \in \mathbb{R}$. Consider the estimator $\hat{\beta} = \sum_{i=1}^n X_i^3 Y_i \big/ \sum_{i=1}^n X_i^4$. Find the asymptotic distribution of $\sqrt{n}\,(\hat{\beta} - \beta)$ as $n \to \infty$.
Take the model $Y = X_1\beta_1 + X_2\beta_2 + e$, $\mathbb{E}[Xe] = 0$, with both $\beta_1 \in \mathbb{R}$ and $\beta_2 \in \mathbb{R}$, and define the parameter $\theta = \beta_1\beta_2$. (a) What is the appropriate estimator $\hat{\theta}$ for $\theta$? (b) Find the asymptotic distribution of $\hat{\theta}$ under standard regularity conditions. (c) Show how to calculate an asymptotic 95% …
Consider an i.i.d. sample $\{Y_i, X_i\}$, $i = 1, \dots, n$, where $Y$ and $X$ are scalar. Consider the reverse projection model $X = Y\gamma + u$ with $\mathbb{E}[Yu] = 0$ and define the parameter of interest as $\theta = 1/\gamma$. (a) Propose an estimator $\hat{\gamma}$ of $\gamma$. (b) Propose an estimator $\hat{\theta}$ of $\theta$. (c) Find the asymptotic …
Consider the model $Y = \alpha + \beta X + e$, $\mathbb{E}[e] = 0$, $\mathbb{E}[Xe] = 0$, with both $Y$ and $X$ scalar. Assuming $\alpha > 0$ and $\beta < 0$, suppose the parameter of interest is the area under the regression curve (e.g. consumer surplus), which is $A = -\alpha^2/2\beta$. Let $\hat{\theta} = (\hat{\alpha}, \hat{\beta})'$ be the least squares estimators of $\theta =$ …
Take a regression model with i.i.d. observations $(Y_i, X_i)$ with $X \in \mathbb{R}$: $Y = X\beta + e$, $\mathbb{E}[e \mid X] = 0$, $\Omega = \mathbb{E}[X^2 e^2]$. Let $\hat{\beta}$ be the OLS estimator of $\beta$ with residuals $\hat{e}_i = Y_i - X_i\hat{\beta}$. Consider the estimators of $\Omega$: $\tilde{\Omega} = \frac{1}{n}\sum_{i=1}^n X_i^2 e_i^2$ and $\hat{\Omega} = \frac{1}{n}\sum_{i=1}^n X_i^2 \hat{e}_i^2$. (a) Find the asymptotic distribution …
In the homoskedastic regression model $Y = X'\beta + e$ with $\mathbb{E}[e \mid X] = 0$ and $\mathbb{E}[e^2 \mid X] = \sigma^2$, suppose $\hat{\beta}$ is the OLS estimator of $\beta$ with covariance matrix estimator $\hat{V}_{\hat{\beta}}$ based on a sample of size $n$. Let $\hat{\sigma}^2$ be the estimator of $\sigma^2$. You wish to forecast an out-of-sample value of $Y_{n+1}$ given that …
The model is $Y = X\beta + e$ with $\mathbb{E}[e \mid X] = 0$ and $X \in \mathbb{R}$. Consider the two estimators $\hat{\beta} = \sum_{i=1}^n X_i Y_i \big/ \sum_{i=1}^n X_i^2$ and $\tilde{\beta} = \frac{1}{n}\sum_{i=1}^n Y_i / X_i$. (a) Under the stated assumptions, are both estimators consistent for $\beta$? (b) Are there conditions under which either estimator is efficient?
Find the asymptotic distribution of $\sqrt{n}\,(\hat{\sigma}^2 - \sigma^2)$ as $n \to \infty$.
Of the variables $(Y^*, Y, X)$ only the pair $(Y, X)$ are observed. In this case we say that $Y^*$ is a latent variable. Suppose $Y^* = X'\beta + e$, $\mathbb{E}[Xe] = 0$, and $Y = Y^* + u$, where $u$ is a measurement error satisfying $\mathbb{E}[Xu] = 0$ and $\mathbb{E}[Y^* u] = 0$. Let $\hat{\beta}$ denote the OLS coefficient from the regression of $Y$ on $X$. (a) …
The model is $Y = X'\beta + e$, $\mathbb{E}[Xe] = 0$, $\Omega = \mathbb{E}[XX'e^2]$. Find the method of moments estimators $(\hat{\beta}, \hat{\Omega})$ for $(\beta, \Omega)$.
Show (7.13)-(7.16).
Verify some of the calculations reported in Section 7.4. Specifically, suppose that $X_1$ and $X_2$ only take the values $\{-1, +1\}$, symmetrically, with $\mathbb{P}[X_1 = X_2 = 1] = \mathbb{P}[X_1 = X_2 = -1] = 3/8$, $\mathbb{P}[X_1 = 1, X_2 = -1] = \mathbb{P}[X_1 = -1, X_2 = 1] = 1/8$, $\mathbb{E}[e_i^2 \mid X_1 = X_2] = \frac{5}{4}$, $\mathbb{E}[e_i^2 \mid X_1 \ne X_2] = 1$ …
For the ridge regression estimator (7.43), set $\lambda = cn$ where $c > 0$ is fixed as $n \to \infty$. Find the probability limit of $\hat{\beta}$ as $n \to \infty$.
Take the model $Y = X'\beta + e$ with $\mathbb{E}[Xe] = 0$. Define the ridge regression estimator $\hat{\beta} = \bigl(\sum_{i=1}^n X_i X_i' + \lambda I_k\bigr)^{-1}\bigl(\sum_{i=1}^n X_i Y_i\bigr)$ (7.43), where $\lambda > 0$ is a fixed constant. Find the probability limit of $\hat{\beta}$ as $n \to \infty$. Is $\hat{\beta}$ consistent for $\beta$?
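Although the exercise concerns the general $k$-regressor case, the intuition is easy to see numerically in the scalar case, where the ridge estimator reduces to $\sum x_i y_i / (\sum x_i^2 + \lambda)$: a fixed $\lambda$ is swamped by sums that grow like $n$. A minimal Python sketch (the data-generating process, seed, and constants below are made-up illustrative choices, not from the text):

```python
import random

# Scalar sketch: with one regressor the ridge estimator is
# sum(x*y) / (sum(x^2) + lam). A FIXED lam is dominated by the sums
# (which grow like n), so the estimator stays consistent for beta.
random.seed(0)
beta, lam = 2.0, 100.0

def ridge_scalar(n):
    xs = [random.gauss(0, 1) for _ in range(n)]
    ys = [beta * x + random.gauss(0, 1) for x in xs]
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

b_small = ridge_scalar(100)      # visibly shrunk toward zero by the penalty
b_large = ridge_scalar(100_000)  # close to beta = 2: the shrinkage vanishes
```

The same comparison with $\lambda = cn$ (the previous exercise) would keep the shrinkage factor bounded away from one, so the estimator would stay inconsistent.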
Take the model $Y = X_1'\beta_1 + X_2'\beta_2 + e$ with $\mathbb{E}[Xe] = 0$. Suppose that $\beta_1$ is estimated by regressing $Y$ on $X_1$ only. Find the probability limit of this estimator. In general, is it consistent for $\beta_1$? If not, under what conditions is this estimator consistent for $\beta_1$?
In the normal regression model, let $s^2$ be the unbiased estimator of the error variance $\sigma^2$ from (4.31). (a) Show that $\mathrm{var}[s^2] = 2\sigma^4/(n-k)$. (b) Show that $\mathrm{var}[s^2]$ is strictly larger than the Cramér-Rao Lower Bound for $\sigma^2$.
Show (5.20).
Show that the test "Reject $H_0$ if $LR \ge c_1$" for $LR$ defined in (5.18), and the test "Reject $H_0$ if $F \ge c_2$" for $F$ defined in (5.19), yield the same decisions if $c_2 = (\exp(c_1/n) - 1)(n-k)/q$. Does this mean that the two tests are equivalent?
Let $\hat{C}_\beta = [L, U]$ be a $1-\alpha$ confidence interval for $\beta$, and consider the transformation $\theta = g(\beta)$ where $g(\cdot)$ is monotonically increasing. Consider the confidence interval $\hat{C}_\theta = [g(L), g(U)]$ for $\theta$. Show that $\mathbb{P}[\theta \in \hat{C}_\theta] = \mathbb{P}[\beta \in \hat{C}_\beta]$. Use this result to develop a confidence …
Let $F(u)$ be the distribution function of a random variable $X$ whose density is symmetric about zero. (This includes the standard normal and the student $t$.) Show that $F(-u) = 1 - F(u)$.
In the normal regression model, show that the robust covariance matrix estimators $\hat{V}^{\mathrm{HC0}}_{\hat{\beta}}$, $\hat{V}^{\mathrm{HC1}}_{\hat{\beta}}$, $\hat{V}^{\mathrm{HC2}}_{\hat{\beta}}$, and $\hat{V}^{\mathrm{HC3}}_{\hat{\beta}}$ are independent of the OLS estimator $\hat{\beta}$, conditional on $X$.
In the normal regression model, show that the leave-one-out prediction errors $\tilde{e}_i$ and the standardized residuals $\bar{e}_i$ are independent of $\hat{\beta}$, conditional on $X$. Hint: use (3.45) and (4.29).
For the regression in-sample predicted values $\hat{Y}_i$, show that $\hat{Y}_i \mid X \sim \mathrm{N}(X_i'\beta, \sigma^2 h_{ii})$, where $h_{ii}$ are the leverage values (3.40).
Show that $\operatorname{argmax}_{\theta \in \Theta} \ell_n(\theta) = \operatorname{argmax}_{\theta \in \Theta} L_n(\theta)$.
Show that if $e \sim \mathrm{N}(0, \Sigma)$ and $\Sigma = AA'$, then $u = A^{-1}e \sim \mathrm{N}(0, I_n)$.
Show that if $e \sim \mathrm{N}(0, I_n\sigma^2)$ and $H'H = I_n$, then $u = H'e \sim \mathrm{N}(0, I_n\sigma^2)$.
Show that if $Q \sim \chi^2_r$, then $\mathbb{E}[Q] = r$ and $\mathrm{var}[Q] = 2r$. Hint: use the representation $Q = \sum_{i=1}^r Z_i^2$ with the $Z_i$ independent $\mathrm{N}(0,1)$.
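The hinted representation also makes the two moments easy to illustrate by simulation. A hedged Python sketch ($r$, the seed, and the replication count are arbitrary choices):

```python
import random

# Monte Carlo illustration of E[Q] = r and var[Q] = 2r for Q ~ chi^2_r,
# via the representation Q = Z_1^2 + ... + Z_r^2 with Z_i iid N(0,1).
random.seed(1)
r, reps = 5, 100_000
draws = [sum(random.gauss(0.0, 1.0) ** 2 for _ in range(r)) for _ in range(reps)]
mean_q = sum(draws) / reps
var_q = sum((q - mean_q) ** 2 for q in draws) / reps
# mean_q should be near r = 5, var_q near 2r = 10 (up to Monte Carlo noise)
```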
Extend the empirical analysis reported in Section 4.21 using the DDK2011 dataset on the textbook website. Do a regression of standardized test score (totalscore normalized to have zero mean and variance 1) on tracking, age, gender, being assigned to the contract teacher, and student's percentile …
Continue the empirical analysis in Exercise 3.26. Calculate standard errors using the HC3 method. Repeat in your second programming language. Are they identical?
Continue the empirical analysis in Exercise 3.24. (a) Calculate standard errors using the homoskedasticity formula and using the four covariance matrices from Section 4.14. (b) Repeat in a second programming language. Are they identical?
Take the linear regression model with $\mathbb{E}[Y \mid X] = X\beta$. Define the ridge regression estimator $\hat{\beta} = (X'X + I_k\lambda)^{-1}X'Y$, where $\lambda > 0$ is a fixed constant. Find $\mathbb{E}[\hat{\beta} \mid X]$. Is $\hat{\beta}$ biased for $\beta$?
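A tiny deterministic Python illustration of the bias (the numbers are made up): with a noiseless response, the scalar ridge estimate equals $\beta \cdot S_{xx}/(S_{xx}+\lambda)$, strictly shrunk toward zero.

```python
# Scalar sketch of the ridge-bias exercise. With y = x*beta exactly
# (no noise), the one-regressor ridge estimate is
#   sum(x*y) / (sum(x^2) + lam) = beta * sxx / (sxx + lam),
# so E[b|X] shrinks beta toward zero: ridge is biased for beta.
beta, lam = 2.0, 5.0
xs = [1.0, 2.0, 3.0, 4.0]
ys = [beta * x for x in xs]          # E[Y|X] = X*beta holds exactly here
sxx = sum(x * x for x in xs)         # 30.0
b_ridge = sum(x * y for x, y in zip(xs, ys)) / (sxx + lam)
# b_ridge = 2 * 30/35, strictly less than beta = 2
```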
An economist friend tells you that the assumption that the observations $(Y_i, X_i)$ are i.i.d. implies that the regression $Y = X'\beta + e$ is homoskedastic. Do you agree with your friend? How would you explain your position?
The model is $Y_i = X_i'\beta + e_i$, $\mathbb{E}[e_i \mid X_i] = 0$, $\mathbb{E}[e_i^2 \mid X_i] = \sigma_i^2$, $\Sigma = \mathrm{diag}\{\sigma_1^2, \dots, \sigma_n^2\}$. The parameter $\beta$ is estimated by OLS, $\hat{\beta} = (X'X)^{-1}X'Y$, and GLS, $\tilde{\beta} = (X'\Sigma^{-1}X)^{-1}X'\Sigma^{-1}Y$. Let $\hat{e} = Y - X\hat{\beta}$ and $\tilde{e} = Y - X\tilde{\beta}$ denote the residuals. Let $\hat{R}^2 = 1 - \hat{e}'\hat{e}/(Y^{*\prime}Y^*)$ and $\tilde{R}^2$ …
Take the model in vector notation, $Y = X\beta + e$, $\mathbb{E}[e \mid X] = 0$, $\mathbb{E}[ee' \mid X] = \Sigma$. Assume for simplicity that $\Sigma$ is known. Consider the OLS and GLS estimators $\hat{\beta} = (X'X)^{-1}(X'Y)$ and $\tilde{\beta} = (X'\Sigma^{-1}X)^{-1}(X'\Sigma^{-1}Y)$. Compute the (conditional) covariance between $\hat{\beta}$ and $\tilde{\beta}$ …
Let $Y$ be $n \times 1$, $X$ be $n \times k$, and $X^* = XC$, where $C$ is $k \times k$ and full-rank. Let $\hat{\beta}$ be the least squares estimator from the regression of $Y$ on $X$, and let $\hat{V}$ be the estimate of its asymptotic covariance matrix. Let $\hat{\beta}^*$ and $\hat{V}^*$ be those from the regression of $Y$ on $X^*$. Derive an expression for …
Take the model $Y = X_1'\beta_1 + X_2'\beta_2 + e$, $\mathbb{E}[e \mid X] = 0$, $\mathbb{E}[e^2 \mid X] = \sigma^2$, where $X = (X_1, X_2)$ with $X_1$ $k_1 \times 1$ and $X_2$ $k_2 \times 1$. Consider the short regression $Y_i = X_{1i}'\hat{\beta}_1 + \hat{e}_i$ and define the error variance estimator $s^2 = (n - k_1)^{-1}\sum_{i=1}^n \hat{e}_i^2$. Find $\mathbb{E}[s^2 \mid X]$.
Suppose that for the random variables $(Y, X)$ with $X > 0$, an economic model implies $\mathbb{E}[Y \mid X] = (\gamma + \theta X)^{1/2}$ (4.64). A friend suggests that you estimate $\gamma$ and $\theta$ by the linear regression of $Y^2$ on $X$, that is, to estimate the equation $Y^2 = \alpha + \beta X + e$ (4.65). (a) Investigate your friend's …
Take the linear homoskedastic CEF $Y^* = X'\beta + e$ (4.63), $\mathbb{E}[e \mid X] = 0$, $\mathbb{E}[e^2 \mid X] = \sigma^2$, and suppose that $Y^*$ is measured with error. Instead of $Y^*$, we observe $Y = Y^* + u$, where $u$ is measurement error. Suppose that $e$ and $u$ are independent and $\mathbb{E}[u \mid X] = 0$, $\mathbb{E}[u^2 \mid X] = \sigma_u^2(X)$. (a) Derive an …
Consider an i.i.d. sample $\{Y_i, X_i\}$, $i = 1, \dots, n$, where $X$ is $k \times 1$. Assume the linear conditional expectation model $Y = X'\beta + e$ with $\mathbb{E}[e \mid X] = 0$. Assume that $n^{-1}X'X = I_k$ (orthonormal regressors). Consider the OLS estimator $\hat{\beta}$. (a) Find $V_{\hat{\beta}} = \mathrm{var}[\hat{\beta}]$. (b) In general, are $\hat{\beta}_j$ and $\hat{\beta}_\ell$ for $j$ …
Take a regression model $Y = X\beta + e$ with $\mathbb{E}[e \mid X] = 0$, i.i.d. observations $(Y_i, X_i)$, and scalar $X$. The parameter of interest is $\theta = \beta^2$. Consider the OLS estimators $\hat{\beta}$ and $\hat{\theta} = \hat{\beta}^2$. (a) Find $\mathbb{E}[\hat{\theta} \mid X]$ using our knowledge of $\mathbb{E}[\hat{\beta} \mid X]$ and $V_{\hat{\beta}} = \mathrm{var}[\hat{\beta} \mid X]$. Is $\hat{\theta}$ biased for …
Take the simple regression model $Y = X\beta + e$, $X \in \mathbb{R}$, $\mathbb{E}[e \mid X] = 0$. Define $\sigma_i^2 = \mathbb{E}[e_i^2 \mid X_i]$ and $\mu_{3i} = \mathbb{E}[e_i^3 \mid X_i]$ and consider the OLS coefficient $\hat{\beta}$. Find $\mathbb{E}[(\hat{\beta} - \beta)^3 \mid X]$.
Let $\mu = \mathbb{E}[Y]$, $\sigma^2 = \mathbb{E}[(Y-\mu)^2]$, and $\mu_3 = \mathbb{E}[(Y-\mu)^3]$, and consider the sample mean $\bar{Y} = \frac{1}{n}\sum_{i=1}^n Y_i$. Find $\mathbb{E}[(\bar{Y}-\mu)^3]$ as a function of $\mu$, $\sigma^2$, $\mu_3$, and $n$.
Show (4.41) in the homoskedastic regression model.
Prove (4.40).
Show (4.32) in the homoskedastic regression model.
Let $(Y_i, X_i)$ be a random sample with $\mathbb{E}[Y \mid X] = X'\beta$. Consider the Weighted Least Squares (WLS) estimator $\tilde{\beta}_{\mathrm{wls}} = (X'WX)^{-1}(X'WY)$, where $W = \mathrm{diag}(w_1, \dots, w_n)$ and $w_i = X_{ji}^{-2}$, with $X_{ji}$ one of the elements of $X_i$. (a) In which contexts would $\tilde{\beta}_{\mathrm{wls}}$ be a good estimator? (b) Using your intuition, in …
Let $\tilde{\beta}$ be the GLS estimator (4.22) under the assumptions (4.18) and (4.19). Assume that $\Sigma$ is known and $\sigma^2$ is unknown. Define the residual vector $\tilde{e} = Y - X\tilde{\beta}$ and the estimator $\tilde{\sigma}^2 = \frac{1}{n-k}\tilde{e}'\Sigma^{-1}\tilde{e}$ for $\sigma^2$. (a) Show (4.23). (b) Show (4.24). (c) Prove that $\tilde{e} = M_1 e$, where $M_1 = I - X(X'\Sigma^{-1}X)^{-1}X'\Sigma^{-1}$.
Prove Theorem 4.5 under the restriction to linear estimators.
Prove (4.20) and (4.21).
True or False: if $Y = X\beta + e$, $X \in \mathbb{R}$, $\mathbb{E}[e \mid X] = 0$, and $\hat{e}_i$ is the OLS residual from the regression of $Y_i$ on $X_i$, then $\sum_{i=1}^n X_i^2 \hat{e}_i = 0$.
Explain the difference between $\bar{Y}$ and $\mu$. Explain the difference between $n^{-1}\sum_{i=1}^n X_i X_i'$ and $\mathbb{E}[X_i X_i']$.
Calculate $\mathbb{E}[(\bar{Y}-\mu)^3]$, the skewness of $\bar{Y}$. Under what condition is it zero?
For some integer $k$, set $\mu_k = \mathbb{E}[Y^k]$. (a) Construct an estimator $\hat{\mu}_k$ for $\mu_k$. (b) Show that $\hat{\mu}_k$ is unbiased for $\mu_k$. (c) Calculate the variance of $\hat{\mu}_k$, say $\mathrm{var}[\hat{\mu}_k]$. What assumption is needed for $\mathrm{var}[\hat{\mu}_k]$ to be finite? (d) Propose an estimator of $\mathrm{var}[\hat{\mu}_k]$.
Use the cps09mar data set. (a) Estimate a log wage regression for the subsample of white male Hispanics. In addition to education, experience, and its square, include a set of binary variables for regions and marital status. For regions, create dummy variables for Northeast, South, and West so that …
Estimate equation (3.49) as in part (a) of the previous question. Let $\hat{e}_i$ be the OLS residual, $\hat{Y}_i$ the predicted value from the regression, $X_{1i}$ be education, and $X_{2i}$ be experience. Numerically calculate the following: (a) $\sum_{i=1}^n \hat{e}_i$ (b) $\sum_{i=1}^n X_{1i}\hat{e}_i$ (c) $\sum_{i=1}^n X_{2i}\hat{e}_i$ (d) $\sum_{i=1}^n X_{1i}^2\hat{e}_i$ (e) $\sum_{i=1}^n X_{2i}^2$ …
Use the cps09mar data set described in Section 3.22 and available on the textbook website. Take the sub-sample used for equation (3.49) (see Section 3.25 for data construction). (a) Estimate equation (3.49) and compute the equation $R^2$ and sum of squared errors. (b) Re-estimate the slope on education …
The data matrix is $(Y, X)$ with $X = [X_1, X_2]$, and consider the transformed regressor matrix $Z = [X_1, X_2 - X_1]$. Suppose you do a least squares regression of $Y$ on $X$, and a least squares regression of $Y$ on $Z$. Let $\hat{\sigma}^2$ and $\tilde{\sigma}^2$ denote the residual variance estimates from the two regressions.
You estimate a least squares regression $Y_i = X_{1i}'\tilde{\beta}_1 + \tilde{u}_i$ and then regress the residuals on another set of regressors, $\tilde{u}_i = X_{2i}'\tilde{\beta}_2 + \tilde{e}_i$. Does this second regression give you the same estimated coefficients as from estimation of a least squares regression on both sets of regressors, $Y_i = X_{1i}'\hat{\beta}_1 + X_{2i}'\hat{\beta}_2 + \hat{e}_i$?
Consider the least squares regression estimators $Y_i = X_{1i}\hat{\beta}_1 + X_{2i}\hat{\beta}_2 + \hat{e}_i$ and the "one regressor at a time" regression estimators $Y_i = X_{1i}\tilde{\beta}_1 + \tilde{e}_{1i}$ and $Y_i = X_{2i}\tilde{\beta}_2 + \tilde{e}_{2i}$. Under what condition does $\tilde{\beta}_1 = \hat{\beta}_1$ and $\tilde{\beta}_2 = \hat{\beta}_2$?
Define the leave-one-out estimator of $\sigma^2$, $\hat{\sigma}^2_{(-i)} = \frac{1}{n-1}\sum_{j \ne i}\bigl(Y_j - X_j'\hat{\beta}_{(-i)}\bigr)^2$. This is the estimator obtained from the sample with observation $i$ omitted. Show that $\hat{\sigma}^2_{(-i)} = \frac{n}{n-1}\hat{\sigma}^2 - \frac{\hat{e}_i^2}{(n-1)(1-h_{ii})}$.
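The identity can be verified numerically by brute force, refitting with each observation deleted. A Python sketch on made-up data for the simple-regression case (intercept plus one regressor, so $k = 2$, with leverage $h_{ii} = 1/n + (x_i - \bar{x})^2 / \sum_j (x_j - \bar{x})^2$):

```python
# Numeric check of the leave-one-out variance identity. The identity is
# exact; sigma2 below uses the divide-by-n convention for \hat\sigma^2.

def ols_fit(xs, ys):
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
    return ybar - b * xbar, b          # (intercept, slope)

xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]   # arbitrary illustrative data
ys = [1.1, 1.9, 3.2, 3.8, 5.3, 5.8]
n = len(xs)
a, b = ols_fit(xs, ys)
resid = [y - a - b * x for x, y in zip(xs, ys)]
sigma2 = sum(e * e for e in resid) / n
xbar = sum(xs) / n
sxx = sum((x - xbar) ** 2 for x in xs)

ok = True
for i in range(n):
    # direct: refit with observation i omitted, divisor n - 1
    xs_i, ys_i = xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]
    ai, bi = ols_fit(xs_i, ys_i)
    direct = sum((y - ai - bi * x) ** 2 for x, y in zip(xs_i, ys_i)) / (n - 1)
    # closed form from the exercise, with the leverage for this design
    h = 1.0 / n + (xs[i] - xbar) ** 2 / sxx
    formula = n / (n - 1) * sigma2 - resid[i] ** 2 / ((n - 1) * (1 - h))
    ok = ok and abs(direct - formula) < 1e-10
```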
For the intercept-only model $Y_i = \beta + e_i$, show that the leave-one-out prediction error is $\tilde{e}_i = \frac{n}{n-1}\bigl(Y_i - \bar{Y}\bigr)$.
For which observations will $\hat{\beta}_{(-i)} = \hat{\beta}$?
For $\tilde{\sigma}^2$ defined in (3.46), show that $\tilde{\sigma}^2 \ge \hat{\sigma}^2$. Is equality possible?
Consider two least squares regressions, $Y = X_1\tilde{\beta}_1 + \tilde{e}$ and $Y = X_1\hat{\beta}_1 + X_2\hat{\beta}_2 + \hat{e}$. Let $R_1^2$ and $R_2^2$ be the $R$-squared from the two regressions. Show that $R_2^2 \ge R_1^2$. Is there a case (explain) when $R_2^2 = R_1^2$?
Prove that $R^2$ is the square of the sample correlation between $Y$ and $\hat{Y}$.
Let $\hat{\beta}_n = (X_n'X_n)^{-1}X_n'Y_n$ denote the OLS estimate when $Y_n$ is $n \times 1$ and $X_n$ is $n \times k$. A new observation $(Y_{n+1}, X_{n+1})$ becomes available. Prove that the OLS estimate computed using this additional observation is $\hat{\beta}_{n+1} = \hat{\beta}_n + \frac{1}{1 + X_{n+1}'(X_n'X_n)^{-1}X_{n+1}}(X_n'X_n)^{-1}X_{n+1}\bigl(Y_{n+1} - X_{n+1}'\hat{\beta}_n\bigr)$.
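In the scalar case ($k = 1$, no intercept) the recursive update can be checked against a full refit with a few lines of Python; the numbers below are arbitrary.

```python
# Scalar check of the recursive OLS update. With Sxx = sum(x^2) and
# b_n = sum(x*y)/Sxx, the update specializes to
#   b_{n+1} = b_n + (1 / (1 + x^2/Sxx)) * (x/Sxx) * (y - x*b_n),
# which must equal the estimate refit on all n + 1 observations.
xs = [1.0, 2.0, -1.0, 3.0]
ys = [2.1, 3.9, -2.2, 6.3]
x_new, y_new = 2.5, 5.2                  # the arriving observation

sxx = sum(x * x for x in xs)
b_n = sum(x * y for x, y in zip(xs, ys)) / sxx

# recursive update: uses only b_n, Sxx, and the new point
b_update = b_n + (1.0 / (1.0 + x_new ** 2 / sxx)) * (x_new / sxx) * (y_new - x_new * b_n)

# full refit on all n + 1 observations
b_refit = (sum(x * y for x, y in zip(xs, ys)) + x_new * y_new) / (sxx + x_new ** 2)
```

The update never re-touches the old data, which is the point of the exercise: $(X_n'X_n)^{-1}$ and $\hat{\beta}_n$ are sufficient for the new estimate.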
Let $D_1$ and $D_2$ be defined as in the previous exercise. (a) In the OLS regression $Y = D_1\hat{\gamma}_1 + D_2\hat{\gamma}_2 + \hat{u}$, show that $\hat{\gamma}_1$ is the sample mean of the dependent variable among the men of the sample ($\bar{Y}_1$), and that $\hat{\gamma}_2$ is the sample mean among the women ($\bar{Y}_2$). (b) Let $X$ ($n \times k$) be an additional matrix of …
A dummy variable takes on only the values 0 and 1. It is used for categorical variables. Let $D_1$ and $D_2$ be vectors of 1's and 0's, with the $i$-th element of $D_1$ equaling 1 and that of $D_2$ equaling 0 if the person is a man, and the reverse if the person is a woman. Suppose that there are $n_1$ men and …
Show that when $X$ contains a constant, $n^{-1}\sum_{i=1}^n \hat{Y}_i = \bar{Y}$.
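A quick numeric confirmation in Python (made-up data): once an intercept is included, the residuals sum to zero, so the fitted values average exactly to $\bar{Y}$.

```python
# Fitted values from a regression WITH an intercept average to ybar.
xs = [0.5, 1.5, 2.0, 4.0, 4.5]           # arbitrary data
ys = [1.0, 2.2, 2.1, 4.4, 4.6]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
    / sum((x - xbar) ** 2 for x in xs)
a = ybar - b * xbar                       # the "constant" column of X
fitted_mean = sum(a + b * x for x in xs) / n
# fitted_mean equals ybar up to floating-point rounding
```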
Show that if $X = [X_1\ X_2]$ and $X_1'X_2 = 0$, then $P = P_1 + P_2$.
Show that $\mathrm{tr}\,M = n - k$.
Show that $M$ is idempotent: $MM = M$.
Show that if $X = [X_1\ X_2]$, then $PX_1 = X_1$ and $MX_1 = 0$.
Let $\hat{Y} = X(X'X)^{-1}X'Y$. Find the OLS coefficient from a regression of $\hat{Y}$ on $X$.
Let $\hat{e}$ be the OLS residual from a regression of $Y$ on $X$. Find the OLS coefficient from a regression of $\hat{e}$ on $X$.
Let $\hat{e}$ be the OLS residual from a regression of $Y$ on $X = [X_1\ X_2]$. Find $X_2'\hat{e}$.
Using matrix algebra, show that $X'\hat{e} = 0$.
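For a regression with an intercept and one regressor, $X'\hat{e} = 0$ is just the two normal equations, which can be confirmed numerically (Python, made-up data):

```python
# X'e_hat = 0 for intercept-plus-one-regressor OLS: the constant column
# gives sum(e_hat) = 0 and the regressor column gives sum(x * e_hat) = 0.
xs = [1.0, 2.0, 3.0, 5.0]                 # arbitrary data
ys = [0.9, 2.3, 2.8, 5.1]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
    / sum((x - xbar) ** 2 for x in xs)
a = ybar - b * xbar
resid = [y - a - b * x for x, y in zip(xs, ys)]
m0 = sum(resid)                               # constant column · residuals
m1 = sum(x * e for x, e in zip(xs, resid))    # regressor column · residuals
# both m0 and m1 are zero up to floating-point rounding
```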
Consider the OLS regression of the $n \times 1$ vector $Y$ on the $n \times k$ matrix $X$. Consider an alternative set of regressors $Z = XC$, where $C$ is a $k \times k$ non-singular matrix. Thus, each column of $Z$ is a mixture of some of the columns of $X$. Compare the OLS estimates and residuals from the regression of $Y$ on $X$ with those from the regression of $Y$ on $Z$.
Let $Y$ be a random variable with $\mu = \mathbb{E}[Y]$ and $\sigma^2 = \mathrm{var}[Y]$. Define $g(y, \mu, \sigma^2) = \begin{pmatrix} y - \mu \\ (y-\mu)^2 - \sigma^2 \end{pmatrix}$. Let $(\hat{\mu}, \hat{\sigma}^2)$ be the values such that $\bar{g}_n(\hat{\mu}, \hat{\sigma}^2) = 0$, where $\bar{g}_n(m, s) = n^{-1}\sum_{i=1}^n g(y_i, m, s)$. Show that $\hat{\mu}$ and $\hat{\sigma}^2$ are the sample mean and variance.
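A direct numeric check in Python (made-up data) that the sample mean and the divide-by-$n$ sample variance solve both moment conditions:

```python
# Method-of-moments check: mu_hat and sig2_hat zero out both components
# of the averaged moment function g_n(m, s).
ys = [1.0, 4.0, 2.5, 3.5, 2.0]                     # arbitrary data
n = len(ys)
mu_hat = sum(ys) / n                               # solves n^-1 sum(y_i - m) = 0
sig2_hat = sum((y - mu_hat) ** 2 for y in ys) / n  # solves n^-1 sum((y_i - m)^2 - s) = 0
# evaluate both moment conditions at (mu_hat, sig2_hat)
g1 = sum(y - mu_hat for y in ys) / n
g2 = sum((y - mu_hat) ** 2 - sig2_hat for y in ys) / n
# g1 and g2 are zero up to floating-point rounding
```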
Take the homoskedastic model $Y = X_1'\beta_1 + X_2'\beta_2 + e$, $\mathbb{E}[e \mid X_1, X_2] = 0$, $\mathbb{E}[e^2 \mid X_1, X_2] = \sigma^2$, $\mathbb{E}[X_2 \mid X_1] = \Gamma X_1$. Assume $\Gamma \ne 0$. Suppose the parameter $\beta_1$ is of interest. We know that the exclusion of $X_2$ creates omitted variable bias in the projection coefficient on $X_1$. It also changes the …
Consider the short and long projections $Y = X\gamma_1 + e$ and $Y = X\beta_1 + X^2\beta_2 + u$. (a) Under what condition does $\gamma_1 = \beta_1$? (b) Suppose the long projection is $Y = X\theta_1 + X^3\theta_2 + v$. Is there a condition under which $\gamma_1 = \theta_1$?
Showing 1000–1100 of 4105