Econometric Analysis 7th edition William H. Greene - Solutions
Derive the partial effects for the tobit model with heteroscedasticity that is described in Section 19.3.5.a.
Continuing to use the data in Exercise 1, consider once again only the nonzero observations. Suppose that the sampling mechanism is as follows: y∗ and another normally distributed random variable z have population correlation 0.7. The two variables, y∗ and z, are sampled jointly. When z is
Using only the nonlimit observations, repeat Exercise 2 in the context of the truncated regression model. Estimate μ and σ by using the method of moments estimator outlined in Example 19.2. Compare your results with those in the previous exercises.
We now consider the tobit model that applies to the full data set. a. Formulate the log-likelihood for this very simple tobit model. b. Reformulate the log-likelihood in terms of θ = 1/σ and γ = μ/σ. Then derive the necessary conditions for maximizing the log-likelihood with respect to θ and
Consider estimation of a Poisson regression model for yi | xi. The data are truncated on the left—these are on-site observations at a recreation site, so zeros do not appear in the data set. The data are censored on the right—any response greater than 5 is recorded as a 5. Construct the
For the zero-inflated Poisson (ZIP) model in Section 18.4.8, we derived the conditional mean function, E[yi | xi, wi] = (1 − Fi)λi. a. For the same model, now obtain Var[yi | xi, wi]. Then, obtain τi = Var[yi | xi, wi]/E[yi | xi, wi]. Does the zero inflation produce overdispersion? (That is, is the
We are interested in the ordered probit model. Our data consist of 250 observations, with the responses distributed as follows: y: 0, 1, 2, 3, 4; n: 50, 40, 45, 80, 35. Using the preceding data, obtain maximum likelihood estimates of the unknown parameters of the model.
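A minimal numerical sketch, assuming (as the truncated prompt suggests) that the model contains no regressors, so the threshold parameters are the only unknowns. In that case the log-likelihood is multinomial in the cell probabilities, the ML fitted probabilities equal the sample proportions, and the thresholds follow in closed form:

import numpy as np
from scipy.stats import norm

# Cell counts for y = 0, 1, 2, 3, 4 (250 observations in total).
counts = np.array([50, 40, 45, 80, 35])
p = counts / counts.sum()

# With no regressors, the MLE sets each fitted cell probability equal to
# the sample proportion, so the estimated thresholds are normal quantiles
# of the cumulative proportions F(mu_j), j = 1, ..., 4.
mu_hat = norm.ppf(np.cumsum(p)[:-1])
print(mu_hat)   # approximately [-0.84, -0.36, 0.10, 1.08]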
In the panel data models estimated in Section 17.4, neither the logit nor the probit model provides a framework for applying a Hausman test to determine whether fixed or random effects is preferred. Explain.
Prove (17-29): ln L0 = n[P ln P + (1 − P) ln(1 − P)].
A data set consists of n = n1 + n2 + n3 observations on y and x. For the first n1 observations, y = 1 and x = 1. For the next n2 observations, y = 0 and x = 1. For the last n3 observations, y = 0 and x = 0. Prove that neither (17-18) nor (17-20) has a solution.
The following hypothetical data give the participation rates in a particular type of recycling program and the number of trucks purchased for collection by 10 towns in a small mid-Atlantic state: The town of Eleven is contemplating initiating a recycling program but wishes to achieve a 95 percent
Construct the Lagrange multiplier statistic for testing the hypothesis that all the slopes (but not the constant term) equal zero in the binomial logit model. Prove that the Lagrange multiplier statistic is nR2 in the regression of (yi − p) on the x’s, where p is the sample proportion of 1s.
Given the data set y: 1, 0, 0, 1, 1, 0, 0, 1, 1, 1; x: 9, 2, 5, 4, 6, 7, 3, 5, 2, 6, estimate a probit model and test the hypothesis that x is not influential in determining the probability that y equals one.
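A sketch of one way to carry out the computation, using statsmodels. The data arrays repeat the reconstruction above, so they should be verified against the printed text before use:

import numpy as np
import statsmodels.api as sm

y = np.array([1, 0, 0, 1, 1, 0, 0, 1, 1, 1])
x = np.array([9.0, 2.0, 5.0, 4.0, 6.0, 7.0, 3.0, 5.0, 2.0, 6.0])

# Probit of y on a constant and x.
fit = sm.Probit(y, sm.add_constant(x)).fit(disp=0)
print(fit.params)    # [alpha_hat, beta_hat]

# Likelihood ratio test that the coefficient on x is zero: twice the gap
# between the fitted and constant-only log-likelihoods, chi-squared(1).
print(fit.llr, fit.llr_pvalue)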
Suppose that a linear probability model is to be fit to a set of observations on a dependent variable y that takes values zero and one, and a single regressor x that varies continuously across observations. Obtain the exact expressions for the least squares slope in the regression in terms of the
A binomial probability model is to be based on the following index function model: y∗ = α + βd + ε, y = 1 if y∗ > 0, y = 0 otherwise. The only regressor, d, is a dummy variable. The data consist of 100 observations that have the following: Obtain the maximum likelihood estimators of α
Suppose the distribution of yi | λ is Poisson. We will obtain a sample of observations, y1, . . . , yn. Suppose our prior for λ is the inverted gamma, which will imply a. Construct the likelihood function, p(y1, . . . , yn | λ). b. Construct the posterior density. c. Prove that the Bayesian
Prove the result claimed in Example 4.7. Greene (1980a) considers estimation in a regression model with an asymmetrically distributed disturbance, where ε has the gamma distribution in Section B.4.5 [see (B-39)] and σ = √P/λ is the standard deviation of the disturbance. In this model, the
Consider sampling from a multivariate normal distribution with mean vector μ = (μ1, μ2, . . . , μM) and covariance matrix σ2I. The log-likelihood function is Show that the maximum likelihood estimators of the parameters are μ̂m = y̅m, and Derive the second derivatives matrix and show
For random sampling from the classical regression model in (14-3), reparameterize the likelihood function in terms of η = 1/σ and δ = (1/σ)β. Find the maximum likelihood estimators of η and δ and obtain the asymptotic covariance matrix of the estimators of these parameters.
Show that the likelihood inequality in Theorem 14.3 holds for the normal distribution: E0[(1/n) ln L(θ0)] > E0[(1/n) ln L(θ)] for any θ ≠ θ0 (including θ̂).
Show that the likelihood inequality in Theorem 14.3 holds for the Poisson distribution used in Section 14.3 by showing that E[(1/n) ln L(θ | y)] is uniquely maximized at θ = θ0.
Consider a bivariate distribution for x and y that is a function of two parameters, α and β. The joint density is f (x, y | α, β). We consider maximum likelihood estimation of the two parameters. The full information maximum likelihood estimator is the now familiar maximum likelihood estimator
The following data were generated by the Weibull distribution of Exercise 4: a. Obtain the maximum likelihood estimates of α and β, and estimate the asymptotic covariance matrix for the estimates. b. Carry out a Wald test of the hypothesis that β = 1. c. Obtain the maximum likelihood estimate
Suppose that x has the Weibull distribution a. Obtain the log-likelihood function for a random sample of n observations. b. Obtain the likelihood equations for maximum likelihood estimation of α and β. Note that the first provides an explicit solution for α in terms of the data and β. But,
Mixture distribution. Suppose that the joint distribution of the two random variables x and y is a. Find the maximum likelihood estimators of β and θ and their asymptotic joint distribution. b. Find the maximum likelihood estimator of θ/(β + θ) and its asymptotic distribution. c. Prove that
The following table presents a hypothetical panel of data: a. Estimate the groupwise heteroscedastic model of Section 9.7.2. Include an estimate of the asymptotic variance of the slope estimator. Use a two-step procedure, basing the FGLS estimator at the second step on residuals from the pooled
Suppose that in the groupwise heteroscedasticity model of Section 9.7.2, Xi is the same for all i. What is the generalized least squares estimator of β? How would you compute the estimator if it were necessary to estimate σ2i?
The model satisfies the groupwise heteroscedastic regression model of Section 9.7.2. All variables have zero means. The following sample second-moment matrix is obtained from a sample of 20 observations: a. Compute the two separate OLS estimates of β, their sampling variances, the estimates of
Two-way random effects model. We modify the random effects model by the addition of a time-specific disturbance: yit = α + xit′β + εit + ui + vt. Write out the full disturbance covariance matrix for a data set with n = 2 and T = 2.
A two-way fixed effects model. Suppose that the fixed effects model is modified to include a time-specific dummy variable as well as an individual-specific variable. Then yit = αi + γt + xitβ + εit. At every observation, the individual- and time-specific dummy variables sum to 1, so there are
What are the probability limits of (1/n)LM, where LM is defined in (11-42) under the null hypothesis that σ2u = 0 and under the alternative that σ2u ≠ 0?
Unbalanced design for random effects. Suppose that the random effects model of Section 11.5 is to be estimated with a panel in which the groups have different numbers of observations. Let Ti be the number of observations in group i. a. Show that the pooled least squares estimator is unbiased and
Suppose that the fixed effects model is formulated with an overall constant term and n − 1 dummy variables (dropping, say, the last one). Investigate the effect that this supposition has on the set of dummy variable coefficients and on the least squares estimates of the slopes, compared to (11-3).
The following is a panel of data on investment (y) and profit (x) for n = 3 firms over T = 10 periods. a. Pool the data and compute the least squares regression coefficients of the model yit = α + βxit + εit. b. Estimate the fixed effects model of (11-13), and then test the hypothesis that the
Prove that an underidentified equation cannot be estimated by 2SLS.
Prove that plim Yj′ej/T = ωj − Ωjjγj.
For the model y1 = γ1y2 + β11x1 + β21x2 + ε1, y2 = γ2y1 + β32x3 + β42x4 + ε2, show that there are two restrictions on the reduced form coefficients. Describe a procedure for estimating the model while incorporating the restrictions.
The following model is specified: y1 = γ1y2 + β11x1 + ε1, y2 = γ2y1 + β22x2 + β32x3 + ε2. All variables are measured as deviations from their means. The sample of 25 observations produces the following matrix of sums of squares and cross products: a. Estimate the two equations by OLS. b. Estimate
Obtain the reduced form for the model in Exercise 8 under each of the assumptions made in parts a and in parts b1 and b9: y1 = γ1y2 + β11x1 + β21x2 + β31x3 + ε1, y2 = γ2y1 + β12x1 + β22x2 + β32x3 + ε2.
Consider the following two-equation model: y1 = γ1y2 + β11x1 + β21x2 + β31x3 + ε1, y2 = γ2y1 + β12x1 + β22x2 + β32x3 + ε2. a. Verify that, as stated, neither equation is identified. b. Establish whether or not the following restrictions are sufficient to identify (or partially identify) the
For the model y1 = α1 + βx + ε1, y2 = α2 + ε2, y3 = α3 + ε3, assume that yi2 + yi3 = 1 at every observation. Prove that the sample covariance matrix of the least squares residuals from the three equations will be singular, thereby precluding computation of the FGLS estimator. How could you
Consider the system y1 = α1 + βx + ε1, y2 = α2 + ε2. The disturbances are freely correlated. Prove that GLS applied to the system leads to the OLS estimates of α1 and α2 but to a mixture of the least squares slopes in the regressions of y1 and y2 on x as the estimator of β. What is the mixture?
Consider the two-equation system y1 = β1x1 + ε1, y2 = β2x2 + β3x3 + ε2. Assume that the disturbance variances and covariance are known. Now suppose that the analyst of this model applies GLS but erroneously omits x3 from the second equation. What effect does this specification error have on the
Prove that in the model y1 = X1β1 + ε1, y2 = X2β2 + ε2, generalized least squares is equivalent to equation-by-equation ordinary least squares if X1 = X2. Does your result hold if it is also known that β1 = β2?
The model y1 = β1x1 + ε1, y2 = β2x2 + ε2 satisfies all the assumptions of the classical multivariate regression model. All variables have zero means. The following sample second-moment matrix is obtained from a sample of 20 observations: a. Compute the FGLS estimates of β1 and β2. b. Test the
Consider estimation of the following two-equation model: y1 = β1 + ε1, y2 = β2x + ε2. A sample of 50 observations produces the following moment matrix: a. Write the explicit formula for the GLS estimator of [β1, β2]. What is the asymptotic covariance matrix of the estimator? b. Derive the OLS
A sample of 100 observations produces the following sample data: y̅1 = 1, y̅2 = 2, y′1y1 = 150, y′2y2 = 550, y′1y2 = 260. The underlying bivariate regression model is y1 = μ + ε1, y2 = μ + ε2. a. Compute the OLS estimate of μ, and estimate the sampling variance of this estimator. b. Compute the FGLS
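A numerical sketch of parts a and b from the given moments. The two-step FGLS below builds the estimated disturbance covariance from the OLS residual moments, which is one standard convention rather than necessarily the book's:

import numpy as np

n = 100
ybar = np.array([1.0, 2.0])                 # sample means of y1 and y2
M = np.array([[150.0, 260.0],               # [y1'y1, y1'y2]
              [260.0, 550.0]])              # [y2'y1, y2'y2]

# a. OLS stacks the two equations with a common mean, so mu_hat is the
# grand mean of all 200 observations.
mu_ols = ybar.mean()

# Estimated disturbance covariance from the OLS residual moments:
# S[j,k] = y_j'y_k/n - mu*ybar_j - mu*ybar_k + mu^2.
mu_vec = np.full(2, mu_ols)
S = M / n - np.outer(ybar, mu_vec) - np.outer(mu_vec, ybar) + mu_ols**2
var_ols = S.sum() / (4 * n)                 # Var[(e1bar + e2bar)/2]

# b. FGLS weights the two equations by the inverse of S.
i2 = np.ones(2)
Si = np.linalg.inv(S)
mu_fgls = (i2 @ Si @ ybar) / (i2 @ Si @ i2)
var_fgls = 1.0 / (n * (i2 @ Si @ i2))
print(mu_ols, var_ols)                      # 1.5, 0.008
print(mu_fgls, var_fgls)                    # about 1.22, about 0.0066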
Derive the first order conditions for nonlinear least squares estimation of the parameters in (15-2). How would you estimate the asymptotic covariance matrix for your estimator of θ = (β, σ)?
The Weibull population has survival function S(x) = exp(−(λx)p). How would you obtain a random sample of observations from a Weibull population? (The survival function equals one minus the cdf.)
The exponential distribution has density f (x) = θ exp(−θ x). How would you obtain a random sample of observations from an exponential population?
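The two preceding sampling questions are both answered by inverse-transform sampling: equate the cdf (or survival function) to a uniform draw and solve for x. A minimal sketch, with illustrative parameter values:

import numpy as np

rng = np.random.default_rng(seed=0)
u = rng.uniform(size=10_000)

# Exponential, f(x) = theta*exp(-theta*x): F(x) = 1 - exp(-theta*x),
# so x = -ln(1 - u)/theta; since 1 - u is also uniform, -ln(u)/theta works.
theta = 2.0
x_exp = -np.log(u) / theta
print(x_exp.mean())                    # should be near 1/theta = 0.5

# Weibull with survival function S(x) = exp(-(lam*x)**p): setting S(x) = u
# and solving gives x = (-ln(u))**(1/p)/lam.
lam, p = 1.5, 2.0
x_wei = (-np.log(u)) ** (1.0 / p) / lam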
In random sampling from the exponential distribution f (x) = (1/θ)e−x/θ, x ≥ 0, θ > 0, find the maximum likelihood estimator of θ and obtain the asymptotic distribution of this estimator.
Assume that the distribution of x is f (x) = 1/θ, 0 ≤ x ≤ θ. In random sampling from this distribution, prove that the sample maximum is a consistent estimator of θ. Note that you can prove that the maximum is the maximum likelihood estimator of θ. But the usual properties do not apply
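A quick simulation that illustrates (without proving) the consistency claim; under this model the sample maximum has expectation nθ/(n + 1), so its bias θ/(n + 1) vanishes as n grows:

import numpy as np

rng = np.random.default_rng(seed=1)
theta = 3.0
for n in (10, 100, 1_000, 10_000):
    draws = rng.uniform(0.0, theta, size=(500, n))   # 500 replications
    max_n = draws.max(axis=1)
    print(n, theta - max_n.mean())    # average gap shrinks like theta/(n+1)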
Compare the fully parametric and semiparametric approaches to estimation of a discrete choice model such as the multinomial logit model discussed in Chapter 17. What are the benefits and costs of the semiparametric approach?
We examined Ray Fair’s famous analysis (Journal of Political Economy, 1978) of a Psychology Today survey on extramarital affairs in Example 18.9 using a Poisson regression model. Although the dependent variable used in that study was a count, Fair (1978) used the tobit model as the platform for
Find the autocorrelations and partial autocorrelations for the MA(2) process εt = vt − θ1vt−1 − θ2vt−2.
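For reference, the MA(2) autocorrelations are ρ1 = (−θ1 + θ1θ2)/(1 + θ1² + θ2²), ρ2 = −θ2/(1 + θ1² + θ2²), and ρk = 0 for k > 2, while the partial autocorrelations decay gradually rather than cutting off. A sketch that checks the closed form against the theoretical ACF from statsmodels, with parameter values chosen purely for illustration:

import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess

t1, t2 = 0.6, 0.3                         # illustrative theta_1, theta_2
denom = 1.0 + t1**2 + t2**2
rho1 = (-t1 + t1 * t2) / denom
rho2 = -t2 / denom

# The process e_t = v_t - t1*v_{t-1} - t2*v_{t-2} has MA polynomial
# 1 - t1*L - t2*L^2.
proc = ArmaProcess(ar=[1.0], ma=[1.0, -t1, -t2])
print(rho1, rho2)
print(proc.acf(lags=4))                   # lags 0..3: [1, rho1, rho2, 0]
print(proc.pacf(lags=4))                  # decays instead of cutting off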
Two samples of 50 observations each produce the following moment matrices. (In each case, X is a constant and one variable.) a. Compute the least squares regression coefficients and the residual variances s2 for each data set. Compute the R2 for each regression. b. Compute the OLS estimate of the
For the model in Exercise 9, suppose that ε is normally distributed, with mean zero and variance σ2[1 + (γx)2]. Show that σ2 and γ2 can be consistently estimated by a regression of the squared least squares residuals on a constant and x2. Is this estimator efficient?
For the model in Exercise 9, what is the probability limit of s2 = (1/n)Σni=1(yi − y̅)2? Note that s2 is the least squares estimator of the residual variance. It is also n times the conventional estimator of the variance of the OLS estimator. How does this equation compare with the true value you
Suppose that the regression model is yi = μ + εi, where a. Given a sample of observations on yi and xi, what is the most efficient estimator of μ? What is its variance? b. What is the OLS estimator of μ, and what is the variance of the ordinary least squares estimator? c. Prove that the
Suppose that the regression model is y = μ + ε, where ε has a zero mean, constant variance, and equal correlation, ρ, across observations. Then Cov[εi, εj] = σ2ρ if i ≠ j. Prove that the least squares estimator of μ is inconsistent. Find the characteristic roots of Ω and show that
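A numerical sketch of the two facts involved, with illustrative n and ρ: the equicorrelated Ω = (1 − ρ)I + ριι' has one root equal to 1 + (n − 1)ρ and n − 1 roots equal to 1 − ρ, and Var[y̅]/σ2 = (1 − ρ)/n + ρ, which does not vanish as n grows:

import numpy as np

n, rho = 50, 0.4
omega = (1.0 - rho) * np.eye(n) + rho * np.ones((n, n))

roots = np.linalg.eigvalsh(omega)
print(roots.max(), 1 + (n - 1) * rho)     # largest characteristic root
print(roots.min(), 1 - rho)               # the remaining n - 1 roots

# Var[ybar]/sigma^2 = i'Omega i / n^2 = (1 - rho)/n + rho -> rho > 0,
# so the variance of the sample mean does not go to zero: inconsistency.
i_n = np.ones(n)
print((i_n @ omega @ i_n) / n**2)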
Suppose that y has the pdf f (y | x) = (1/x' β)e−y/(x'β), y > 0.Then E[y | x] = x'β and Var[y | x] = (x'β)2. For this model, prove that GLS and MLE are the same, even though this distribution involves the same parameters in the conditional mean function and the disturbance variance.
In the generalized regression model, suppose that Ω is known. a. What is the covariance matrix of the OLS and GLS estimators of β? b. What is the covariance matrix of the OLS residual vector e = y − Xb? c. What is the covariance matrix of the GLS residual vector ε̂ = y − Xβ̂? d. What is the
In the generalized regression model, if the K columns of X are characteristic vectors of Ω, then ordinary least squares and generalized least squares are identical. (The result is actually a bit broader; X may be any linear combination of exactly K characteristic vectors. This result is Kruskal’s
Finally, suppose that Ω must be estimated, but that assumptions (9-16) and (9-17) are met by the estimator. What changes are required in the development of the previous problem?
Now suppose that the disturbances are not normally distributed, although Ω is still known. Show that the limiting distribution of the previous statistic is (1/J) times a chi-squared variable with J degrees of freedom. Conclude that in the generalized regression model, the limiting distribution of the
This and the next two exercises are based on the test statistic usually used to test a set of J linear restrictions in the generalized regression model, F[J, n − K] = (Rβ̂ − q)'{R[σ̂2(X'Ω−1X)−1]R'}−1(Rβ̂ − q)/J, where β̂ is the GLS estimator. Show that if Ω is known, if the disturbances are normally distributed and if the null hypothesis, Rβ = q, is
What is the covariance matrix, Cov[β̂, β̂ −b], of the GLS estimator β̂ = (X'Ω-1 X)−1 X'Ω−1 y and the difference between it and the OLS estimator, b = (X'X)−1X'y? The result plays a pivotal role in the development of specification tests in Hausman (1978).
In the discussion of the instrumental variables estimator, we showed that the least squares estimator b is biased and inconsistent. Nonetheless, b does estimate something: plim b = θ = β + Q−1γ. Derive the asymptotic covariance matrix of b, and show that b is asymptotically normally
Consider the linear model yi = α + βxi + εi in which Cov[xi, εi] = γ ≠ 0. Let z be an exogenous, relevant instrumental variable for this model. Assume, as well, that z is binary—it takes only values 1 and 0. Show the algebraic forms of the LS estimator and the IV estimator for both α and
At the end of Section 8.7, it is suggested that the OLS estimator could have a smaller mean squared error than the 2SLS estimator. Using (8-4), the results of Exercise 1, and Theorem 8.1, show that the result will be true if How can you verify that this is at least possible? The right-hand-side is
Derive the results in (8-20a) and (8-20b) for the measurement error model. Note the hint in footnote 4 in Section 8.5.1 that suggests you use result (A-66) when you need to invert [Q∗ + Σuu] = [Q∗ + (σue1)(σue1)′].
For the measurement error model in (8-14) and (8-15b), prove that when only x is measured with error, the squared correlation between y and x is less than that between y* and x*. (Note the assumption that y* = y.) Does the same hold true if y* is also measured with error?
In the discussion of the instrumental variable estimator, we showed that the least squares estimator, bLS, is biased and inconsistent. Nonetheless, bLS does estimate something—see (8-4). Derive the asymptotic covariance matrix of bLS and show that bLS is asymptotically normally distributed.
Verify the following differential equation, which applies to the Box-Cox transformation x(λ) = (xλ − 1)/λ: dx(λ)/dλ = (1/λ)[xλ ln x − x(λ)]. Show that the limiting sequence for λ → 0 is limλ→0 dx(λ)/dλ = (1/2)(ln x)2. These results can be used to great advantage in deriving the actual second derivatives of the log-likelihood function for the Box-Cox model.
Describe how to obtain nonlinear least squares estimates of the parameters of the model y = αxβ + ε.
Show that the model of the alternative hypothesis in Example 5.7 can be written As such, it does appear that H0 is a restriction on H1. However, because there are an infinite number of constraints, this does not reduce the test to a standard test of restrictions. It does suggest the connections
The log likelihood function for the linear regression model with normally distributed disturbances is shown in Example 4.6. Show that at the maximum likelihood estimators, b for β and e'e/n for σ2, the log likelihood is an increasing function of R2 for the model.
Compare the mean squared errors of b1 and b1.2 in Section 4.7.2.
Suppose the true regression model is given by (4-8). The result in (4-10) shows that if either P1.2 is nonzero or β2 is nonzero, then regression of y on X1 alone produces a biased and inconsistent estimator of β1. Suppose the objective is to forecast y, not to estimate the parameters. Consider
Show that the multiple regression of y on a constant, x1, and x2, while imposing the restriction β1 + β2 = 1, leads to the regression of y − x1 on a constant and x2 − x1.
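A numerical check of this equivalence on simulated data, comparing the transformed regression with the textbook restricted least squares formula b∗ = b − (X'X)−1R'[R(X'X)−1R']−1(Rb − q); all parameter values here are illustrative:

import numpy as np

rng = np.random.default_rng(seed=2)
n = 200
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 0.3 * x1 + 0.7 * x2 + rng.normal(size=n)

# Route 1: regress y - x1 on a constant and x2 - x1, then recover b1.
Z = np.column_stack([np.ones(n), x2 - x1])
g = np.linalg.lstsq(Z, y - x1, rcond=None)[0]
b2 = g[1]
b1 = 1.0 - b2

# Route 2: restricted least squares with R = [0, 1, 1], q = 1.
X = np.column_stack([np.ones(n), x1, x2])
b = np.linalg.lstsq(X, y, rcond=None)[0]
C = np.linalg.inv(X.T @ X)
R, q = np.array([0.0, 1.0, 1.0]), 1.0
b_star = b - C @ R * ((R @ b - q) / (R @ C @ R))
print(b1, b2)
print(b_star)      # slopes should equal (b1, b2) to machine precision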
Prove that under the hypothesis that Rβ = q, the estimator σ̂2 = (y − Xb∗)'(y − Xb∗)/(n − K + J), where J is the number of restrictions, is unbiased for σ2.
Use the test statistic defined in Exercise 7, χ2 = λ∗'{Est. Var[λ∗]}−1λ∗ = (n − K)(e∗'e∗ − e'e)/e'e, to test the hypothesis in Exercise 1.
An alternative way to test the hypothesis Rβ − q = 0 is to use a Wald test of the hypothesis that λ∗ = 0, where λ∗ is defined in (5-23). Prove that the fraction in brackets is the ratio of two estimators of σ2. By virtue of (5-28) and the preceding discussion, we know that this ratio is
Prove the result that the R2 associated with a restricted least squares estimator is never larger than that associated with the unrestricted least squares estimator. Conclude that imposing restrictions never improves the fit of the regression.
Prove the result that the restricted least squares estimator never has a larger covariance matrix than the unrestricted least squares estimator.
The expression for the restricted coefficient vector in (5-23) may be written in the form b∗ = [I − CR]b + w, where w does not involve b. What is C? Show that the covariance matrix of the restricted least squares estimator is σ2(X'X)−1 − σ2(X'X)−1R'[R(X'X)−1R']−1R(X'X)−1 and
The regression model to be analyzed is y = X1β1 + X2β2 + ε, where X1 and X2 have K1 and K2 columns, respectively. The restriction is β2 = 0. a. Using (5-23), prove that the restricted estimator is simply [b1∗, 0], where b1∗ is the least squares coefficient vector in the regression of y on
Using the results in Exercise 1, test the hypothesis that the slope on x1 is 0 by running the restricted regression and comparing the two sums of squared deviations. Test the hypothesis that the two slopes sum to 1. X'X = [29 0 0; 0 50 10; 0 10 80]
A multiple regression of y on a constant, x1, and x2 produces the following results: ŷ = 4 + 0.4x1 + 0.9x2, R2 = 8/60, e'e = 520, n = 29, X'X = [29 0 0; 0 50 10; 0 10 80]. Test the hypothesis that the two slopes sum to 1.
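A sketch of the test computed directly from these moments; the X'X used is the reconstruction above, so check it against the printed text:

import numpy as np

n, K = 29, 3
ee = 520.0                                  # e'e
b = np.array([4.0, 0.4, 0.9])               # (constant, x1, x2)
XtX = np.array([[29.0, 0.0, 0.0],
                [0.0, 50.0, 10.0],
                [0.0, 10.0, 80.0]])

s2 = ee / (n - K)                           # 520/26 = 20
C = np.linalg.inv(XtX)
R, q = np.array([0.0, 1.0, 1.0]), 1.0

# F statistic for the single restriction b1 + b2 = 1; its square root is
# the t statistic with n - K = 26 degrees of freedom.
F = (R @ b - q) ** 2 / (s2 * (R @ C @ R))
print(F, np.sqrt(F))                        # about 0.16 and 0.40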
Example 4.10 presents a regression model that is used to predict the auction prices of Monet paintings. The most expensive painting in the sample sold for $33.0135M (log = 17.3124). The height and width of this painting were 35” and 39.4”, respectively. Use these data and the model to form
In Section 4.7.3, we consider regressing y on a set of principal components, rather than the original data. For simplicity, assume that X does not contain a constant term, and that the K variables are measured in deviations from the means and are “standardized” by dividing by the respective
In (4-13), we find that when superfluous variables X2 are added to the regression of y on X1 the least squares coefficient estimator is an unbiased estimator of the true parameter vector, β = (β'1, 0')'. Show that in this long regression, e'e/(n − K1 − K2) is also an unbiased estimator of σ2.
Consider a data set consisting of n observations, nc complete and nm incomplete; for the incomplete observations, the dependent variable, yi, is missing. Data on the independent variables, xi, are complete for all n observations, Xc and Xm. We wish to use the data to estimate the parameters of the linear regression model
For the simple regression model yi = μ + εi, εi ∼ N[0, σ2], prove that the sample mean is consistent and asymptotically normally distributed. Now consider the alternative estimator μ̂ = Σi wiyi, with wi = i/(n(n + 1)/2) = i/Σi i, so that Σi wi = 1. Prove that this is a consistent estimator of μ and
Let ei be the ith residual in the ordinary least squares regression of y on X in the classical regression model, and let εi be the corresponding true disturbance. Prove that plim(ei − εi) = 0.
For the classical normal regression model y = Xβ + ε with no constant term and K regressors, what is plim F[K, n − K] = plim (R2/K)/[(1 − R2)/(n − K)], assuming that the true value of β is zero?
Prove that E[b'b] = β'β + σ2ΣKk=1(1/λk) where b is the ordinary least squares estimator and λk is a characteristic root of X'X.
For the classical normal regression model y = Xβ + ε with no constant term and K regressors, assuming that the true value of β is zero, what is the exact expected value of F[K, n − K] = (R2/K)/[(1 − R2)/(n − K)]?
Consider the multiple regression of y on K variables X and an additional variable z. Prove that under the assumptions A1 through A6 of the classical regression model, the true variance of the least squares estimator of the slopes on X is larger when z is included in the regression than when it is
The following sample moments for x = [1, x1, x2, x3] were computed from 100 observations produced using a random number generator: The true model underlying these data is y = x1 + x2 + x3 + ε. a. Compute the simple correlations among the regressors. b. Compute the ordinary least squares