Questions and Answers of Econometric Analysis
Prove that an underidentified equation cannot be estimated by 2SLS.
Prove that plim Yj'εj/T = ωj − Ωjγj.
For the model y1 = γ1y2 + β11x1 + β21x2 + ε1, y2 = γ2y1 + β32x3 + β42x4 + ε2, show that there are two restrictions on the reduced form coefficients. Describe a procedure for estimating the model
The following model is specified: y1 = γ1y2 + β11x1 + ε1, y2 = γ2y1 + β22x2 + β32x3 + ε2. All variables are measured as deviations from their means. The sample of 25 observations produces the following
Obtain the reduced form for the model in Exercise 8 under each of the assumptions made in parts a and in parts b1 and b9. y1 = γ1y2 + β11x1 + β21x2 + β31x3 + ε1, y2 = γ2y1 + β12x1 + β22x2 +
Consider the following two-equation model:y1 = γ1y2 + β11x1 + β21x2 + β31x3 + ε1,y2 = γ2y1 + β12x1 + β22x2 + β32x3 + ε2.a. Verify that, as stated, neither equation is identified.b.
For the model y1 = α1 + βx + ε1, y2 = α2 + ε2, y3 = α3 + ε3, assume that yi2 + yi3 = 1 at every observation. Prove that the sample covariance matrix of the least squares residuals from the three
Consider the system y1 = α1 + βx + ε1, y2 = α2 + ε2. The disturbances are freely correlated. Prove that GLS applied to the system leads to the OLS estimates of α1 and α2 but to a mixture of the
Consider the two-equation system y1 = β1x1 + ε1, y2 = β2x2 + β3x3 + ε2. Assume that the disturbance variances and covariance are known. Now suppose that the analyst of this model applies GLS but
Prove that in the model y1 = X1β1 + ε1, y2 = X2β2 + ε2, generalized least squares is equivalent to equation-by-equation ordinary least squares if X1 = X2. Does your result hold if it is also known
The model y1 = β1x1 + ε1, y2 = β2x2 + ε2 satisfies all the assumptions of the classical multivariate regression model. All variables have zero means. The following sample second-moment matrix is
Consider estimation of the following two-equation model: y1 = β1 + ε1, y2 = β2x + ε2. A sample of 50 observations produces the following moment matrix: a. Write the explicit formula for the GLS
A sample of 100 observations produces the following sample data: y̅1 = 1, y̅2 = 2, y1'y1 = 150, y2'y2 = 550, y1'y2 = 260. The underlying bivariate regression model is y1 = μ + ε1, y2 = μ + ε2. a.
Derive the first order conditions for nonlinear least squares estimation of the parameters in (15-2). How would you estimate the asymptotic covariance matrix for your estimator of θ = (β, σ)?
The Weibull population has survival function S(x) = λp exp(−(λx)p). How would you obtain a random sample of observations from a Weibull population? (The survival function equals one minus the cdf.)
The exponential distribution has density f (x) = θ exp(−θ x). How would you obtain a random sample of observations from an exponential population?
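Both sampling questions above can be answered with the inverse-CDF (probability integral transform) method: if U ~ U(0, 1), then S−1(U) has survival function S. A minimal sketch, assuming the standard Weibull survival function S(x) = exp(−(λx)p) (so that S(0) = 1) and the exponential density f(x) = θ exp(−θx) as given; the parameter values are illustrative:

```python
import math
import random

def draw_weibull(lam, p, rng):
    # Invert the survival function S(x) = exp(-(lam*x)**p):
    # U = S(x)  =>  x = (1/lam) * (-ln U)**(1/p)
    u = rng.random()
    return (-math.log(u)) ** (1.0 / p) / lam

def draw_exponential(theta, rng):
    # Invert F(x) = 1 - exp(-theta*x):  x = -ln(U) / theta
    u = rng.random()
    return -math.log(u) / theta

rng = random.Random(42)
weib = [draw_weibull(2.0, 1.5, rng) for _ in range(100_000)]
expo = [draw_exponential(2.0, rng) for _ in range(100_000)]

# Sanity checks: Exponential(theta) has mean 1/theta, and
# Weibull(lam, p) has mean Gamma(1 + 1/p)/lam.
print(sum(expo) / len(expo))
print(sum(weib) / len(weib))
```

The same recipe works for any distribution with an invertible cdf or survival function.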
In random sampling from the exponential distribution f (x) = (1/θ)e−x/θ, x ≥ 0, θ > 0, find the maximum likelihood estimator of θ and obtain the asymptotic distribution of this estimator.
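For this parameterization (which has mean θ), the derivation is short; a sketch:

```latex
\ln L(\theta) = -n\ln\theta - \frac{1}{\theta}\sum_{i=1}^{n} x_i,
\qquad
\frac{\partial \ln L}{\partial\theta}
  = -\frac{n}{\theta} + \frac{1}{\theta^{2}}\sum_{i=1}^{n} x_i = 0
\;\Longrightarrow\;
\hat\theta = \bar{x}.
```

Since E[−∂² ln L/∂θ²] = n/θ², the asymptotic distribution is √n(θ̂ − θ) →d N(0, θ²).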
Assume that the distribution of x is f (x) = 1/θ, 0 ≤ x ≤ θ. In random sampling from this distribution, prove that the sample maximum is a consistent estimator of θ. Note that you can prove
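A quick simulation consistent with the suggested proof (P[max ≤ θ − ε] = ((θ − ε)/θ)n → 0): the sample maximum crowds up against θ as n grows. The value θ = 2 is an illustrative assumption.

```python
import random

def sample_max(theta, n, rng):
    # Draws n observations from Uniform(0, theta) and returns the maximum,
    # which is the MLE of theta for this family.
    return max(rng.uniform(0.0, theta) for _ in range(n))

rng = random.Random(0)
theta = 2.0
for n in (10, 100, 10_000):
    m = sample_max(theta, n, rng)
    print(n, m)  # the maximum approaches theta as n grows
```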
Compare the fully parametric and semiparametric approaches to estimation of a discrete choice model such as the multinomial logit model discussed in Chapter 17. What are the benefits and costs of the
We examined Ray Fair’s famous analysis (Journal of Political Economy, 1978) of a Psychology Today survey on extramarital affairs in Example 18.9 using a Poisson regression model. Although the
Find the autocorrelations and partial autocorrelations for the MA(2) process εt = vt − θ1vt−1 − θ2vt−2.
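Since this is an MA(2), the autocovariances γk = Cov[εt, εt−k] can be read off directly (with Var[vt] = σv2); a sketch of the needed quantities:

```latex
\gamma_0 = \sigma_v^2\left(1 + \theta_1^2 + \theta_2^2\right), \qquad
\gamma_1 = \sigma_v^2\left(-\theta_1 + \theta_1\theta_2\right), \qquad
\gamma_2 = -\sigma_v^2\,\theta_2, \qquad
\gamma_k = 0 \ \text{for } k > 2.
```

The autocorrelations are ρk = γk/γ0, which cut off after lag 2; the partial autocorrelations of an MA process do not cut off but decay gradually.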
Two samples of 50 observations each produce the following moment matrices. (In each case, X is a constant and one variable.) a. Compute the least squares regression coefficients and the residual
For the model in Exercise 9, suppose that ε is normally distributed, with mean zero and variance σ2[1 + (γ x)2]. Show that σ2 and γ2 can be consistently estimated by a regression of the least
For the model in Exercise 9, what is the probability limit of s2 = (1/n)Σni=1(yi − y̅)2? Note that s2 is the least squares estimator of the residual variance. It is also n times the conventional
Suppose that the regression model is yi = μ + εi, where a. Given a sample of observations on yi and xi, what is the most efficient estimator of μ? What is its variance? b. What is the OLS
Suppose that the regression model is y = μ + ε, where ε has a zero mean, constant variance, and equal correlation, ρ, across observations. Then Cov[εi, εj] = σ2ρ if i ≠ j. Prove that the
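A small numerical check of the structure behind this result: the equicorrelated matrix has the closed-form inverse Ω−1 = [I − (ρ/(1 + (n − 1)ρ))ii']/(σ2(1 − ρ)), every row of which has the same sum, so the GLS estimator of μ puts weight 1/n on each observation, i.e., it is the sample mean. The values n = 4, σ2 = 2, ρ = 0.5 are illustrative assumptions.

```python
n, sigma2, rho = 4, 2.0, 0.5

# Omega[i][j] = sigma2 * (1 if i == j else rho)
omega = [[sigma2 * (1.0 if i == j else rho) for j in range(n)] for i in range(n)]

# Closed-form inverse for the equicorrelated matrix
c = rho / (1.0 + (n - 1) * rho)
omega_inv = [[((1.0 if i == j else 0.0) - c) / (sigma2 * (1.0 - rho))
              for j in range(n)] for i in range(n)]

# Verify Omega @ Omega_inv == I
prod = [[sum(omega[i][k] * omega_inv[k][j] for k in range(n)) for j in range(n)]
        for i in range(n)]

# GLS weights for the intercept-only model: (i'Omega^-1 i)^-1 i'Omega^-1
row_sums = [sum(row) for row in omega_inv]
total = sum(row_sums)
weights = [s / total for s in row_sums]
print(weights)  # equal weights 1/n -> the GLS estimate is the sample mean
```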
Suppose that y has the pdf f (y | x) = (1/(x'β))e−y/(x'β), y > 0. Then E[y | x] = x'β and Var[y | x] = (x'β)2. For this model, prove that GLS and MLE are the same, even though this
In the generalized regression model, suppose that Ω is known.a. What is the covariance matrix of the OLS and GLS estimators of β?b. What is the covariance matrix of the OLS residual vector e = y
In the generalized regression model, if the K columns of X are characteristic vectors of Ω, then ordinary least squares and generalized least squares are identical. (The result is actually a bit
Finally, suppose that Ω must be estimated, but that assumptions (9-16) and (9-17) are met by the estimator. What changes are required in the development of the previous problem?
Now suppose that the disturbances are not normally distributed, although Ω is still known. Show that the limiting distribution of the previous statistic is (1/J) times a chi-squared variable with J degrees
This and the next two exercises are based on the test statistic usually used to test a set of J linear restrictions in the generalized regression model where β̂ is the GLS estimator. Show that if
What is the covariance matrix, Cov[β̂, β̂ − b], of the GLS estimator β̂ = (X'Ω−1X)−1X'Ω−1y and the difference between it and the OLS estimator, b = (X'X)−1X'y? The result
In the discussion of the instrumental variables estimator, we showed that the least squares estimator b is biased and inconsistent. Nonetheless, b does estimate something: plim b = θ = β + Q−1γ.
Consider the linear model yi = α + βxi + εi in which Cov[xi, εi] = γ ≠ 0. Let z be an exogenous, relevant instrumental variable for this model. Assume, as well, that z is binary—it takes only
At the end of Section 8.7, it is suggested that the OLS estimator could have a smaller mean squared error than the 2SLS estimator. Using (8-4), the results of Exercise 1, and Theorem 8.1, show that
Derive the results in (8-20a) and (8-20b) for the measurement error model. Note the hint in footnote 4 in Section 8.5.1 that suggests you use result (A-66) when you need to invert [Q∗ + Σuu] = [Q∗
For the measurement error model in (8-14) and (8-15b), prove that when only x is measured with error, the squared correlation between y and x is less than that between y* and x*. (Note the assumption
In the discussion of the instrumental variable estimator, we showed that the least squares estimator, bLS, is biased and inconsistent. Nonetheless, bLS does estimate something—see (8-4). Derive the
Verify the following differential equation, which applies to the Box-Cox transformation: Show that the limiting sequence for λ = 0 is These results can be used to great advantage in deriving the
Describe how to obtain nonlinear least squares estimates of the parameters of the model y = αxβ + ε.
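One standard answer is Gauss-Newton: linearize ŷ = αxβ around the current estimates using the pseudoregressors ∂ŷ/∂α = xβ and ∂ŷ/∂β = αxβ ln x, then iterate linear least squares on the residuals. A hedged sketch on simulated data — the true values α = 2, β = 0.5, the sample design, and the step-halving safeguard are all illustrative choices, not part of the exercise:

```python
import math
import random

rng = random.Random(1)
alpha_true, beta_true = 2.0, 0.5
x = [rng.uniform(0.5, 5.0) for _ in range(200)]
y = [alpha_true * xi ** beta_true + rng.gauss(0.0, 0.1) for xi in x]

def sse(a, b):
    return sum((yi - a * xi ** b) ** 2 for xi, yi in zip(x, y))

a, b = 1.0, 1.0  # starting values
for _ in range(100):
    # Pseudoregressors of the linearized model
    g1 = [xi ** b for xi in x]                      # d(yhat)/d(alpha)
    g2 = [a * xi ** b * math.log(xi) for xi in x]   # d(yhat)/d(beta)
    e = [yi - a * xi ** b for xi, yi in zip(x, y)]
    # Solve the 2x2 normal equations (G'G) d = G'e
    s11 = sum(v * v for v in g1)
    s12 = sum(u * v for u, v in zip(g1, g2))
    s22 = sum(v * v for v in g2)
    t1 = sum(u * v for u, v in zip(g1, e))
    t2 = sum(u * v for u, v in zip(g2, e))
    det = s11 * s22 - s12 * s12
    da = (s22 * t1 - s12 * t2) / det
    db = (s11 * t2 - s12 * t1) / det
    # Step-halving keeps the sum of squares decreasing
    step = 1.0
    base = sse(a, b)
    while step > 1e-8 and sse(a + step * da, b + step * db) >= base:
        step *= 0.5
    a, b = a + step * da, b + step * db
    if abs(da) + abs(db) < 1e-10:
        break

print(a, b)  # should be close to the assumed true values (2, 0.5)
```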
Show that the model of the alternative hypothesis in Example 5.7 can be written As such, it does appear that H0 is a restriction on H1. However, because there are an infinite number of constraints,
The log likelihood function for the linear regression model with normally distributed disturbances is shown in Example 4.6. Show that at the maximum likelihood estimators of b for β and e'e/n for
Compare the mean squared errors of b1 and b1.2 in Section 4.7.2.
Suppose the true regression model is given by (4-8). The result in (4-10) shows that if either P1.2 is nonzero or β2 is nonzero, then regression of y on X1 alone produces a biased and inconsistent
Show that in the multiple regression of y on a constant, x1 and x2 while imposing the restriction β1 + β2 = 1 leads to the regression of y − x1 on a constant and x2 − x1.
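The claim follows by substituting the restriction directly into the model; a sketch:

```latex
y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \varepsilon,
\qquad \beta_1 = 1 - \beta_2
\;\Longrightarrow\;
y - x_1 = \beta_0 + \beta_2\,(x_2 - x_1) + \varepsilon .
```

So least squares with the restriction imposed is just the unrestricted regression of y − x1 on a constant and x2 − x1, with β̂1 recovered as 1 − β̂2.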
Prove that under the hypothesis that Rβ = q, the estimator s∗2 = (y − Xb∗)'(y − Xb∗)/(n − K + J), where J is the number of restrictions, is unbiased for σ2.
Use the test statistic defined in Exercise 7 to test the hypothesis in Exercise 1. χ2 = λ̂∗'{Est. Var[λ̂∗]}−1λ̂∗ = (n − K)(e∗'e∗ − e'e)/e'e
An alternative way to test the hypothesis Rβ − q = 0 is to use a Wald test of the hypothesis that λ∗ = 0, where λ∗ is defined in (5-23). Prove that the fraction in brackets is the ratio of two
Prove the result that the R2 associated with a restricted least squares estimator is never larger than that associated with the unrestricted least squares estimator. Conclude that imposing
Prove the result that the restricted least squares estimator never has a larger covariance matrix than the unrestricted least squares estimator.
The expression for the restricted coefficient vector in (5-23) may be written in the form b∗ = [I − CR]b + w, where w does not involve b. What is C? Show that the covariance matrix of the
The regression model to be analyzed is y = X1β1 + X2β2 + ε, where X1 and X2 have K1 and K2 columns, respectively. The restriction is β2 = 0.a. Using (5-23), prove that the restricted estimator is
Using the results in Exercise 1, test the hypothesis that the slope on x1 is 0 by running the restricted regression and comparing the two sums of squared deviations. Test the hypothesis that the two
A multiple regression of y on a constant, x1, and x2 produces the following results: ŷ = 4 + 0.4x1 + 0.9x2, R2 = 8/60, e'e = 520, n = 29. Test the hypothesis that the two slopes sum to 1. [29 0 0 50 10
Example 4.10 presents a regression model that is used to predict the auction prices of Monet paintings. The most expensive painting in the sample sold for $33.0135M (log = 17.3124). The height and
In Section 4.7.3, we consider regressing y on a set of principal components, rather than the original data. For simplicity, assume that X does not contain a constant term, and that the K variables
In (4-13), we find that when superfluous variables X2 are added to the regression of y on X1 the least squares coefficient estimator is an unbiased estimator of the true parameter vector, β = (β'1,
Consider a data set consisting of n observations, nc complete and nm incomplete, for which the dependent variable, yi, is missing. Data on the independent variables, xi, are complete for all n
For the simple regression model yi = μ + εi, εi ∼ N[0, σ2], prove that the sample mean is consistent and asymptotically normally distributed. Now consider the alternative estimator μ̂ = Σi
Let ei be the ith residual in the ordinary least squares regression of y on X in the classical regression model, and let εi be the corresponding true disturbance. Prove that plim(ei − εi) = 0.
For the classical normal regression model y = Xβ + ε with no constant term and K regressors, what is plim F[K, n − K] = plim [R2/K]/[(1 − R2)/(n − K)], assuming that the true value of β is zero?
Prove that E[b'b] = β'β + σ2ΣKk=1(1/λk) where b is the ordinary least squares estimator and λk is a characteristic root of X'X.
For the classical normal regression model y = Xβ + ε with no constant term and K regressors, assuming that the true value of β is zero, what is the exact expected value of F[K, n − K] =
Consider the multiple regression of y on K variables X and an additional variable z. Prove that under the assumptions A1 through A6 of the classical regression model, the true variance of the least
The following sample moments for x = [1, x1, x2, x3] were computed from 100 observations produced using a random number generator: The true model underlying these data is y = x1 + x2 + x3 + ε. a.
Prove that the least squares intercept estimator in the classical regression model is the minimum variance linear unbiased estimator.
As a profit-maximizing monopolist, you face the demand curve Q = α+βP+ε. In the past, you have set the following prices and sold the accompanying quantities: Suppose that your marginal cost is
Suppose that the regression model is yi = α + βxi + εi, where the disturbances εi have f (εi) = (1/λ) exp(−εi /λ), εi ≥ 0. This model is rather peculiar in that all the disturbances are
Suppose that the classical regression model applies but that the true value of the constant is zero. Compare the variance of the least squares slope estimator computed without a constant term with
Consider the simple regression yi = βxi + εi where E[ε | x] = 0 and E[ε2 | x] = σ2. a. What is the minimum mean squared error linear estimator of β? b. For the estimator in part a, show that the ratio
Suppose that you have two independent unbiased estimators of the same parameter θ̂, say θ̂1 and θ̂2, with different variances v1 and v2. What linear combination θ̂ = c1 θ̂1 + c2 θ̂2 is
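Unbiasedness of the combination requires c1 + c2 = 1; minimizing the variance over c1 = c then gives inverse-variance weights. A sketch:

```latex
\operatorname{Var}[\hat\theta] = c^2 v_1 + (1-c)^2 v_2,
\qquad
\frac{d}{dc}\operatorname{Var}[\hat\theta] = 2c\,v_1 - 2(1-c)\,v_2 = 0
\;\Longrightarrow\;
c_1 = \frac{v_2}{v_1 + v_2},\qquad c_2 = \frac{v_1}{v_1 + v_2}.
```

The more precise estimator (smaller variance) receives the larger weight.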
In the December 1969 American Economic Review (pp. 886–896), Nathaniel Leff reports the following least squares regression results for a cross section study of the effect of age composition on
Using the matrices of sums of squares and cross products immediately preceding Section 3.2.3, compute the coefficients in the multiple regression of real investment on a constant, real GNP and the
Three variables, N, D, and Y, all have zero means and unit variances. A fourth variable is C = N + D. In the regression of C on Y, the slope is 0.8. In the regression of C on N, the slope is 0.5. In
Suppose that you estimate a multiple regression first with, then without, a constant. Whether the R2 is higher in the second case than the first will depend in part on how it is computed. Using the
A data set consists of n observations on Xn and yn. The least squares estimator based on these n observations is bn = (X'nXn)−1X'nyn. Another observation, xs and ys, becomes available. Prove that the
Prove that the adjusted R2 in (3-30) rises (falls) when variable xk is deleted from the regression if the square of the t ratio on xk in the multiple regression is less (greater) than 1.
Let Y denote total expenditure on consumer durables, nondurables, and services, and let Ed, En, and Es denote the expenditures on the three categories. As defined, Y = Ed + En + Es. Now, consider the
A common strategy for handling a case in which an observation is missing data for one or more variables is to fill those missing variables with 0s and add a variable to the model that takes the value
What is the result of the matrix product M1M where M1 is defined in (3-19) and M is defined in (3-14)?
In the least squares regression of y on a constant and X, to compute the regression coefficients on X, we can first transform y to deviations from the mean y̅ and, likewise, transform each column of
Suppose that b is the least squares coefficient vector in the regression of y on X and that c is any other K × 1 vector. Prove that the difference in the two sums of squared residuals is(y − Xc)'
For the regression model y = α + βx + ε,a. Show that the least squares normal equations imply Σi ei = 0 and Σi xi ei = 0.b. Show that the solution for the constant term is a = y̅ − bx̅.c.