Econometric Analysis: An Applied Approach to Business and Economics, 1st Edition, by Sharif Hossain - Solutions
Discuss the OLS and ML methods to estimate the AR, MA, and ARMA models.
Discuss the AIC, SBIC and HQIC criteria to select the AR, MA and ARMA models.
Explain what stylised shapes would be expected for the autocorrelation and partial autocorrelation functions for the following processes: (i) an AR(1), (ii) an AR(2), (iii) an MA(1), (iv) an MA(2), (v) an ARMA(1, 1), (vi) an ARMA(2, 1), and (vii) an ARMA(2, 2).
You obtain the following estimates for an AR(2) model of stock returns data: $y_t = 0.91y_{t-1} + 0.72y_{t-2} + u_t$, where $\{u_t\}$ is a white noise process. By examining the characteristic equation, check whether the model is stationary.
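One way to carry out the check numerically, assuming the reconstructed equation above; this is a sketch in Python, not the book's worked solution:

```python
import numpy as np

# AR(2) from the question: y_t = 0.91*y_{t-1} + 0.72*y_{t-2} + u_t.
# Stationarity requires every root z of 1 - 0.91*z - 0.72*z^2 = 0
# to lie strictly outside the unit circle (|z| > 1).
phi1, phi2 = 0.91, 0.72

# np.roots expects coefficients from the highest power downwards.
roots = np.roots([-phi2, -phi1, 1.0])
print("roots:", np.round(roots, 4))
print("moduli:", np.round(np.abs(roots), 4))
print("stationary:", bool(np.all(np.abs(roots) > 1.0)))  # False here: 0.91 + 0.72 > 1
```

Since one root lies inside the unit circle (equivalently, the coefficients sum to 1.63 > 1), the model is not stationary.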
For the stock market prices, a researcher might suggest the following three different types of models: Model 1: $y_t = y_{t-1} + u_t$, Model 2: $y_t = 0.54y_{t-1} + u_t$, and Model 3: $y_t = 0.85u_{t-1} + u_t$. (i) Write the name of the classes of these models. (ii) What would be the
What are the differences between AR and MA models?
Define an ARMA(p, q) process. Find the mean, variance and autocorrelation function for ARMA(p, q) processes.
How will you detect the order of AR and MA processes by plotting the sample autocorrelation and partial autocorrelation functions? Discuss with an example.
Discuss the invertibility condition of MA processes. Show that an MA process can be converted into an AR($\infty$).
Define partial autocorrelation functions. Discuss the technique to derive the partial autocorrelation functions.
Define an MA(q) process. Find the mean, variance, and autocorrelation function for MA(q) processes.
Define an MA(2) process. Find the mean, variance, and autocorrelation function for MA(2) processes.
Define an MA(1) process. Find the mean, variance, and autocorrelation function for MA(1) processes.
Discuss the Box-Pierce (1970) Q-test and the Ljung-Box (1978) Q-test to test for the significance of higher-order autocorrelation.
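For reference, both statistics have standard closed forms. With $T$ the sample size, $m$ the number of lags tested, and $\hat{\rho}_k$ the $k$th sample autocorrelation, the Box-Pierce and Ljung-Box statistics are, respectively,

$$Q = T\sum_{k=1}^{m}\hat{\rho}_k^2, \qquad Q^* = T(T+2)\sum_{k=1}^{m}\frac{\hat{\rho}_k^2}{T-k},$$

and each is asymptotically $\chi^2_m$ under the null of no autocorrelation up to lag $m$.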
Define a correlogram. How will you detect whether a time-series variable is stationary using a correlogram? Discuss with an example.
Discuss the stationarity condition for AR(p) models. Explain with an example.
Define an AR(p) process. Find the mean, variance, and autocorrelation function for AR(p) processes.
Define an AR(2) process. Find the mean, variance, and autocorrelation function for AR(2) processes.
Define an AR(1) process. Find the mean, variance, and autocorrelation function for AR(1) processes.
Write the names of different econometric models that are widely applicable in time-series econometrics or in financial econometrics.
Define the random walk model with and without drift. Explain it with an example.
Define the following terms with an example of each: (i) Trend stationary process, (ii) Difference stationary process, (iii) White noise process, and (iv) Gaussian white noise process.
What kinds of variables are likely to be non-stationary? How can such variables be made stationary? Explain with an example.
Distinguish between stationarity and non-stationarity of a time-series variable with an example of each.
Write different uses of a time-series analysis.
Define time-series data, time-series variables and time-series econometrics with an example of each.
The logarithmic transformation of the Cobb-Douglas production function of Bangladesh is $\ln(GDP_t) = \beta_0 + \beta_1 \ln(LF_t) + \beta_2 \ln(K_t) + u_t$, where $GDP_t$ = GDP (constant 2010 USD) at time $t$, $LF_t$ = Labor force at time $t$, and $K_t$ = Capital investment at time $t$. The estimated results of the
The data given below are the values of output (OUT, in million $), the input labour (L) and input capital investment (K, in million $) of 26 firms in the year 2018. (i) Estimate the Cobb-Douglas production function. (ii) Test the hypothesis that the estimates are sensitive to sample size using the
Using the data on GDP (constant 2010 US$), capital investment (constant 2010 US$), and the labour force of the USA over a period of time: (i) Estimate the Cobb-Douglas production function. (ii) Test for a structural break in the Cobb-Douglas production function using the Chow test. (iii) Test the
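For the estimation parts of the three questions above, a minimal sketch of the workflow in Python is given below. The file name, column names, and break point are illustrative assumptions, not part of the book's data set.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

# Hypothetical data layout: one row per year, columns GDP, LF (labour force), K (capital).
df = pd.read_csv("us_production.csv")

# Cobb-Douglas in logs: ln(GDP) = b0 + b1*ln(LF) + b2*ln(K) + u
y = np.log(df["GDP"])
X = sm.add_constant(np.log(df[["LF", "K"]]))
pooled = sm.OLS(y, X).fit()
print(pooled.summary())            # b1, b2 are the labour and capital output elasticities

# Chow test for a structural break at an assumed break point t0.
t0 = 40                            # illustrative break position
f1 = sm.OLS(y[:t0], X[:t0]).fit()
f2 = sm.OLS(y[t0:], X[t0:]).fit()
k, n = X.shape[1], len(y)
rss_r = pooled.ssr                 # restricted (no-break) residual sum of squares
rss_u = f1.ssr + f2.ssr            # unrestricted (two sub-samples)
F = ((rss_r - rss_u) / k) / (rss_u / (n - 2 * k))
print("Chow F =", round(F, 3), " p-value =", round(1 - stats.f.cdf(F, k, n - 2 * k), 4))
```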
Discuss the RESET test for misspecification of the functional form.
Discuss the Harvey and Collier (1977) test for model stability.
Discuss the CUSUM and CUSUMSQ tests for model stability.
Discuss the Chow test for parameter structural change with an example.
Explain the meaning of model misspecification with an example.
Discuss Akaike’s Information Criterion (AIC), Schwarz’s Bayesian Information Criterion (SBIC) and the Hannan-Quinn Information Criterion (HQIC) for selecting the lag length of AR, ARMA and ARIMA models.
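One common formulation of the three criteria (other, equivalent likelihood-based scalings exist), with $\hat{\sigma}^2$ the residual variance, $k$ the number of estimated parameters, and $T$ the sample size:

$$\mathrm{AIC} = \ln\hat{\sigma}^2 + \frac{2k}{T}, \qquad \mathrm{SBIC} = \ln\hat{\sigma}^2 + \frac{k}{T}\ln T, \qquad \mathrm{HQIC} = \ln\hat{\sigma}^2 + \frac{2k}{T}\ln(\ln T).$$

The lag length chosen is the one that minimises the criterion; SBIC penalises extra parameters most heavily and HQIC sits between the other two.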
Discuss the stepwise procedure to select the best regression equation with an example.
What are the advantages and disadvantages of the forward selection procedure?
Discuss the forward selection procedure to select the best regression equation with an example.
Write the advantages and disadvantages of the backward elimination procedure.
Discuss the backward elimination procedure to select the best regression equation with an example.
In the all-possible-regressions procedure, discuss the $R^2$ criterion, the residual mean square criterion, and the Mallows $C_p$ statistic for selecting the best regression equation, with an example of each.
Write different techniques to select the best regression equation.
Why do we need to select the best regression equation? Explain with an example.
Let us consider a multiple linear regression equation of the type $y_i = \beta_0 + \beta_1 X_{1i} + \beta_2 X_{2i} + \beta_3 X_{3i} + u_i$. What problems can arise if we apply the OLS method to estimate the parameters when $X_2$ and $X_3$ are linearly related to each other? Justify.
The results of the principal component analysis of the regression equation of carbon emissions on per capita energy use (EN), per capita electricity consumption (ELEC), per capita real GDP (PGDP), urbanisation (UR), and trade openness (OPN) of the USA over the period 1970-2018 are given below: (i) Which
Justify whether the following statements are true or false. Explain: (i) In a multiple linear regression, the linear relationship among the regressors in the sample implies that the effects of a change in an individual regressor cannot be obtained separately. (ii) The OLS estimates will be biased. (iii)
Let us consider a multiple linear regression equation of the type $y_i = \beta_0 + \beta_1 X_{1i} + \beta_2 X_{2i} + \beta_3 X_{3i} + u_i$. How will you apply the principal component procedure to estimate the parameters if there exists a multicollinearity problem?
Discuss the principal component analysis to solve the multicollinearity problem in a multiple linear regression equation.
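The question is expository, but a compact numerical illustration of principal-components regression (synthetic, nearly collinear data; the eigenvalue cut-off is an arbitrary choice for the sketch) is:

```python
import numpy as np

# Principal-components regression sketch on synthetic data.
rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = 0.95 * x1 + 0.05 * rng.normal(size=n)   # nearly collinear with x1
x3 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 + 0.5 * x2 + 1.5 * x3 + rng.normal(size=n)

X = np.column_stack([x1, x2, x3])
Z = (X - X.mean(axis=0)) / X.std(axis=0)     # standardise the regressors

# Principal components come from the eigen-decomposition of the correlation matrix.
eigval, eigvec = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

# Keep only components with non-negligible eigenvalues and regress y on them.
k = int(np.sum(eigval > 0.05))               # illustrative cut-off
PC = Z @ eigvec[:, :k]
gamma, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), PC]), y, rcond=None)

# Transform back to coefficients on the standardised regressors.
beta_std = eigvec[:, :k] @ gamma[1:]
print("eigenvalues:", np.round(eigval, 4))
print("betas on standardised X:", np.round(beta_std, 3))
```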
What is meant by principal component? Explain with an example.
What are the advantages and disadvantages of ridge regression analysis?
Define the ridge trace. Explain how this method can be applied to obtain a ridge estimator.
Find the mean, variance-covariance matrix, and mean squared error of the ridge estimator $\hat{\beta}_R$ of $\beta$.
Discuss the ridge regression estimation technique in case of a non-orthogonal situation.
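A short sketch of the ridge calculation in the standardised (correlation) form, traced over a grid of shrinkage values $k$; the data and the grid are illustrative assumptions:

```python
import numpy as np

# Ridge sketch: beta_R(k) = (R + k I)^{-1} r, with R = X'X/n the correlation
# matrix of the standardised regressors and r = X'y/n. Printing the estimates
# over a grid of k values gives a simple ridge trace.
rng = np.random.default_rng(1)
n = 100
x1 = rng.normal(size=n)
x2 = 0.98 * x1 + 0.02 * rng.normal(size=n)   # severe collinearity
X = np.column_stack([x1, x2])
y = 1.0 + 3.0 * x1 - 1.0 * x2 + rng.normal(size=n)

Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (y - y.mean()) / y.std()
R = Xs.T @ Xs / n
r = Xs.T @ ys / n

for k in [0.0, 0.01, 0.05, 0.1, 0.5, 1.0]:
    beta_k = np.linalg.solve(R + k * np.eye(2), r)
    print(f"k = {k:4.2f}  standardised beta_R = {np.round(beta_k, 3)}")
```

At $k = 0$ this reproduces the (unstable) OLS estimates; the value of $k$ is usually chosen where the trace stabilises.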
Write different estimation techniques that can be applied when a multicollinearity problem exists in a data set.
Discuss the condition number technique for detecting the presence of a multicollinearity problem in a data set.
Discuss Leamer’s method to detect the multicollinearity problem.
What are the advantages of the Farrar and Glauber tests?
Discuss the Farrar and Glauber tests for detecting the presence of a multicollinearity problem in a data set.
Define the variance inflation factor (VIF). Explain how the variance inflation factor (VIF) can be applied to detect the multicollinearity problem.
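A minimal way to compute VIFs directly from the definition $\mathrm{VIF}_j = 1/(1 - R_j^2)$, where $R_j^2$ comes from regressing $X_j$ on the remaining regressors; the data below are synthetic:

```python
import numpy as np

# Synthetic regressors with x2 nearly collinear with x1.
rng = np.random.default_rng(2)
n = 150
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + 0.1 * rng.normal(size=n)
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])

def vif(X, j):
    # Regress column j on a constant plus the other columns, then 1/(1 - R_j^2).
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(X)), others])
    coef, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
    resid = X[:, j] - A @ coef
    r2 = 1.0 - resid.var() / X[:, j].var()
    return 1.0 / (1.0 - r2)

for j in range(X.shape[1]):
    print(f"VIF_{j + 1} = {vif(X, j):.2f}")   # values above ~10 usually flag a problem
```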
Discuss Klein’s rule to detect the problem of multicollinearity in a data set.
In which situation do we have to consider auxiliary regression equations to detect the presence of a multicollinearity problem? Discuss this technique for detecting multicollinearity.
Discuss the pairwise correlation technique, the determinant of the $X'X$ matrix, and the correlation matrix for detecting the presence of a multicollinearity problem.
Write different techniques to detect the presence of a multicollinearity problem in a data set.
Show that the maximum likelihood estimate of $\sigma^2$ cannot be obtained if the multicollinearity is perfect.
Show that the OLS estimator $\hat{\beta}$ of $\beta$ becomes too large in absolute value in the presence of multicollinearity.
Let us consider a multiple linear regression equation of the type $Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \cdots + \beta_k X_k + u$. Let $\hat{\beta}_j$ be the OLS estimate of $\beta_j$ when the $j$th explanatory variable linearly depends on the remaining $(k-1)$ explanatory variables, and $\hat{\beta}_{j0}$ be the OLS estimate of $\beta_j$ in case of
Show that due to a multicollinearity problem, some important variables can be dropped from the multiple linear regression equation.
Let us consider a multiple linear regression equation of the type $y_i = \beta_0 + \beta_1 X_{1i} + \beta_2 X_{2i} + u_i$. (i) Show that the OLS estimators of $\beta_1$ and $\beta_2$ will be undefined if the variables $X_1$ and $X_2$ are perfectly related. (ii) Show that the OLS estimators of $\beta_1$ and $\beta_2$ will be
What are the consequences of multicollinearity?
Write different sources of multicollinearity with an example of each.
Let us consider a multiple linear regression equation of the type $y_i = \beta_0 + \beta_1 X_{1i} + \beta_2 X_{2i} + u_i$. If the variable $X_1$ is linearly related to the variable $X_2$, then show that the parameters $\beta_1$ and $\beta_2$ cannot be estimated separately.
Explain the meaning of multicollinearity with an example.
Let the output function of the UK be given by $GDP_t = GDP_0\, L_t^{\alpha} K_t^{\beta} e^{u_t}$, where $GDP_t$ is the real GDP at time $t$, $L_t$ is the labour force at time $t$, $K_t$ is the capital investment at time $t$, $GDP_0$ is the initial real GDP, and $u_t$ is the random error term corresponding to the $t$th set
Describe in steps how you would obtain the feasible generalized least squares estimators of the regression equation $Y_t = \beta_0 + \beta_1 X_{1t} + \cdots + \beta_k X_{kt} + u_t$ with AR(p) autoregressive errors.
Describe in steps how you would obtain the feasible generalized least squares estimators of the regression equation $Y_t = \beta_0 + \beta_1 X_t + u_t$ with AR(2) autoregressive errors.
Describe in steps how you would obtain the feasible generalized least squares estimators of the regression equation $Y_t = \beta_0 + \beta_1 X_t + u_t$ with AR(1) autoregressive errors.
Explain what is meant by a feasible generalized least squares estimator. Discuss the following methods to estimate feasible generalized least squares estimators of a regression equation if the random error terms are autocorrelated with order 1: (i) Cochrane-Orcutt method, (ii) Hildreth-Lu search
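As a concrete companion to the Cochrane-Orcutt part of the question, a minimal iterative sketch for the AR(1) case is given below; the data-generating values ($\rho = 0.7$, intercept 2.0, slope 1.5) are illustrative assumptions:

```python
import numpy as np

# Cochrane-Orcutt FGLS sketch for Y_t = b0 + b1*X_t + u_t with u_t = rho*u_{t-1} + e_t.
rng = np.random.default_rng(3)
T = 200
x = rng.normal(size=T)
u = np.zeros(T)
for t in range(1, T):
    u[t] = 0.7 * u[t - 1] + rng.normal()
y = 2.0 + 1.5 * x + u

def ols(A, b):
    beta, *_ = np.linalg.lstsq(A, b, rcond=None)
    return beta

X = np.column_stack([np.ones(T), x])
beta = ols(X, y)                                 # start from plain OLS
for _ in range(20):                              # iterate until the estimates settle
    e = y - X @ beta
    rho = (e[1:] @ e[:-1]) / (e[:-1] @ e[:-1])   # AR(1) coefficient of the residuals
    # Quasi-difference: y*_t = y_t - rho*y_{t-1}, and the same for each regressor.
    y_star = y[1:] - rho * y[:-1]
    X_star = X[1:] - rho * X[:-1]                # constant column becomes (1 - rho)
    beta_new = ols(X_star, y_star)               # directly recovers b0 and b1
    if np.allclose(beta_new, beta, atol=1e-8):
        beta = beta_new
        break
    beta = beta_new

print("rho_hat =", round(rho, 3), " beta_hat =", np.round(beta, 3))
```

The Hildreth-Lu alternative replaces the residual-based update of $\rho$ with a grid search over candidate values, keeping the $\rho$ that minimises the residual sum of squares of the quasi-differenced regression.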
Explain why adding a lagged dependent variable and lagged independent variable(s) to the model eliminates the problem of first-order autocorrelation. Give at least two reasons why this is not necessarily a preferred solution.
Let us consider a regression equation in matrix form of the
The first six autocorrelation coefficients are estimated based on a sample of 57 observations and are given below:
Lag: 1, 2, 3, 4, 5, 6
Autocorrelation coefficient: 0.751, 0.512, 0.382, 0.298, 0.209, 0.142
Test each of the individual correlation coefficients for significance and test the significance of all
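Using only the numbers quoted in the question, the individual and joint checks can be reproduced as follows (the 5% level and the normal/chi-square approximations are the standard choices):

```python
import numpy as np
from scipy import stats

# Sample autocorrelations and sample size from the question.
n = 57
rho = np.array([0.751, 0.512, 0.382, 0.298, 0.209, 0.142])
lags = np.arange(1, len(rho) + 1)

# Individual significance: under the null each rho_k is approximately N(0, 1/n),
# so |rho_k| > 1.96/sqrt(n) rejects at the 5% level.
bound = 1.96 / np.sqrt(n)
print("5% bound:", round(bound, 3))
print("individually significant:", rho[np.abs(rho) > bound])

# Joint tests up to lag m = 6, both asymptotically chi-square with m degrees of freedom.
Q_bp = n * np.sum(rho**2)                          # Box-Pierce
Q_lb = n * (n + 2) * np.sum(rho**2 / (n - lags))   # Ljung-Box
crit = stats.chi2.ppf(0.95, df=len(rho))
print(f"Box-Pierce Q = {Q_bp:.2f}, Ljung-Box Q* = {Q_lb:.2f}, chi2(6) 5% critical = {crit:.2f}")
```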
Discuss the Box-Pierce Q-test statistic and Ljung-Box Q test for testing the higher-order autocorrelation problem.
The least squares residuals of a simple linear regression equation are given below: -2.81, 5.63, -4.25, -2.26, -3.37, 2.52, 1.97, 3.86, -0.04, 0.08, -1.04, -0.37, -2.04, -1.59, 3.73, 3.56, -2.59, -0.39, -2.15, 1.98, 2.24, -3.25, -1.56, 0.35, -1.25, 3.51, -0.89, -2.56, -3.22, 1.56. Test the presence of
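One way to test these residuals for non-randomness is a runs (Geary) test; the sketch below applies the usual normal approximation to the listed residuals and is not necessarily the test the book intends:

```python
import numpy as np

# Residuals as listed in the question (punctuation normalised).
e = np.array([-2.81, 5.63, -4.25, -2.26, -3.37, 2.52, 1.97, 3.86, -0.04, 0.08,
              -1.04, -0.37, -2.04, -1.59, 3.73, 3.56, -2.59, -0.39, -2.15, 1.98,
              2.24, -3.25, -1.56, 0.35, -1.25, 3.51, -0.89, -2.56, -3.22, 1.56])

signs = e > 0
n1, n2 = signs.sum(), (~signs).sum()
runs = 1 + int(np.sum(signs[1:] != signs[:-1]))      # number of sign runs

# Normal approximation for the number of runs under randomness.
n = n1 + n2
mean_r = 2 * n1 * n2 / n + 1
var_r = 2 * n1 * n2 * (2 * n1 * n2 - n) / (n**2 * (n - 1))
z = (runs - mean_r) / np.sqrt(var_r)
print(f"runs = {runs}, E(R) = {mean_r:.2f}, z = {z:.2f}")  # |z| > 1.96 rejects at the 5% level
```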
Discuss the run test for testing an autocorrelation problem in a data set.
A least squares method based on 59 observations produces the following results: $\hat{y}_t = 15.0627 + 0.324494x_t$, $R^2 = 0.9657$, SE: (1.0649) (0.0514); $\hat{e}_t = 1.2442e_{t-1} - 0.4658e_{t-2}$, $R^2 = 0.7825$, SE: (0.1193) (0.1190). Test the hypothesis that the true
Discuss the Breusch-Godfrey LM test for the higher-order autocorrelation problem.
The least squares regression based on 45 observations produces the following results: $\hat{y}_t = 839.4854 + 0.3895x_{1t} - 0.19761x_{2t} - 0.22926x_{3t}$, $R^2 = 0.9957$, SE: (395.8126) (0.0366) (0.0213) (0.0176), DW = 1.3887; $\hat{e}_t = 0.3025e_{t-1}$, $R^2 = 0.1256$, SE: (0.1472).
Discuss the Breusch-Godfrey LM test for the first-order autocorrelation problem.
The least squares estimates based on a sample of 65 observations are given below: $\hat{y}_t = 5.7456 + 0.0752x_t$, $R^2 = 0.9675$, SE: (0.3298) (0.0565), $\hat{\rho} = 0.7473$. Test the hypothesis that the true disturbances are not autocorrelated.
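A minimal worked check, using the asymptotic result that $\sqrt{T}\,\hat{\rho} \sim N(0,1)$ under the null of no first-order autocorrelation and the numbers given in the question:

$$\sqrt{T}\,\hat{\rho} = \sqrt{65}\times 0.7473 \approx 6.03 > 1.96,$$

so the null of no autocorrelation is rejected at the 5% level.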
Discuss the asymptotic test for testing the first-order autocorrelation problem.
The least squares regression based on 45 observations produces the following results: $\hat{y}_t = 0.3256 + 1.2561x_t$, $R^2 = 0.9875$, SE: (0.1256) (0.2145), DW = 1.325. Test the hypothesis that the true disturbances are not autocorrelated.
What are the shortcomings of the Durbin-Watson test statistic? Explain how you will tackle the inconclusiveness problem of the Durbin-Watson test.
Let the Durbin-Watson test statistic be 0.96 and the sample size be 45 with three explanatory variables plus a constant term. Perform a test. What is your conclusion?
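A sketch of the reasoning, assuming the usual one-sided Durbin-Watson bounds test (the exact critical bounds must be read from the Durbin-Watson tables for $T = 45$ and three regressors; the 5% lower bound $d_L$ is roughly 1.38):

$$\hat{\rho} \approx 1 - \frac{d}{2} = 1 - \frac{0.96}{2} = 0.52,$$

and since $d = 0.96 < d_L$, the null of no autocorrelation is rejected in favour of positive first-order autocorrelation.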
Explain what is meant by the “inconclusive region” of the Durbin-Watson test. Show it graphically.
What do you understand by the term autocorrelation? An econometrician suspects that the residuals of his considered model might be autocorrelated. Explain the different steps involved in testing this theory using the Durbin-Watson test.
Discuss the von Neumann ratio test for detecting the presence of the autocorrelation problem.
Discuss the graphical method to detect the presence of an autocorrelation problem in a data set.
Show that the least squares estimators will be biased or inconsistent, if the regression equation contains the lagged values of the dependent variable and the random error terms are autocorrelated.
Explain why the Student-t test, F-test and Chi-square test are not applicable to test the significance of the parameters if the random error terms are autocorrelated.
Show that the least squares estimates of the regression coefficients will be inefficient relative to GLS estimates if the random error terms are autocorrelated.
Show that $\widehat{\mathrm{var}}(\hat{\beta})$ underestimates the true variance of $\hat{\beta}$ if the random error terms are autocorrelated.
Show that the BLUE property is not satisfied if the random error terms are autocorrelated.