Time Series Analysis and Its Applications: With R Examples, 2nd Edition, by Robert H. Shumway and David S. Stoffer - Solutions
6.11 In §6.3, we discussed that it is possible to obtain a recursion for the gradient vector, -∂ ln L_Y(Θ)/∂Θ. Assume the model is given by (6.1) and (6.2) and that A_t is a known design matrix that does not depend on Θ, in which case Property P6.1 applies. For the gradient vector, show the displayed recursion, where the
6.10 To explore the stability of the filter, consider a univariate state-space model. That is, for t = 1, 2, ..., the observations are y_t = x_t + v_t and the state equation is x_t = φ x_{t-1} + w_t, where σ_w = σ_v = 1 and |φ| < 1. (b) Use the result of (a) to verify that P_t^{t-1} approaches a limit P as t → ∞
6.9 Develop the EM algorithm for the model with inputs, (6.3) and (6.4).
6.8 Consider the model y_t = x_t + v_t, where v_t is Gaussian white noise with variance σ_v^2, the x_t are independent Gaussian random variables with mean zero and var(x_t) = r_t σ_x^2, with x_t independent of v_t, and r_1, ..., r_n are known constants. Show that applying the EM algorithm to the problem of
6.7 Let y_t represent the land-based global temperature series shown in Figure 6.2. The data file for this problem is HL.dat on the website. (a) Using regression, fit a third-degree polynomial in time to y_t, that is, fit the model y_t = β_0 + β_1 t + β_2 t^2 + β_3 t^3 + ε_t, where ε_t is white noise. Do a
6.6 (a) Consider the univariate state-space model given by state conditions x_0 = w_0, x_t = x_{t-1} + w_t and observations y_t = x_t + v_t, t = 1, 2, ..., where w_t and v_t are independent, Gaussian, white noise processes with var(w_t) = σ_w^2 and var(v_t) = σ_v^2. Show the data follow an IMA(1,1) model,
6.5 Derivation of Property P6.2 Based on the Projection Theorem. Throughout this problem, we use the notation of Property P6.2 and of the Projection Theorem given in Appendix B, where H is L^2. If L_{k+1} = sp{y_1, ..., y_{k+1}} and V_{k+1} = sp{y_{k+1} - y_{k+1}^k}, for k = 0, 1, ..., n-1, where y_{k+1}^k is
6.4 Suppose the vector z = (x', y')', where x (p×1) and y (q×1) are jointly distributed with mean vectors μ_x and μ_y and with a partitioned covariance matrix (Σ_xx, Σ_xy; Σ_yx, Σ_yy). Consider projecting x on M = sp{1, y}, say, x̂ = b + By. (a) Show the orthogonality conditions can be written as E(x - b - By) = 0, E[(x - b - By)y'] = 0
6.3 Simulate n = 100 observations from the following state-space model: x_t = .8 x_{t-1} + w_t and y_t = x_t + v_t, where x_0 ~ N(0, 2.78), w_t ~ iid N(0, 1), and v_t ~ iid N(0, 1) are all mutually independent. Compute and plot the data y_t, the one-step-ahead predictors y_t^{t-1}, along with the root mean
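This simulation and the one-step-ahead predictors follow directly from the standard Kalman filter recursions; a minimal Python sketch (the book's examples use R; variable names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, phi, sw2, sv2 = 100, 0.8, 1.0, 1.0

# simulate x_t = .8 x_{t-1} + w_t and y_t = x_t + v_t, with x_0 ~ N(0, 2.78)
x = np.empty(n)
y = np.empty(n)
xprev = rng.normal(0.0, np.sqrt(2.78))
for t in range(n):
    x[t] = phi * xprev + rng.normal(0.0, np.sqrt(sw2))
    y[t] = x[t] + rng.normal(0.0, np.sqrt(sv2))
    xprev = x[t]

# Kalman filter: one-step-ahead predictors y_t^{t-1} and innovation variances
xf, Pf = 0.0, 2.78            # filtered mean/variance for x_0 (its prior)
yhat = np.empty(n)
S = np.empty(n)
for t in range(n):
    xp = phi * xf             # x_t^{t-1}
    Pp = phi**2 * Pf + sw2    # P_t^{t-1}
    yhat[t] = xp              # y_t^{t-1} = x_t^{t-1}, since y_t = x_t + v_t
    S[t] = Pp + sv2           # innovation variance; RMS prediction error is sqrt(S)
    K = Pp / S[t]             # Kalman gain
    xf = xp + K * (y[t] - yhat[t])
    Pf = (1.0 - K) * Pp
```

Plotting y, yhat, and yhat ± sqrt(S) gives the requested display; the prediction variance Pp converges quickly to its steady-state (Riccati) value.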
6.2 Consider the state-space model presented in Example 6.3. Let x_t^{t-1} = E(x_t | y_{t-1}, ..., y_1) and let P_t^{t-1} = E(x_t - x_t^{t-1})^2. The innovation sequence or residuals are ε_t = y_t - y_t^{t-1}, where y_t^{t-1} = E(y_t | y_{t-1}, ..., y_1). Find cov(ε_s, ε_t) in terms of x_t^{t-1} and P_t^{t-1}
6.1 Consider a system process given by x_t = -.9 x_{t-2} + w_t, t = 1, ..., n, where x_0 ~ N(0, σ_0^2), x_{-1} ~ N(0, σ_1^2), and w_t is Gaussian white noise with variance σ_w^2. The system process is observed with noise, say, y_t = x_t + v_t, where v_t is Gaussian white noise with variance σ_v^2.
5.13 Consider the data set containing quarterly U.S. unemployment, U.S. GNP, consumption, and government and private investment from 1948-III to 1988-II. The seasonal component has been removed from the data. Concentrating on unemployment (Ut), GNP (Gt), and consumption (Ct), fit a vector ARMA
5.12 Consider predicting the transformed flows I_t = log i_t from transformed precipitation values P_t = √p_t using a transfer function model of the form (1 - B^12) I_t = α(B)(1 - B^12) P_t + n_t, where we assume that seasonal differencing is a reasonable thing to do. The data are the 454 monthly values of
5.11 The file labeled clim-hyd has 454 months of measured values for the climatic variables air temperature, dew point, cloud cover, wind speed, precipitation (pt), and inflow (it), at Shasta Lake. We would like to look at possible relations between the weather factors and the inflow to Shasta
5.10 Consider the correlated regression model, defined in the text by (5.53), say, y = Zβ + x, where x has mean zero and covariance matrix Γ. In this case, we know that the weighted least squares estimator is (5.54), namely, β_w = (Z'Γ^{-1}Z)^{-1} Z'Γ^{-1} y. Now, a problem of interest in spatial series
5.9 Let St represent the monthly sales data listed in sales.dat (n = 150), and let Lt be the leading indicator listed in lead.dat. Fit the regression model ∇St = β0 + β1∇Lt−3 + xt, where xt is an ARMA process.
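Since sales.dat and lead.dat are not reproduced here, a Python sketch with a synthetic stand-in shows the mechanics of aligning ∇S_t with ∇L_{t-3} before estimating β_0 and β_1 by least squares (all names and values are illustrative; in practice the residuals x_t would then be modeled as an ARMA process):

```python
import numpy as np

rng = np.random.default_rng(1)
n, b0, b1 = 150, 0.5, 3.0

dL = rng.normal(size=n)               # stand-in for the differenced leading indicator
err = rng.normal(scale=0.5, size=n)   # stand-in for the ARMA error x_t
dS = np.full(n, np.nan)
dS[3:] = b0 + b1 * dL[:-3] + err[3:]  # the model: grad S_t = b0 + b1 grad L_{t-3} + x_t

# ordinary least squares on the aligned pairs (grad L_{t-3}, grad S_t)
X = np.column_stack([np.ones(n - 3), dL[:-3]])
beta, *_ = np.linalg.lstsq(X, dS[3:], rcond=None)
```

The lag-3 alignment (dropping the first three differenced observations) is the step that most often goes wrong when fitting this model by hand.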
5.8 The sunspot data are plotted in Chapter 4, Figure 4.31. From a time plot of the data, discuss why it is reasonable to fit a threshold model to the data, and then fit a threshold model.
5.7 The 2×1 gradient vector, l^(1)(α_0, α_1), given for an ARCH(1) model was displayed in (5.41). Verify (5.41) and then use the result to calculate the 2×2 Hessian matrix l^(2)(α_0, α_1), whose entries are the second partial derivatives ∂^2 l/∂α_i ∂α_j for i, j = 0, 1.
5.6 The stats package of R contains the daily closing prices of four major European stock indices; type help(EuStockMarkets) for details. Fit a GARCH model to the returns of these series and discuss your findings.(Note: The data set contains actual values, and not returns. Hence, the data must be
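The note's one concrete requirement is the transformation from index levels to returns before any GARCH fitting; a minimal Python sketch with illustrative prices (the GARCH fit itself could then be done with, e.g., the third-party arch package):

```python
import numpy as np

prices = np.array([100.0, 101.5, 100.8, 102.3, 103.0])  # illustrative index levels
returns = np.diff(np.log(prices))   # log returns r_t = log(p_t) - log(p_{t-1})
```

Log returns are the conventional choice for daily index data; simple returns p_t/p_{t-1} - 1 are nearly identical at this scale.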
5.5 Investigate whether the growth rate of the monthly Oil Prices series exhibits GARCH behavior. If so, fit an appropriate model to the growth rate.
5.4 Investigate whether the monthly returns of a stock dividend yield listed in the file sdyr.dat exhibit GARCH behavior. If so, fit an appropriate model to the returns. The data are monthly returns of a stock dividend yield from January 1947 through May 1993 and are taken from Hamilton and Lin
5.3 Compute the sample ACF of the absolute values of the NYSE returns displayed in Figure 1.4 up to lag 200 and comment on whether the ACF indicates long memory. Fit an ARFIMA model to the absolute values and comment.
5.2 The data in globtemp2.dat are annual global temperature deviations from 1880 to 2004 (there are three columns in the data file; work with the annual means and not the 5-year smoothed data). The data are an update to the Hansen–Lebedeff global temperature data displayed in Figure 1.2. The url
5.1 The data set labeled fracdiff.dat is n = 1000 simulated observations from a fractionally differenced ARIMA(1, d, 0) model with φ = .75 and d = .4. (a) Plot the data and comment. (b) Plot the ACF and PACF of the data and comment. (c) Estimate the parameters and test for the significance of the
4.43 For the zero-mean complex random vector z = x_c - i x_s, with cov(z) = Σ = C - iQ and Σ = Σ*, define w = 2Re(a*z), where a = a_c - i a_s is an arbitrary non-zero complex vector. Prove cov(w) = 2a*Σa. Recall that * denotes the complex conjugate transpose.
4.42 Finish the proof of Theorem C.5.
4.41 Prove Lemma C.4.
4.40 Show that condition (4.41) implies (C.19) under the assumption that w_t ~ wn(0, σ_w^2).
4.39 Let w_t be a Gaussian white noise series with variance σ_w^2. Prove that the results of Theorem C.4 hold without error for the DFT of w_t.
4.38 Consider the two-dimensional linear filter given as the output (4.154). (a) Express the two-dimensional autocovariance function of the output, say, γ_y(h_1, h_2), in terms of an infinite sum involving the autocovariance function of x_s and the filter coefficients a_{s1,s2}. (b) Use the expression
4.37 Consider the same model as in the preceding problem. (a) Prove the optimal smoothed estimator has the form displayed in the text. (c) Compare the mean square error of the estimator in part (b) with that of the optimal finite estimator of the form x̂_t = a_1 y_{t-1} + a_2 y_{t-2} when σ_v^2 = .053, σ_w^2 = .172, and φ_1 = .9.
4.36 Consider the model y_t = x_t + v_t, where x_t = φ_1 x_{t-1} + w_t, such that v_t is Gaussian white noise and independent of x_t with var(v_t) = σ_v^2, and w_t is Gaussian white noise and independent of v_t, with var(w_t) = σ_w^2, and |φ_1| < 1. Show y_t has the ARMA(1,1) spectrum f_y(ω) = σ^2 |1 - θ_1 e^{-2πiω}|^2 / |1 - φ_1 e^{-2πiω}|^2, where θ_1 and σ^2 are determined by φ_1, σ_w^2, and
4.35 Consider the signal plus noise model y_t = Σ_{r=-∞}^{∞} β_r x_{t-r} + v_t, where the signal and noise series, x_t and v_t, are both stationary with spectra f_x(ω) and f_v(ω), respectively. Assuming that x_t and v_t are independent of each other for all t, verify (4.142) and (4.143).
4.34 Figure 4.33 contains 454 months of measured values for the climatic variables air temperature, dew point, cloud cover, wind speed, precipitation, and inflow at Shasta Lake in California. We would like to look at possible relations among the weather factors and between the weather factors and
4.33 Prove the squared coherence ρ_{y·x}^2(ω) = 1 for all ω when y_t = Σ_{r=-∞}^{∞} a_r x_{t-r}, that is, when x_t and y_t can be related exactly by a linear filter.
4.32 Consider the problem of approximating the filter output y_t = Σ_{k=-∞}^{∞} a_k x_{t-k}, for t = M/2 - 1, M/2, ..., n - M/2, where x_t is available for t = 1, ..., n, with ω_k = k/M. Prove
4.31 Using Examples 4.20-4.22 as a guide, perform a dynamic Fourier analysis and wavelet analyses (dwt and waveshrink analysis) on the event of unknown origin that took place near the Russian nuclear test facility in Novaya Zemlya. State your conclusion about the nature of the event at Novaya Zemlya.
4.30 Repeat the wavelet analyses of Examples 4.21 and 4.22 on all earthquake and explosion series in the data file eq+exp.dat. Do the conclusions about the difference between earthquakes and explosions stated in Examples 4.21 and 4.22 still seem valid?
4.29 Repeat the dynamic Fourier analysis of Example 4.20 on the remaining seven earthquakes and seven explosions in the data file eq+exp.dat. Do the conclusions about the difference between earthquakes and explosions stated in the example still seem valid?
4.28 Suppose we wish to test the noise alone hypothesis H0 : xt = nt against the signal-plus-noise hypothesis H1 : xt = st + nt, where st and nt are uncorrelated zero-mean stationary processes with spectra fs(ω) and fn(ω). Suppose that we want the test over a band of L = 2m + 1 frequencies of the
4.27 Suppose a sample time series with n = 256 points is available from the first-order autoregressive model. Furthermore, suppose a sample spectrum computed with L = 3 yields the estimated value f̄_x(1/8) = 2.25. Is this sample value consistent with σ_w^2 = 1, φ = .5? Repeat using L = 11 if we
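The consistency check reduces to comparing 2L · f̄_x(1/8)/f_x(1/8) against χ² quantiles with 2L degrees of freedom, where f_x is the true AR(1) spectral density; a Python sketch of the computation:

```python
import numpy as np
from scipy.stats import chi2

phi, sw2, omega = 0.5, 1.0, 1 / 8
# AR(1) spectral density f_x(w) = sw2 / (1 - 2 phi cos(2 pi w) + phi^2)
f_true = sw2 / (1 - 2 * phi * np.cos(2 * np.pi * omega) + phi**2)

fbar = 2.25
for L in (3, 11):
    df = 2 * L                      # 2L * fbar / f_true is approximately chi^2_{2L}
    stat = df * fbar / f_true
    lo, hi = chi2.ppf([0.025, 0.975], df)
    print(f"L={L}: stat={stat:.2f}, 95% interval ({lo:.2f}, {hi:.2f})")
```

The sample value is "consistent" at a given L when the statistic falls inside the corresponding χ² interval.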
4.26 Fit an autoregressive spectral estimator to the Recruitment series and compare it to the results of Example 4.11.
4.25 Often, the periodicities in the sunspot series are investigated by fitting an autoregressive spectrum of sufficiently high order. The main periodicity is often stated to be in the neighborhood of 11 years. Fit an autoregressive spectral estimator to the sunspot data using a model selection
4.24 Suppose we are given a stationary zero-mean series x_t with spectrum f_x(ω) and then construct the derived series y_t = a y_{t-1} + x_t, t = ±1, ±2, .... (a) Show how the theoretical f_y(ω) is related to f_x(ω). (b) Plot the function that multiplies f_x(ω) in part (a) for a = .1 and for a = .8. This
4.23 Suppose x_t is a stationary series, and we apply two filtering operations in succession, say, y_t = Σ_k a_k x_{t-k} and then z_t = Σ_k b_k y_{t-k}. (a) Show the spectrum of the output is f_z(ω) = |A(ω)|^2 |B(ω)|^2 f_x(ω), where A(ω) and B(ω) are the Fourier transforms of the filter sequences a_t and b_t, respectively. (b) What would be the
4.22 Let x_t = cos(2πωt), and consider the output y_t = Σ_{k=-∞}^{∞} a_k x_{t-k}, where Σ_k |a_k| < ∞. Show that y_t = |A(ω)| cos(2πωt + φ(ω)), where |A(ω)| and φ(ω) are the amplitude and phase of the filter, respectively. Interpret the result in terms of the relationship between the input series, x_t, and the output series, y_t.
4.21 Determine the theoretical power spectrum of the series formed from the white noise series w_t as y_t = w_{t-2} + 4 w_{t-1} + 6 w_t + 4 w_{t+1} + w_{t+2}. Determine which frequencies are present by plotting the power spectrum.
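The frequency response of this symmetric filter can be evaluated directly; since the weights (1, 4, 6, 4, 1) are binomial, A(ω) collapses to 16 cos⁴(πω), so the spectrum is concentrated at low frequencies. A Python sketch:

```python
import numpy as np

sw2 = 1.0
omega = np.linspace(0.0, 0.5, 501)
# frequency response of y_t = w_{t-2} + 4 w_{t-1} + 6 w_t + 4 w_{t+1} + w_{t+2}
A = 6 + 8 * np.cos(2 * np.pi * omega) + 2 * np.cos(4 * np.pi * omega)
f_y = sw2 * A**2          # power spectrum f_y(w) = sw2 * |A(w)|^2
```

Plotting f_y against omega shows a low-pass shape: maximal at ω = 0 and vanishing at the Nyquist frequency ω = 1/2.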
4.20 Consider the bivariate time series records containing monthly U.S. production as measured monthly by the Federal Reserve Board Production Index and unemployment as given in Figure 3.22.(a) Compute the spectrum and the log spectrum for each series, and identify statistically significant peaks.
4.19 For the processes in Problem 4.18,(a) Compute the phase between xt and yt.(b) Simulate n = 1024 observations from xt and yt for φ = .9, σ2 = 1, and D = 1. Then estimate and plot the phase between the simulated series for the following values of L and comment:(i) L = 1, (ii) L = 3, (iii) L =
4.18 Consider two processes xt = wt and yt = φxt−D + vt where wt and vt are independent white noise processes with common variance σ2, φ is a constant, and D is a fixed integer delay.(a) Compute the coherency between xt and yt.(b) Simulate n = 1024 normal observations from xt and yt for φ =
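For part (a), the squared coherence works out to the constant φ²/(1 + φ²) (about .45 for φ = .9), since the delay D affects only the phase. A Python sketch that simulates the pair and estimates coherence with scipy (the segment length is an illustrative choice):

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(2)
n, phi, D = 1024, 0.9, 1
w = rng.normal(size=n + D)
v = rng.normal(size=n)
x = w[D:]                  # x_t = w_t
y = phi * w[:n] + v        # y_t = phi * x_{t-D} + v_t

rho2 = phi**2 / (1 + phi**2)             # theoretical squared coherence (constant in w)
freqs, Cxy = coherence(x, y, nperseg=128)
```

The estimated Cxy fluctuates around rho2 across frequencies; averaging over more segments (smaller nperseg) reduces the variance at the cost of frequency resolution.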
4.17 Analyze the coherency between the temperature and salt data discussed in Problem 4.9. Discuss your findings.
4.16 Consider two time series formed from the white noise series w_t with variance σ_w^2 = 1. (a) Are x_t and y_t jointly stationary? Recall the cross-covariance function must also be a function only of the lag h and cannot depend on time. (b) Compute the spectra f_y(ω) and f_x(ω), and comment on the
4.15 Use Property P4.1 to verify (4.63). Then verify (4.66) and (4.67)
4.14 The periodic behavior of a time series induced by echoes can also be observed in the spectrum of the series; this fact can be seen from the results stated in Problem 4.6(a). Using the notation of that problem, suppose we observe xt = st + Ast−D + nt, which implies the spectra satisfy fx(ω)
4.13 Repeat Problem 4.9 using a nonparametric spectral estimation procedure.In addition to discussing your findings in detail, comment on your choice of a spectral estimate with regard to smoothing and tapering.
4.12 Repeat Problem 4.8 using a nonparametric spectral estimation procedure.In addition to discussing your findings in detail, comment on your choice of a spectral estimate with regard to smoothing and tapering
4.11 Prove the convolution property of the DFT, namely, Σ_s a_s x_{t-s} = Σ_{k=0}^{n-1} d_A(ω_k) d_x(ω_k) exp{2πi ω_k t}, for t = 1, 2, ..., n, where d_A(ω_k) and d_x(ω_k) are the discrete Fourier transforms of a_t and x_t, respectively, and we assume that x_t = x_{t+n} is periodic.
4.10 Let the observed series x_t be composed of a periodic signal and noise so it can be written as x_t = β_1 cos(2πω_k t) + β_2 sin(2πω_k t) + w_t, where w_t is a white noise process with variance σ_w^2. The frequency ω_k is assumed to be known and of the form k/n in this problem. Suppose we consider
4.9 The levels of salt concentration known to have occurred over rows, corresponding to the average temperature levels for the soil science data considered in Figures 1.15 and 1.16, are shown in Figure 4.32. The data are in the file salt.dat, which consists of one column of 128 observations; the
4.8 Figure 4.31 shows the biyearly smoothed (12-month moving average) number of sunspots from June 1749 to December 1978, with n = 459 points that were taken twice per year. With Example 4.9 as a guide, perform a periodogram analysis of the sunspot data (the data are in the file sunspots.dat)
4.7 Suppose x_t and y_t are stationary zero-mean time series with x_t independent of y_s for all s and t. Consider the product series z_t = x_t y_t. Prove the spectral density for z_t can be written as f_z(ω) = ∫_{-1/2}^{1/2} f_x(ω - ν) f_y(ν) dν.
4.6 In applications, we will often observe series containing a signal that has been delayed by some unknown time D, i.e., xt = st + Ast−D + nt, where st and nt are stationary and independent with zero means and spectral densities fs(ω) and fn(ω), respectively. The delayed signal is multiplied
4.5 A first-order autoregressive model is generated from the white noise series w_t using the generating equations x_t = φ x_{t-1} + w_t, where |φ| < 1. (a) Show the power spectrum of x_t is given by f_x(ω) = σ_w^2 / (1 - 2φ cos(2πω) + φ^2). (b) Verify the autocovariance function of this process is γ(h) = σ_w^2 φ^{|h|} / (1 - φ^2), h = 0, ±1, ±2, ..., by showing that the
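Both AR(1) results can be cross-checked numerically: integrating the spectrum over one period must return γ(0) = σ_w²/(1 − φ²), and weighting by cos(2πωh) must return γ(h). A Python sketch with an illustrative φ:

```python
import numpy as np

phi, sw2 = 0.6, 1.0                           # illustrative values, |phi| < 1
omega = np.linspace(-0.5, 0.5, 200001)[:-1]   # one period, duplicate endpoint dropped
f_x = sw2 / (1 - 2 * phi * np.cos(2 * np.pi * omega) + phi**2)

# rectangle rule over one period of a smooth periodic function is highly accurate
gamma0 = f_x.mean()                                 # ~ integral of f_x over (-1/2, 1/2]
gamma1 = (f_x * np.cos(2 * np.pi * omega)).mean()   # ~ gamma(1)
```

The numerical values match γ(0) = 1/(1 − .36) = 1.5625 and γ(1) = φγ(0) = .9375 to high precision.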
4.4 A time series was generated by first drawing the white noise series wt from a normal distribution with mean zero and variance one. The observed series xt was generated from xt = wt − θwt−1, t= 0,±1,±2, . . . , where θ is a parameter.(a) Derive the theoretical mean value and
4.3 Verify (4.5).
4.2 With reference to equations (4.2) and (4.3), let Z_1 = U_1 and Z_2 = -U_2 be independent, standard normal variables. Consider the polar coordinates of the point (Z_1, Z_2), that is, A^2 = Z_1^2 + Z_2^2 and φ = tan^{-1}(Z_2/Z_1). (a) Find the joint density of A^2 and φ, and from the result, conclude that
4.1 Repeat the simulations and analyses in Examples 4.1 and 4.2 with the following changes: (a) Change the sample size to n = 128 and generate and plot the same series as in Example 4.1: x_{t1} = 2 cos(2πt · 6/100) + 3 sin(2πt · 6/100), x_{t2} = 4 cos(2πt · 10/100) + 5 sin(2πt · 10/100), x_{t3} = 6 cos(2πt
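The first two component series and their periodogram can be generated with numpy alone (the third component is truncated above, so it is omitted here); with n = 128 the frequencies 6/100 and 10/100 fall between Fourier frequencies, which produces the leakage this exercise is designed to exhibit:

```python
import numpy as np

n = 128
t = np.arange(1, n + 1)
x1 = 2 * np.cos(2 * np.pi * t * 6 / 100) + 3 * np.sin(2 * np.pi * t * 6 / 100)
x2 = 4 * np.cos(2 * np.pi * t * 10 / 100) + 5 * np.sin(2 * np.pi * t * 10 / 100)
x = x1 + x2

I = np.abs(np.fft.fft(x)) ** 2 / n           # periodogram at frequencies j/n
freqs = np.arange(n) / n
peak = freqs[1:n // 2][np.argmax(I[1:n // 2])]   # dominant frequency below Nyquist
```

The larger-amplitude component (4² + 5² = 41 vs. 2² + 3² = 13) dominates, so the peak lands at the Fourier frequency nearest 10/100, with power smeared into neighboring ordinates.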
3.43 Prove Property P3.2.
3.42 Prove Theorem B.2.
3.41 Use Theorems B.2 and B.3 to verify (3.105).
3.40 Consider the series x_t = w_t - w_{t-1}, where w_t is a white noise process with mean zero and variance σ_w^2. Suppose we consider the problem of predicting x_{n+1}, based on only x_1, ..., x_n. Use the Projection Theorem to answer the questions below. (a) Show the best linear predictor is as displayed in the text. (b) Prove
3.39 Use the Projection Theorem to derive the Innovations Algorithm, Property P3.6, equations (3.68)-(3.70). Then, use Theorem B.2 to derive the m-step-ahead forecast results given in (3.71) and (3.72).
3.38 Suppose x_t = Σ_{j=1}^{p} φ_j x_{t-j} + w_t, where φ_p ≠ 0 and w_t is white noise such that w_t is uncorrelated with {x_k; k < t}. Show that, for n ≥ p, the BLP of x_{n+1} on sp{x_k, k ≤ n} is x̂_{n+1} = Σ_{j=1}^{p} φ_j x_{n+1-j}.
3.37 Fit an appropriate seasonal ARIMA model to the log-transformed Johnson and Johnson earnings series of Example 1.1. Use the estimated model to forecast the next 4 quarters.
3.36 Fit a seasonal ARIMA model of your choice to the U.S. Live Birth Series(birth.dat). Use the estimated model to forecast the next 12 months.
3.35 Fit a seasonal ARIMA model of your choice to the unemployment data displayed in Figure 3.22. Use the estimated model to forecast the next 12 months.
3.34 Sketch the ACF of the seasonal ARMA(0, 1) × (1, 0)_{12} model with Φ = .8 and θ = .5.
3.33 Consider the ARIMA model x_t = w_t + Θ w_{t-2}. (a) Identify the model using the notation ARIMA(p, d, q) × (P, D, Q)_s. (b) Show that the series is invertible for |Θ| < 1. (c) Develop equations for the m-step-ahead forecast, x̃_{n+m}, and its variance based on the infinite past, x_n, x_{n-1}, ....
3.32 One of the series collected along with particulates, temperature, and mortality described in Example 2.2 is the sulfur dioxide series. Fit an ARIMA(p, d, q) model to the data, performing all of the necessary diagnostics. After deciding on an appropriate model, forecast the data into the future
3.31 The second column in the data file globtemp2.dat are annual global temperature deviations from 1880 to 2004. The data are an update to the Hansen-Lebedeff global temperature data and the url of the data source is in the file. Fit an ARIMA(p,d, q) model to the data, performing all of the
3.30 Using the gas price series described in Problem 2.9, fit an ARIMA(p,d, q)model to the data, performing all necessary diagnostics. Comment.
3.29 In Example 3.36, we presented the diagnostics for the MA(2) fit to the GNP growth rate series. Using that example as a guide, complete the diagnostics for the AR(1) fit.
3.28 For the logarithm of the glacial varve data, say x_t, presented in Example 3.31, use the first 100 observations and calculate the EWMA, x̃_{t+1}^t, given in (3.134) for t = 1, ..., 100, using λ = .25, .50, and .75, and plot the EWMAs and the data superimposed on each other. Comment on the
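A Python sketch of the EWMA recursion x̃_{t+1} = (1 − λ)x_t + λx̃_t (the data here are an illustrative stand-in for the log varves, and the startup x̃_1 = x_1 is one common convention; (3.134) gives the book's exact form):

```python
import numpy as np

def ewma_forecasts(x, lam):
    """One-step-ahead EWMA forecasts: s[t] predicts x[t] from x[0..t-1]."""
    s = np.empty(len(x))
    s[0] = x[0]                              # startup convention (illustrative)
    for t in range(1, len(x)):
        s[t] = (1 - lam) * x[t - 1] + lam * s[t - 1]
    return s

x = np.log(np.array([100.0, 105.0, 103.0, 108.0, 110.0]))  # illustrative series
for lam in (0.25, 0.50, 0.75):
    print(lam, np.round(ewma_forecasts(x, lam), 4))
```

Larger λ discounts the most recent observation more heavily, giving smoother but more sluggish forecasts, which is exactly the contrast the exercise asks you to comment on.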
3.27 Verify that the IMA(1,1) model given in (3.131) can be inverted and written as (3.132).
3.26 Suppose y_t = β_0 + β_1 t + ··· + β_q t^q + x_t, where β_q ≠ 0 and x_t is stationary. First, show that ∇^k x_t is stationary for any k = 1, 2, ..., and then show that ∇^k y_t is not stationary for k < q, but is stationary for k ≥ q.
3.25 Forecasting with estimated parameters: Let x_1, x_2, ..., x_n be a sample of size n from a causal AR(1) process, x_t = φ x_{t-1} + w_t. Let φ̂ be the Yule-Walker estimator of φ. (a) Show φ̂ - φ = O_p(n^{-1/2}). See Appendix A for the definition of O_p(·). (b) Let x̂_{n+1}^n be the one-step-ahead
3.24 A problem of interest in the analysis of geophysical time series involves a simple model for observed data containing a signal and a reflected version of the signal with unknown amplification factor a and unknown time delay δ. For example, the depth of an earthquake is proportional to the
3.23 Consider the stationary series generated by x_t = α + φ x_{t-1} + w_t + θ w_{t-1}, where E(x_t) = μ, |φ| < 1, and |θ| < 1. (a) Determine the mean as a function of α for the above model. Find the autocovariance and ACF of the process x_t, and show that the process is weakly stationary. Is the process strictly
3.22 Using Example 3.30 as your guide, find the Gauss–Newton procedure for estimating the autoregressive parameter, φ, from the AR(1) model, xt = φxt−1+wt, given data x1, . . . , xn. Does this procedure produce the unconditional or the conditional estimator? Hint: Write the model as wt(φ) =
3.21 Generate n = 50 observations from a Gaussian AR(1) model with φ = .99 and σw = 1. Using an estimation technique of your choice, compare the approximate asymptotic distribution of your estimate (the one you would use for inference) with the results of a bootstrap experiment (use B = 200).
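A sketch of the comparison (the Yule-Walker estimator for an AR(1) reduces to the lag-1 sample autocorrelation; the residual-resampling scheme below is one standard bootstrap, with illustrative names):

```python
import numpy as np

rng = np.random.default_rng(4)
n, phi, B = 50, 0.99, 200

def yw_phi(x):
    """Yule-Walker estimate of phi for an AR(1): lag-1 sample autocorrelation."""
    x = x - x.mean()
    return (x[:-1] @ x[1:]) / (x @ x)

# one simulated sample (phi = .99 is near the unit root, so asymptotics are shaky)
x = np.empty(n)
x[0] = rng.normal(0, 1 / np.sqrt(1 - phi**2))
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()
phi_hat = yw_phi(x)

# bootstrap: resample centered residuals, regenerate the series, re-estimate
res = x[1:] - phi_hat * x[:-1]
res = res - res.mean()
boot = np.empty(B)
for b in range(B):
    e = rng.choice(res, size=n)
    xs = np.empty(n)
    xs[0] = x[0]
    for t in range(1, n):
        xs[t] = phi_hat * xs[t - 1] + e[t]
    boot[b] = yw_phi(xs)

asy_sd = np.sqrt((1 - phi_hat**2) / n)   # normal-approximation standard error
```

Comparing a histogram of boot against a N(phi_hat, asy_sd²) density typically shows the point of the exercise: near the unit root the bootstrap distribution is skewed and the normal approximation is poor.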
3.20 Generate 10 realizations of length n = 200 of a series from an ARMA(1,1)model with φ1 = .90, θ1 = .2 and σ2 = .25. Fit the model by nonlinear least squares or maximum likelihood in each case and compare the estimators to the true values.
3.19 Generate n = 500 observations from the ARMA model given by xt = .9xt−1 + wt − .9wt−1, with wt ∼ iid N(0, 1). Plot the simulated data, compute the sample ACF and PACF of the simulated data, and fit an ARMA(1, 1) model to the data. What happened and how do you explain the results?
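The point of this exercise is parameter redundancy: the AR and MA polynomials (1 − .9B) cancel, so the simulated series is white noise and the separate ARMA(1,1) parameters are not identifiable. A Python check:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
w = rng.normal(size=n)
x = np.empty(n)
x[0] = w[0]                      # matching startup makes the cancellation exact
for t in range(1, n):
    x[t] = 0.9 * x[t - 1] + w[t] - 0.9 * w[t - 1]

# (1 - .9B) x_t = (1 - .9B) w_t, so x_t - w_t = .9 (x_{t-1} - w_{t-1}) = 0
r1 = np.corrcoef(x[:-1], x[1:])[0, 1]    # lag-1 sample autocorrelation
```

The sample ACF and PACF look like those of white noise, and an ARMA(1,1) fit returns unstable, highly correlated estimates of φ and θ.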
3.18 Suppose x_1, ..., x_n are observations from an AR(1) process with μ = 0. (a) Show the backcasts can be written as x_t^n = φ^{1-t} x_1, for t ≤ 1. (b) In turn, show, for t ≤ 1, the backcasted errors are w̃_t(φ) = x_t^n - φ x_{t-1}^n = φ^{1-t}(1 - φ^2) x_1. (c) Use the result of (b) to show Σ_{t=-∞}^{1} w̃_t^2(φ) = (1 - φ^2) x_1^2
3.17 Let Mt represent the cardiovascular mortality series discussed in Chapter 2, Example 2.2. Fit an AR(2) model to the data using linear regression and using Yule–Walker.(a) Compare the parameter estimates obtained by the two methods.(b) Compare the estimated standard errors of the coefficients
3.16 Verify statement (3.78), that for a fixed sample size, the ARMA prediction errors are correlated.
3.15 Consider the ARMA(1,1) model discussed in Example 3.6, equation (3.26);that is, xt = .9xt−1 + .5wt−1 + wt. Show that truncated prediction as defined in (3.81) is equivalent to truncated prediction using the recursive formula (3.82).
3.14 For an AR(1) model, determine the general form of the m-step-ahead forecast x̃_{t+m}^t and show that E[(x_{t+m} - x̃_{t+m}^t)^2] = σ_w^2 (1 - φ^{2m}) / (1 - φ^2)
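The m-step-ahead MSE for a causal AR(1), σ_w²(1 − φ^{2m})/(1 − φ²), can be verified numerically: assuming the standard forecast x̃_{t+m}^t = φ^m x_t, the forecast error is Σ_{j=0}^{m-1} φ^j w_{t+m-j}. A Python check with illustrative values:

```python
import numpy as np

phi, sw2, m = 0.7, 1.0, 5
# closed form vs. direct geometric sum of the per-step error variances
var_sum = sw2 * sum(phi ** (2 * j) for j in range(m))
var_closed = sw2 * (1 - phi ** (2 * m)) / (1 - phi**2)

# Monte Carlo: error = sum_{j=0}^{m-1} phi^j w_{t+m-j}
rng = np.random.default_rng(5)
w = rng.normal(size=(100_000, m))
err = sum(phi**j * w[:, j] for j in range(m))
```

As m grows, the MSE increases monotonically toward the process variance σ_w²/(1 − φ²), which is the limiting uncertainty of a long-horizon forecast.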
3.13 Suppose we wish to find a prediction function g(x) that minimizes MSE = E[(y - g(x))^2], where x and y are jointly distributed random variables with density function f(x, y). (a) Show that MSE is minimized by the choice g(x) = E(y | x). (b) Apply the above result to the model y = x^2 + z,
3.12 Suppose x_t is stationary with zero mean, and recall the definition of the PACF given by (3.49) and (3.50). That is, let x_h - Σ_{j=1}^{h-1} a_j x_{h-j} and x_0 - Σ_{j=1}^{h-1} b_j x_j be the two residuals, where {a_1, ..., a_{h-1}} and {b_1, ..., b_{h-1}} are chosen so that they minimize the mean-squared errors. The PACF at lag h was defined as the
3.11 In the context of equation (3.56), show that, if γ(0) > 0 and γ(h) → 0 as h→∞, then Γn is positive definite.