Questions and Answers of Financial Modeling
Consider the $k$-factor model given in Section 11.2.4. (a) Simulate a sample of 200 observations of 15-dimensional vectors with $k = 3$ factors, assuming $\sigma^2_j = 0.1 + 0.02\,j$, $j = 1{:}12$, $h_1 = 1.5$, $h_2 = 0.5$, $h_3$ …
In the context of selection of the number of factors in Section 11.2.6, show that the integrated likelihood function for the parameters of the $k$-factor model, obtained by integrating out the latent factor …
Consider the one-factor model given by Equation (11.1). (a) Simulate a sample of 200 observations of 12-dimensional vectors, assuming $\sigma^2_j = 0.1 + 0.01\,j$, $j = 1{:}12$, $H = 1$, and $\tau^2 = 0.36$. (b) Implement the MCMC …
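A minimal Python sketch of the simulation in part (a), assuming the variances reconstructed above; the factor loadings are not visible in the preview, so the vector `beta` below is a purely hypothetical choice:

```python
import numpy as np

rng = np.random.default_rng(0)
T, q = 200, 12
sigma2 = 0.1 + 0.01 * np.arange(1, q + 1)   # idiosyncratic variances sigma^2_j, j = 1:12

beta = np.full(q, 0.6)                      # HYPOTHETICAL loadings; not given in the preview
x = rng.normal(0.0, 1.0, size=T)            # common factor with variance H = 1
Y = np.outer(x, beta) + rng.normal(0.0, np.sqrt(sigma2), size=(T, q))   # T x q data matrix
```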
Consider the $k$-factor model presented in Section 11.2.4. Derive the full conditional distributions for the unknown model quantities given in Section 11.2.5.
Consider the $k$-factor model given by Equation (11.8). Show that if no constraints are imposed on $B$ or $x_i$, then the model is not identifiable.
Assume the one-factor model given by Equation (11.1). Derive the full conditional distributions for the unknown model quantities given in Section 11.2.2.
Verify the Wishart evolution distribution theory of Section 10.4.8 in the following general setting. The $q \times q$ precision matrix has the Wishart distribution $W(h, A)$ for some degrees of freedom $h = n + q - 1$, where $n > 0$ so that $h > q - 1$ …
Verify the optimal portfolio construction theory for one-step-ahead portfolio decisions summarized in Section 10.4.7. That is, suppose that the predictive moments $f_t = E(y_t \mid D_{t-1})$ and $V_t = V(y_t \mid D_{t-1})$ …
Develop software to implement the forward filtering/sequential updating analysis, and the retrospective smoothing computations, for the exchangeable time series DLM with time-varying observational …
Use the retrospective filtering analysis of Equation (10.12) to derive a retrospective recursive equation for computing the sequence of estimates $E(\Phi_t \mid D_T)$ over $t = (T-1){:}1$.
Simplify the retrospective filtering of Equation (10.12) for the case of a univariate time series, $q = 1$. Describe how the retrospectively simulated values of past time-varying precisions now depend on a …
Consider two independent $q \times q$ Wishart matrices $S_1 \sim W(\delta_1, A)$ and $S_2 \sim W(\delta_2, A)$, where $\delta_1 = \beta h$ and $\delta_2 = (1-\beta)h$ for some $h > q - 1$ and $\beta \in (0, 1)$. What is the distribution of $S = S_1 + S_2$? Use Corollary 2 of Dawid (1981) to show …
In the exchangeable time series model of Section 10.3, suppose that $G_t = I_p$ for all $t$ and that the $W_t$ sequence is defined via a single discount factor $\delta$, so $R_t = C_{t-1}/\delta$ for all $t$. This models evolution of …
Suppose the $q$-vector $y_t = (y_{t1}, \dots, y_{tq})'$ time series is VAR$_q(1)$ with $y_t = \Phi y_{t-1} + \epsilon_t$ and $\epsilon_t \sim N(0, V)$ independently over time. (a) Take any nonsingular $q \times q$ matrix $A$ and consider the transformation to $x_t = A y_t$ …
Suppose the $q$-vector time series $y_t$ follows a VAR$_q(p)$ model with $y_t = \sum_{r=1}^{p} \Phi_r y_{t-r} + \epsilon_t$, $\epsilon_t \sim N(0, V)$, with independent innovations $\epsilon_t$ over time. (a) Show that the model can be written as $y_t = F\theta_t + \epsilon_t$ for some …
Two stationary, univariate AR(1) processes are driven by correlated innovation sequences. That is, we observe two processes $y_t$ and $z_t$ where $y_t = \phi y_{t-1} + \epsilon_t$, $\epsilon_t \sim N(0, v)$, and $z_t = \gamma z_{t-1} + \nu_t$, $\nu_t \sim N(0, w)$, where, as usual, …
Consider a simple VAR$_2(1)$ model for $x_t = (x_{t1}, x_{t2})'$ given by $x_t = G x_{t-1} + \omega_t$ with $\omega_t = (\omega_{t1}, \omega_{t2})' \sim N(0, W)$. Denote the $(i,j)$ element of $G$ by $g_{ij}$, so the evolution equation element-by-element is $x_{t1} = g_{11} x_{t-1,1} + g_{12} x_{t-1,2} + \omega_{t1}$ …
Simulate three-dimensional data $y_t$ from the following model: $y_t = \begin{pmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix} x_t + \nu_t$ with $\nu_t \sim N(0, 4 I_3)$ and $x_t = (x_{1t}, x_{2t}, x_{3t})'$ such that $x_{1t} = 0.95\,x_{1,t-1} + w_{1t}$, $x_{2t} = 2\,(0.95)\cos(2\pi/18)\,x_{2,t-1} - 0.95^2\,x_{2,t-2} + w_{2t}$, $x_{3t} = 2\,(0.95)$ …
Show the decomposition results for VAR models summarized in Section 9.1.3.
Sketch the MCMC algorithm for obtaining samples from the joint posterior distribution in the dynamical lag/lead model presented in Section 8.3.
Find the phase spectrum for the process in (8.10).
Show that if $y_{t1} \sim N(0, 1)$ for all $t$ and $y_{t2}$ is given by $y_{t2} = \tfrac{1}{3}(y_{t-1,1} + y_{t1} + y_{t+1,1})$, the coherency of the process $y_t = (y_{t1}, y_{t2})'$ is zero.
Consider the two-dimensional process $y_t = (y_{t1}, y_{t2})'$ defined as $y_{t1} = \epsilon_{t1}$ and $y_{t2} = y_{t+d,1} + \epsilon_{t2}$, where the $\epsilon_{ti}$ are mutually independent, zero-mean processes with $\epsilon_{ti} \sim N(0, 1)$. Find the cross-spectrum, the …
Show that the bivariate process defined by (8.10) has the spectrum given in (8.11), and therefore the implied coherency of the process is one.
Show that the coherency can be written in terms of the amplitude and phase spectra, $\alpha_{12}(\omega)$ and $\phi_{12}(\omega)$, as $\alpha_{12}(\omega)\,[f_{11}(\omega)\,f_{22}(\omega)]^{-1/2}\exp\{-i\,\phi_{12}(\omega)\}$.
Show that the cospectrum and the quadrature spectrum defined in (8.7) can be written as $c_{12}(\omega) = \frac{1}{2\pi}\sum_{h=-\infty}^{\infty}\gamma_{12}(h)\cos(h\omega)$ and $q_{12}(\omega) = \frac{1}{2\pi}\sum_{h=-\infty}^{\infty}\gamma_{12}(h)\sin(h\omega)$. In addition, show that $c_{12}(\omega) = c_{21}(\omega)$ and $q_{12}(\omega) = -q_{21}(\omega)$ …
Work out the details of the Gibbs sampling algorithm sketched in Section 8.1 for obtaining samples from the posterior distribution of the parameters in model (8.1). In particular, show that sampling …
Simulate data from the following model: $y_t = \phi^{(1)} y_{t-1} + \epsilon_t$ for $t = 1{:}200$ and $y_t = \phi^{(2)} y_{t-1} + \epsilon_t$ for $t = 201{:}400$, where $\phi^{(1)} = 0.9$, $\phi^{(2)} = -0.9$, and $\epsilon_t \sim N(0, v)$ with $v = 1$. Assuming priors of the form $\phi^{(1)} \sim TN(0.5, 1, R_1)$ and $\phi^{(2)} \sim TN(-0.5, 1, R_2)$ …
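A minimal Python sketch of the simulation step, under the reconstructed values $\phi^{(1)} = 0.9$ and $\phi^{(2)} = -0.9$ (the sign of $\phi^{(2)}$ is inferred from the truncated-normal prior means, so treat it as an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
T, v = 400, 1.0
phi1, phi2 = 0.9, -0.9                # ASSUMED signs; the preview strips minus signs

y = np.zeros(T)
for t in range(1, T):
    phi = phi1 if t < 200 else phi2   # first regime covers observations 1:200
    y[t] = phi * y[t - 1] + rng.normal(0.0, np.sqrt(v))
```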
A stationary, first-order Markov process is generated from $p(x_t \mid x_{t-1})$ given by $x_t \sim N(x_t \mid x_{t-1}, v)$ with probability $\pi$, and $x_t \sim N(x_t \mid 0, s)$ otherwise, …
In the univariate SV model assuming the normal mixture form of $p(\epsilon_t)$, what is the stationary marginal distribution $p(y_t \mid v)$ for any $t$ implied by this model? Investment analysis focuses, in part, on …
An econometrician notes that the assumed normal mixture error distribution used in the SV model analysis is just an approximation to the sampling distribution of the data, and is worried that the …
In the SV model context, derive expressions for the parameters of the component distributions for the MCMC analysis of Section 7.5.4, including the expressions for the sequences of conditional normal …
Generate a reasonably large random sample from the $\chi^2_1$ distribution and look at histograms of the implied values of $\nu_t = \log(\epsilon_t^2)/2$, where $\epsilon_t^2 \sim \chi^2_1$ as in the SV model. Explore the shape of this distribution …
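A quick Python sketch of this experiment, assuming the reconstructed transform $\nu_t = \log(\epsilon_t^2)/2$:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
eps2 = rng.chisquare(df=1, size=100_000)   # draws of eps_t^2 ~ chi^2_1
nu = np.log(eps2) / 2.0                    # reconstructed transform nu_t = log(eps_t^2)/2

plt.hist(nu, bins=200, density=True)       # strongly left-skewed, far from normal
plt.xlabel("nu_t")
plt.show()
```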
Suppose that $x_{t-1} \sim N(x_{t-1} \mid 0, s)$ and $x_t$ is generated by $x_t = z_t x_{t-1} + (1 - z_t)\eta_t$, where $\eta_t \sim N(0, s)$ and $z_t$ is binary with $\Pr(z_t = 1) = a$, and with $x_{t-1}, z_t, \eta_t$ being mutually independent. Here $s, a$ are known. (a) What …
Consider the logistic DLM given by (see Storvik 2002) $y_t \mid \lambda_t \sim \mathrm{Bin}(r, \mathrm{logit}(\alpha + \beta\lambda_t))$, $\lambda_t \sim N(\phi\lambda_{t-1}, w)$. Simulate $T = 300$ observations from this model with $\phi = 0.9$, $w = 1$, and $\alpha = \beta = 0.5$. (a) Assume that $\alpha$ and $\beta$ are known. Implement a …
Consider the AR(1) plus noise model in Example 6.6. (a) Run Liu and West's algorithm with different discount factors. (b) Implement the algorithm of Liu and West (2001) using an importance density that …
Consider the fat-tailed nonlinear state-space model studied in Carvalho, Johannes, Lopes, and Polson (2010), given by $y_t = \theta_t + \sqrt{v\lambda_t}\,\epsilon_t$, $\theta_t = \theta_{t-1}/(1 + \theta_{t-1}^2) + \sqrt{w}\,\eta_t$, where $\epsilon_t \sim N(0,1)$, $\eta_t \sim N(0,1)$, and $\lambda_t \sim IG(\nu/2, \nu/2)$. (a) …
Consider the PACF TVAR(2) parameterization in Example 5.4. Simulate data $x_{1:T}$ from this model. (a) Sketch and implement an SMC algorithm for filtering and smoothing, assuming that $v$ and $w$ are known. (b) …
Consider the AR(1) state plus noise model discussed in Example 6.6. Implement the algorithm of Storvik (2002) for filtering with parameter estimation and compare it to other SMC algorithms such as that …
Consider the dynamic trend model $\{F, G, v(\psi_1), W(\psi_2, \psi_3)\}$ introduced by Harrison and Stevens (1976) and revisited in Frühwirth-Schnatter (1994), where $F = (1, 0)'$, $G = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$, $v(\psi_1) = \psi_1$, and $W(\psi_2, \psi_3) = G\,\mathrm{diag}(\psi_2, \psi_3)\,G' = \begin{pmatrix} \psi_2 + \psi_3 & \psi_3 \\ \psi_3 & \psi_3 \end{pmatrix}$ …
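A short numeric check of the reconstructed identity $W(\psi_2, \psi_3) = G\,\mathrm{diag}(\psi_2, \psi_3)\,G'$, with arbitrary illustrative values for $\psi_2, \psi_3$:

```python
import numpy as np

G = np.array([[1.0, 1.0],
              [0.0, 1.0]])
psi2, psi3 = 0.5, 0.2                       # arbitrary illustrative values

W = G @ np.diag([psi2, psi3]) @ G.T
expected = np.array([[psi2 + psi3, psi3],
                     [psi3,        psi3]])
assert np.allclose(W, expected)             # matches the closed form in the problem
```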
Consider again the AR(1) model with mixture observational errors described in Example 4.10. Modify the MCMC algorithm in order to perform posterior inference when $\lambda_t$ has the following Markovian …
Derive the conditional distributions for posterior MCMC simulation in Example 4.10, verifying that the algorithm outlined there is correct.
Consider the following model: $y_t = x_t + \epsilon_t$, $\epsilon_t \sim N(0, \sigma^2)$, and $x_t = \sum_{j=1}^{p} \phi_j x_{t-j} + \nu_t$, $\nu_t \sim N(0, \tau^2)$. This model is a simplified version of that proposed in West (1997c). Develop the conditional distributions required to define an …
Go to Google Trends and download the monthly data for searches of the term "time series" in the U.S. and the rest of the world, from January 2004 until December 2019. For each of these two time …
Consider the observational variance discount model of Section 4.3.7. You may use the results from Problem 10. (a) Show that the time $t-1$ prior $(\phi_{t-1} \mid D_{t-1}) \sim G(n_{t-1}/2, d_{t-1}/2)$, combined with the beta-gamma …
The basic distribution theory in this question underlies the discount volatility model of Section 4.3.7 and the results to be shown below in Problem 11. Two positive scalar random quantities $\phi_0$ and $\phi_1$ …
Work through the key results of Section 4.3.5 to ensure understanding of the role of the Markovian structure of a DLM in retrospective analysis. Do this in a DLM which, for all time $t$, has known …
Consider the three DLMs below, each with a 2-dimensional state vector $\theta_t = (\theta_{t1}, \theta_{t2})'$. Each model is defined by the constant $F, G$ elements shown. For each of these DLMs, give details of the implied form of …
A DLM has the forecast function defined over $k = 0, 1, \dots$ from current time $t$ by $f_t(k) = a_{t1} + a_{t2}k + a_{t3}r^k\cos(2\pi k/\lambda + c_t)$ for some positive wavelength $\lambda$ and some positive number $r < 1$, and where the quantities $a_{t1}$ …
A DLM for the univariate series $y_t$ is given by $y_t = F'\theta_t + \nu_t$ where $\nu_t \sim N(0, v)$, and $\theta_t = G\theta_{t-1} + \omega_t$ where $\omega_t \sim N(0, vW)$, with the usual conditional independence assumptions. All model parameters $F, v, G, W$ are known and …
Consider a dynamic regression DLM for a univariate time series, namely $y_t = F_t'\theta_t + \nu_t$ with $\nu_t \sim N(0, v)$ and $v$ known. Suppose a random walk evolution for $\theta_t$, so that $G = I$ and $\theta_t = \theta_{t-1} + \omega_t$ with $\omega_t \sim N(0, vW_t)$, where $W_t$ …
For a univariate series $y_t$, consider the simple first-order polynomial (locally constant) DLM with local level $\mu_t$ at time $t$. The $p = 1$-dimensional state is $\theta_t = \mu_t$, while $F_t = 1$ and $G_t = 1$ for all $t$. Also, …
Show that the smoothing equations in (4.10) and (4.11) can be written … a single discount factor $\delta \in (0, 1]$ is used to … we have the summary posterior $(\theta_{t-1} \mid D_{t-1}) \sim N(m_{t-1}, C_{t-1})$ and the state vector evolves through the state equation $\theta_t =$ …
Assuming a DLM structure given by $\{F_t, G_t, v_t, W_t\}$, find the distributions of $(\theta_{t+k}, \theta_{t+j} \mid D_t)$, $(y_{t+k}, y_{t+j} \mid D_t)$, $(\theta_{t+k}, y_{t+j} \mid D_t)$, $(y_{t+k}, \theta_{t+j} \mid D_t)$, and $(\theta_{t-k-j}, \theta_{t-k} \mid D_t)$.
Let $x_t$ be a stationary AR(1) process given by $x_t = \phi x_{t-1} + \epsilon^x_t$ with $\epsilon^x_t \sim N(0, v_x)$. Let $y_t = x_t + \epsilon^y_t$, with $\epsilon^y_t$ uncorrelated with the process $x_t$ and $\epsilon^y_t \sim N(0, v_y)$. (a) Find the spectrum of $y_t$. (b) Simulate 500 …
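For part (a), the spectrum of $y_t$ is the AR(1) spectrum of $x_t$ plus the flat white-noise spectrum of $\epsilon^y_t$. A Python sketch under one common normalization convention, with illustrative parameter values (the problem leaves them free):

```python
import numpy as np
import matplotlib.pyplot as plt

phi, vx, vy = 0.8, 1.0, 0.5               # illustrative values only
w = np.linspace(0.01, np.pi, 500)

f_x = vx / (2 * np.pi * (1 - 2 * phi * np.cos(w) + phi**2))   # AR(1) spectrum
f_y = f_x + vy / (2 * np.pi)                                  # plus flat noise spectrum

plt.plot(w, f_y)
plt.xlabel("frequency")
plt.ylabel("f_y")
plt.show()
```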
Let $y_t = \phi_1 y_{t-1} + \phi_2 y_{t-2} + \epsilon_t$ with $\epsilon_t \sim N(0, 1)$. Plot the spectra of $y_t$ in the following cases: (a) when the AR(2) characteristic polynomial has two real reciprocal roots given by $r_1 = 0.9$ and $r_2 = 0.95$; (b) when …
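A Python sketch for case (a), using the standard map from reciprocal roots to AR(2) coefficients, $\phi_1 = r_1 + r_2$ and $\phi_2 = -r_1 r_2$; the root signs are taken as printed in the preview:

```python
import numpy as np
import matplotlib.pyplot as plt

def ar2_spectrum(r1, r2, v=1.0, n=500):
    phi1, phi2 = r1 + r2, -r1 * r2        # reciprocal roots -> AR coefficients
    w = np.linspace(0.01, np.pi, n)
    z = np.exp(-1j * w)
    return w, v / (2 * np.pi * np.abs(1 - phi1 * z - phi2 * z**2) ** 2)

w, f = ar2_spectrum(0.9, 0.95)            # case (a): two real reciprocal roots
plt.semilogy(w, f)
plt.xlabel("frequency")
plt.show()
```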
Show that the spectral densities of AR(1) and MA(1) processes are given by (3.13) and (3.14), respectively.
Consider the three time series of concentrations of luteinizing hormone in blood samples from Diggle (1990). Perform a Bayesian spectral analysis of these series based on the single-harmonic …
Consider the Southern Oscillation Index series (soi.dat) shown in Figure 1.7(a). Perform a Bayesian spectral analysis of this series.
Consider the UK gas consumption series ukgasconsumption.dat analyzed in Example 3.3. Analyze this series with models that include only the significant harmonics for $p = 12$. Compare the fitted values …
Show that the reference analysis of the model in (3.2) leads to the expressions of $p(\beta, v \mid y)$, $p(y)$, and $p(\beta \mid y)$ given in Section 3.1.1.
Suppose $y_t$ follows a stationary AR(1) process with AR parameter $\phi$ and innovation variance $v$, with $(\phi, v)$ uncertain. At any time $t$, write $D_t$ for the past data and information, including all past …
Two univariate time series $y_t, z_t$ follow the coupled dynamic models, over $t = 1, 2, \dots$, given by $y_t = \phi y_{t-1} + \gamma z_t + \epsilon_t$ and $z_t = \rho z_{t-1} + \nu_t$, where $\epsilon_t \sim N(0, v)$ and $\nu_t \sim N(0, u)$ are independent and mutually independent innovation sequences.
This question concerns the alternative state-space representation of an AR(p) model that arises as a special case of the state-space representation of ARMA(p, q) models when $q = 0$. This is also easily seen to be …
Consider the detrended oxygen isotope data analyzed in Chapter 5 (see also Aguilar, Huerta, Prado, and West 1999). (a) Under AR models, use the AIC/BIC criteria to obtain the model order $p$ that is the …
Consider an ARMA(1,1) process with AR parameter $\phi$, MA parameter $\theta$, and variance $v$. (a) Simulate 400 observations from a process with $\phi = 0.9$, $\theta = 0.6$, and $v = 1$. (b) Compute the conditional least squares estimates of …
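A minimal Python sketch of the simulation in part (a), under the reconstructed values $\phi = 0.9$, $\theta = 0.6$, $v = 1$:

```python
import numpy as np

rng = np.random.default_rng(0)
T, phi, theta, v = 400, 0.9, 0.6, 1.0
eps = rng.normal(0.0, np.sqrt(v), size=T + 1)

y = np.zeros(T + 1)
for t in range(1, T + 1):
    y[t] = phi * y[t - 1] + theta * eps[t - 1] + eps[t]   # ARMA(1,1) recursion
y = y[1:]                                                 # drop the initialization point
```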
For the EEG data discussed in Section 2.3, perform the AR(8) Bayesian reference analysis as described there. (a) Draw histograms of the marginal posterior distributions of the model coefficients $\phi_j$ for $j$ …
Consider the quarterly US macro-economic data in Figure 2.15. Let $y_t$ be the implied series of quarterly changes (i.e., difference values of quarterly actual inflation levels). Fit the reference …
Sample 500 observations from a stationary AR(4) process with two complex pairs of conjugate reciprocal roots. More specifically, assume that one of the complex pairs has modulus $r_1 = 0.9$ and frequency …
Figure 2.14 plots the monthly changes in the US S&P stock market index over 1965 to 2016. Consider an AR(1) model as a very simple exploratory model for understanding local dependencies, but not for …
Consider a process of the form $y_t = -2t + \epsilon_t + 0.5\,\epsilon_{t-1}$, with $\epsilon_t \overset{iid}{\sim} N(0, v)$. (a) Find the ACF of this process. (b) Now define $z_t = y_t - y_{t-1} + 2$. What kind of process is this? Find its ACF.
Consider the AR(2) process $y_t = \phi_1 y_{t-1} + \phi_2 y_{t-2} + \epsilon_t$, with $\epsilon_t \sim N(0, v)$ independent, $\phi_1 = 0.9$, and $\phi_2 = -0.9$. Is this process stable? If so, write the process as an infinite-order MA process, $y_t = \sum_{j=0}^{\infty}\psi_j\epsilon_{t-j}$. Find $\psi_j$ …
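The MA($\infty$) weights satisfy $\psi_0 = 1$, $\psi_1 = \phi_1$, and $\psi_j = \phi_1\psi_{j-1} + \phi_2\psi_{j-2}$ for $j \ge 2$. A Python sketch, assuming the sign reconstruction $\phi_2 = -0.9$ (with both coefficients at $+0.9$ the process would not be stable, so the question's "if so" suggests the negative sign):

```python
import numpy as np

phi1, phi2 = 0.9, -0.9        # ASSUMED sign for phi2; see note above

J = 50
psi = np.zeros(J + 1)
psi[0], psi[1] = 1.0, phi1
for j in range(2, J + 1):
    psi[j] = phi1 * psi[j - 1] + phi2 * psi[j - 2]   # psi_j recursion
print(psi[:10])               # weights decay toward zero, consistent with stability
```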
Consider the infinite-order MA process defined by $y_t = \epsilon_t + a(\epsilon_{t-1} + \epsilon_{t-2} + \cdots)$, where $a$ is a constant and the $\epsilon_t$s are i.i.d. $N(0, v)$ random variables. (a) Show that $y_t$ is nonstationary. (b) Consider the series of first …
Let $x_t$ be an AR(p) process with characteristic polynomial $\Phi_x(u)$ and $y_t$ be an AR(q) process with characteristic polynomial $\Phi_y(u)$. What is the structure of the process $z_t$ with $z_t = x_t + y_t$?
Consider an MA(2) process. (a) Find its ACF. (b) Use the innovations algorithm to obtain the one-step-ahead predictor and its mean square error.
Consider the ARMA(1,1) model described by $y_t = 0.95\,y_{t-1} + 0.8\,\epsilon_{t-1} + \epsilon_t$, with $\epsilon_t \sim N(0, 1)$ for all $t$. (a) Show that the one-step-ahead truncated forecast is given by $y^t_{t+1} = 0.95\,y_t + 0.8\,\epsilon^t_t$, with $\epsilon^t_t$ computed …
You observe $y_t = x_t + \nu_t$, $t = 1, 2, \dots$, where $x_t$ follows a stationary AR(1) process with AR parameter $\phi$ and innovation variance $v$; i.e., $x_t = \phi x_{t-1} + \epsilon_t$ with independent innovations $\epsilon_t \sim N(0, v)$. Assume all parameters …
Suppose you observe $y_t = x_t + \nu_t$ where: $x_t$ follows a stationary AR(1) process with AR parameter $\phi$ and innovation variance $v$, i.e., $x_t = \phi x_{t-1} + \epsilon_t$ with independent innovations $\epsilon_t \sim N(0, v)$; the $\nu_t$ are independent …
Consider the AR(1) model given by $(1 - \phi B)(y_t - \mu) = \epsilon_t$, where $\epsilon_t \sim N(0, v)$. (a) Find the MLEs for $\phi$ and $v$ when $\mu = 0$. (b) Assume that $v$ is known, $\mu = 0$, and that the prior distribution for $\phi$ is $U(0, 1)$. Find an …
Show that Equations (2.38) and (2.39) hold by taking expected values in (2.36) and (2.37) with respect to the whole past history $y_{1:t}$.
Find the ACF of a general ARMA(1,1) process.
Verify the ACF of an MA(q) process given in (2.33).
Show that a prior on the vector of AR(p) coefficients of the form $\phi_1 \sim N(\phi_1 \mid 0, w\,\delta_1^{-1})$ and $\phi_j \sim N(\phi_j \mid \phi_{j-1}, w\,\delta_j^{-1})$ for $1 < j \le p$ can be written as $p(\phi) = N(\phi \mid 0, A^{-1}w)$, where $A = H'\Delta H$ with $H$ and $\Delta$ defined in Section 2.4.2.
Verify that the expressions for the conditional posterior distributions in Section 2.4.1 are correct.
Plot the corresponding forecast functions for the AR(2) processes considered in Example 2.1.
Show that if an AR(2) process has a pair of complex roots given by $r\exp(\pm i\omega)$, they can be written in terms of the AR coefficients as $r = \sqrt{-\phi_2}$ and $\cos(\omega) = \phi_1/(2r)$.
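A quick numeric check of these relations with an arbitrary AR(2) having complex reciprocal roots:

```python
import numpy as np

phi1, phi2 = 0.5, -0.81                      # any AR(2) with complex reciprocal roots
roots = np.roots([1.0, -phi1, -phi2])        # reciprocal roots solve u^2 - phi1*u - phi2 = 0
r, omega = np.abs(roots[0]), np.angle(roots[0])

assert np.isclose(r, np.sqrt(-phi2))               # r = sqrt(-phi_2)
assert np.isclose(np.cos(omega), phi1 / (2 * r))   # cos(omega) = phi_1 / (2r)
```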
Show that, when the characteristic roots are all different, the forecast function of an AR(p) process has the representation given in (2.8).
Show that the general solution of the homogeneous difference equation (2.9) has the form (2.10).
Consider the AR(2) series $y_t = \phi_1 y_{t-1} + \phi_2 y_{t-2} + \epsilon_t$ with $\epsilon_t \sim N(0, v)$. Following Section 2.1.2, rewrite the model in the standard DLM form $y_t = F'x_t$ and $x_t = G x_{t-1} + F\epsilon_t$, where $F = (1, 0)'$, $x_t = (y_t, y_{t-1})'$, and $G = \begin{pmatrix} \phi_1 & \phi_2 \\ 1 & 0 \end{pmatrix}$. We know …
Show that the eigenvalues of the matrix G given by (2.7) correspond to the reciprocal roots of the AR(p) characteristic polynomial.
This question concerns a time series model for continuous and positive outcomes $y_t$. Suppose a series $x_t$ follows a stationary AR(1) model with parameters $(\phi, v)$ and the usual normal innovations. Define a …
Consider an AR(2) process with AR coefficients $\phi = (\phi_1, \phi_2)'$. (a) Show that the process is stationary for parameter values lying in the region $-1 < \phi_2$ …
Suppose $y_t$ follows a stationary AR(1) model with AR parameter $\phi$ and innovation variance $v$. Define $x = (y_1, \dots, y_n)'$. We know that $x \sim N(0, s\,R_n)$, where $s = v/(1 - \phi^2)$ is the marginal variance of the $y_t$ process and the correlation matrix $R_n$ …
Consider the AR(1) process $y_t = \phi y_{t-1} + \epsilon_t$ with $\epsilon_t \sim N(0, v)$. Show that the process is nonstationary when $\phi = 1$.
Consider the AR(1) process $y_t = \phi y_{t-1} + \epsilon_t$ with $\epsilon_t \sim N(0, v)$. If …
Sample $T = 1000$ observations from model (1.1) with $d = 1$, using your preferred choice of values for $\phi^{(i)}$ and $v_i$ for $i = 1, 2$, with $|\phi^{(i)}| < 1$ and $\phi^{(1)} \ne \phi^{(2)}$. Assuming that $d$ is known and using prior …
Refer to the conjugate analysis of the AR(1) model in Example 1.7. Using the fact that $(\phi \mid y, F, v) \sim N(m, vC)$, find the posterior mode of $v$ via the EM algorithm.
Consider the following models: $y_t = \phi_1 y_{t-1} + \phi_2 y_{t-2} + \epsilon_t$ (1.26) and $y_t = a\cos(2\pi\omega_0 t) + b\sin(2\pi\omega_0 t) + \epsilon_t$ (1.27), with $\epsilon_t \sim N(0, v)$ and $\omega_0 > 0$ fixed. (a) Sample $T = 200$ observations from each model using your favorite choice of the …
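A minimal Python sketch of part (a); all parameter values below are arbitrary choices, as the problem invites:

```python
import numpy as np

rng = np.random.default_rng(0)
T, v = 200, 1.0
t = np.arange(1, T + 1)

# model (1.26): AR(2) with arbitrary coefficients in the stationary region
phi1, phi2 = 0.9, -0.5
y1 = np.zeros(T)
for k in range(2, T):
    y1[k] = phi1 * y1[k - 1] + phi2 * y1[k - 2] + rng.normal(0, np.sqrt(v))

# model (1.27): harmonic regression with arbitrary a, b and fixed omega_0
a, b, omega0 = 2.0, -1.0, 1.0 / 20
y2 = a * np.cos(2 * np.pi * omega0 * t) + b * np.sin(2 * np.pi * omega0 * t) \
     + rng.normal(0, np.sqrt(v), size=T)
```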
Show that the distributions of $(\phi \mid y, F)$ and $(v \mid y, F)$ obtained for the AR(1) conjugate analysis using the conditional likelihood are those given in Example 1.7.
Show that the expressions for the distributions of $(\phi \mid y, F, v)$, $(v \mid y, F)$, and $(\phi \mid y, F)$ under the conjugate analysis in Section 1.5.3 are those given on page 24.
Showing questions 1–100 of 888.