Spatial Analysis 1st Edition John T. Kent, Kanti V. Mardia - Solutions
8.3 In the same setting as Exercise 8.2, suppose it is desired to predict a new signal $S_0$ given observations $Z = [Z_1, \ldots, Z_n]^T$. Here, it is assumed that $[S_0, S^T]^T$ is jointly multivariate normal with $E(S_0) = \mu_0$, $\operatorname{var}(S_0) = \sigma_{00}$, and $\operatorname{cov}(S_0, S_i) = \sigma_{0i}$, $i = 1, \ldots, n$. The best linear predictor …
8.2 Let $S \sim N_n(\mu, \Sigma)$ be a multivariate normal latent "signal," and, given $S$, consider independent Poisson distributed observations $Z_i \mid S = s \sim P(\lambda_i)$, where $\log \lambda_i = s_i$, $i = 1, \ldots, n$. Show that the first two moments of the observations and signal are given by $E(Z_i) = \exp(\mu_i + \tfrac{1}{2}\sigma_{ii})$ …
8.1 Let $Y \sim N_n(\mu, \Sigma)$ follow a multivariate normal distribution and set $X_i = \exp(Y_i)$, $i = 1, \ldots, n$. The purpose of this exercise is to find the moments of $X$. They are most easily calculated using the moment generating function for $Y$, $M(u) = E\{\exp(u^T Y)\} = \exp(u^T \mu + \tfrac{1}{2} u^T \Sigma u)$, as a function of …
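The MGF identity in Exercise 8.1 gives the lognormal moments directly: taking $u$ to be the $i$th coordinate vector yields $E(X_i) = \exp(\mu_i + \tfrac{1}{2}\sigma_{ii})$, the formula quoted in Exercise 8.2. A minimal Monte Carlo sketch of this fact (the particular $\mu$ and $\Sigma$ below are illustrative choices, not from the book):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from the text).
mu = np.array([0.5, -0.2])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 0.5]])

# Monte Carlo estimate of E(X_i) with X_i = exp(Y_i), Y ~ N(mu, Sigma).
Y = rng.multivariate_normal(mu, Sigma, size=500_000)
mc_mean = np.exp(Y).mean(axis=0)

# Closed form from the MGF: E(X_i) = exp(mu_i + Sigma_ii / 2).
exact = np.exp(mu + np.diag(Sigma) / 2)

print(mc_mean)  # agrees with `exact` up to Monte Carlo error
print(exact)
```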
7.9 The ordinary kriging predictor and kriging variance for a stationary random field in Theorem 7.1 have been expressed in terms of the covariance function $\sigma(h)$. These formulas can also be expressed in terms of the …
7.8 Let $U(t)$ be a stationary process, $t \in \mathbb{R}^2$, with a covariance function $\sigma(h)$ and an unknown mean $\mu$. Consider $n = 3$ sites lying on an equilateral triangle with coordinates $\begin{bmatrix} t_1^T \\ t_2^T \\ t_3^T \end{bmatrix} = \begin{bmatrix} -\sqrt{3}/2 & -1/2 \\ 0 & 1 \\ \sqrt{3}/2 & -1/2 \end{bmatrix}$. The triangle is …
7.7 Verify the formulas for Bayesian kriging prediction in Section 7.12. In particular, using the Woodbury formula for the inverse of a matrix, $(\Omega + F \Delta F^T)^{-1} = \Omega^{-1} - \Omega^{-1} F D^{-1} F^T \Omega^{-1}$, where $D = \Delta^{-1} + F^T \Omega^{-1} F$ (see Section A.3.6), show that the formula for $\beta_\Delta$ in the posterior …
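The Woodbury identity quoted in the exercise is easy to sanity-check numerically on random matrices. A sketch (the dimensions, and the way $\Omega$ and $\Delta$ are made positive definite, are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 6, 2

# Random symmetric positive definite Omega (n x n) and Delta (p x p).
A = rng.standard_normal((n, n)); Omega = A @ A.T + n * np.eye(n)
B = rng.standard_normal((p, p)); Delta = B @ B.T + p * np.eye(p)
F = rng.standard_normal((n, p))

Oi = np.linalg.inv(Omega)
D = np.linalg.inv(Delta) + F.T @ Oi @ F

lhs = np.linalg.inv(Omega + F @ Delta @ F.T)
rhs = Oi - Oi @ F @ np.linalg.inv(D) @ F.T @ Oi

print(np.allclose(lhs, rhs))  # True
```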
7.6 Exercise 7.5 can also be extended to unequally spaced time points $t_1 < \cdots < t_n$. Show that in this case $B$ is a tri-diagonal matrix with diagonal elements $b_{ii} = \begin{cases} \frac{1}{2(t_2 - t_1)}, & i = 1, \\ \frac{1}{2(t_i - t_{i-1})} + \frac{1}{2(t_{i+1} - t_i)}, & i = 2, \ldots, n-1, \\ \frac{1}{2(t_n - t_{n-1})}, & i = n, \end{cases}$ and with super- …
7.5 Consider the linear intrinsic covariance function $\sigma_I(h) = -|h|$, $h \in \mathbb{R}$, in one dimension. Consider equally spaced sites $t_i = i$, $i = 1, \ldots, n$. Hence, $\Sigma_I$ …
7.4 Let $A$ be a symmetric positive definite $n \times n$ matrix and let $b$ be an $n$-vector. Consider the minimization problem: minimize $x^T A x$ such that $b^T x = 1$, over $x \in \mathbb{R}^n$. Show that the solution is given by $x = A^{-1} b / (b^T A^{-1} b)$. Hint: Using a Lagrange multiplier $\lambda$, minimize the unconstrained …
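This constrained quadratic program underlies the kriging weights in this chapter. A small sketch comparing the closed form with a brute-force numerical minimum (the matrix and vector below are arbitrary test data):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n = 4
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)   # symmetric positive definite
b = rng.standard_normal(n)

# Closed form: x = A^{-1} b / (b^T A^{-1} b).
Ainv_b = np.linalg.solve(A, b)
x_closed = Ainv_b / (b @ Ainv_b)

# Numerical check: minimize x^T A x subject to b^T x = 1.
res = minimize(lambda x: x @ A @ x, x0=np.ones(n),
               constraints={"type": "eq", "fun": lambda x: b @ x - 1})

print(np.allclose(x_closed, res.x, atol=1e-5))  # True
```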
7.3 (a) Let $U(t)$ be a one-dimensional stationary process with an unknown mean $\mu$ and with a covariance function $\sigma(h) = \sigma^2 \rho(h)$, where $\rho(h)$ is a specified correlation function. Consider predicting the value of the …
7.2 The easiest way to prove that $M^{-1}$ has the form in (7.46) is by rotating to the starred coordinates in Section 7.6.2. Show that $M^*$ and the stated form for $(M^*)^{-1}$ reduce to $M^* = \begin{bmatrix} \Omega^*_{11} & \Omega^*_{12} & F^*_1 \\ \Omega^*_{21} & \Omega^*_{22} & 0 \\ (F^*_1)^T & 0 & 0 \end{bmatrix}$, $(M^*)^{-1} = \begin{bmatrix} 0 & 0 & (F^*_1 \end{bmatrix}$ …
7.1 In the setting of ordinary kriging, show that the maximum likelihood estimator of $\mu$ based on data $x = [x_1, \ldots, x_n]^T$, also known as the generalized least squares estimator, is given by $\hat{\mu}$ in (7.27). Hence, show that the formula for the predictor $\hat{u}(t_0)$ in (7.28) is the same as $\lambda^T x$ …
6.11 Consider the Matérn process restricted to the integer lattice $t \in \mathbb{Z}^d$, and suppose data are observed on a rectangular region $D \subset \mathbb{Z}^d$ of size $n_1 \times \cdots \times n_d$. The purpose of this exercise is to look at the outfill asymptotics problem and in particular to show that the elements of …
6.10 Another circulant approximation to the covariance matrix for a circular CAR(1) model at $n$ sites is given by $\Sigma_{\mathrm{circ}} = \sigma^2 \operatorname{circ}(1, \phi, \phi^2, \ldots, \phi^m, \phi^m, \ldots, \phi^2, \phi)$, $n$ odd, $m = (n-1)/2$; $\Sigma_{\mathrm{circ}} = \sigma^2 \operatorname{circ}(1, \phi, \phi^2, \ldots, \phi^{m-1}, \phi^m, \phi^{m-1}, \ldots, \phi^2, \phi)$, $n$ …
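Circulant approximations like $\Sigma_{\mathrm{circ}}$ are attractive because the eigenvalues of a circulant matrix are the discrete Fourier transform of its first row, so the likelihood can be computed in $O(n \log n)$. A sketch for the odd-$n$ case above (the values of $n$, $\sigma^2$, and the autoregression parameter, written here as $\phi$, are illustrative):

```python
import numpy as np
from scipy.linalg import circulant

n = 9                 # n odd, m = (n - 1) / 2
sigma2, phi = 1.0, 0.4

# First row: sigma2 * (1, phi, ..., phi^m, phi^m, ..., phi), as in the exercise.
row = sigma2 * np.array([phi ** min(k, n - k) for k in range(n)])

Sigma_circ = circulant(row)

# Eigenvalues of a symmetric circulant = (real) DFT of its first row.
eig_fft = np.sort(np.fft.fft(row).real)
eig_np = np.sort(np.linalg.eigvalsh(Sigma_circ))

print(np.allclose(eig_fft, eig_np))  # True
```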
6.9 Information matrix for MLE from circular CAR(1) process with nugget effect. Following on from Exercise 6.7, include a nugget effect in the CAR(1) model. Recall that a CAR(1) process with regression parameter $\phi$ and conditional variance $\sigma_c^2$ can also be viewed as an AR(1) process with …
6.8 Information matrix for composite MLE from circular CAR(1) process. Consider again the circulant CAR(1) model of the previous exercise. The purpose of this exercise is to investigate the accuracy of the composite estimator of Section 6.7. The composite log-likelihood function is given by (6.64), …
6.7 Information matrix for MLE from circular CAR(1) process. The simplest nontrivial stationary Gaussian process on the line is the AR(1) model or, equivalently, the CAR(1) model. The spectral density of the covariance function has two equivalent representations, given in (6.17), i.e. …
6.6 Guyon (1982). This exercise looks in more detail at the unbiased sample covariance function used in Section 6.5. Suppose $D$ is a rectangular lattice of length $n$ in each direction, so that it contains $|D| = n^d$ data sites. For simplicity, consider the lag $h = [1\ 0\ \ldots\ 0]^T$, representing one step along …
6.5 A regression interpretation of the moment estimator for a CAR model. Let $D$ be a finite domain in $\mathbb{Z}^d$ and let $\mathcal{N}$ be a finite symmetric neighborhood of the origin, with half-neighborhood $\mathcal{N}^\dagger$. Let $D^\circ = \{t \in D : t + s \in D \text{ for all } s \in \mathcal{N}\}$, and let $y$ denote the vector $\{x_t : t \in$ …
6.4 Consider $n$ equally spaced data sites in one dimension from the exponential covariance function. The covariance matrix $\Sigma$ was given in (6.21). Show that the inverse of $\Sigma$ is given by $\Psi^{(\mathrm{exact})}$ in (6.22) by confirming that the matrix product reduces to the identity matrix, $\Sigma \Psi^{(\mathrm{exact})} = I_n$.
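For the exponential covariance at equally spaced sites, $\Sigma$ has entries proportional to $\rho^{|i-j|}$, the same structure as an AR(1) covariance matrix, and its inverse is tridiagonal. A numerical sketch of the tridiagonal structure (unit spacing with $\rho = e^{-1}$ is an assumption for illustration, not the book's (6.21)):

```python
import numpy as np

n = 6
rho = np.exp(-1.0)  # exp(-|h|) covariance at unit-spaced sites

# Sigma_ij = rho^{|i - j|}
idx = np.arange(n)
Sigma = rho ** np.abs(idx[:, None] - idx[None, :])

Psi = np.linalg.inv(Sigma)

# The inverse is tridiagonal: entries beyond the first off-diagonal vanish.
beyond_band = np.triu(Psi, k=2)
print(np.allclose(beyond_band, 0.0, atol=1e-10))  # True
```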
6.3 (a) Using the ideas from Sections 4.8.2 and 4.6.1, show that the AR(1) and CAR(1) models in Section 6.3 have the spectral densities given in (6.13) and (6.17). (b) Show that the spectral densities are the same as one another under the conditions in (6.18). Further show that the AR(1) and CAR(1) parameters in …
6.2 The quadratic form $Q(W)$ in the Whittle log-likelihood for a CAR model is specified as a linear combination of terms involving the biased sample covariance function. Show that it can also be expressed in terms of the periodogram as in (6.45). Hint: Recall or prove the one-dimensional result for $k$ …
6.1 Given observations $x_t$, $t \in D$, where $D$ is a rectangular region in $\mathbb{Z}^d$ as in (6.4), let $\bar{x}$ denote the sample mean of the data and define the centered …
5.7 The purpose of this exercise is to show that the two forms of the REML log-likelihood, (5.31) and (5.41), are the same, up to an additive constant not depending on the data or the parameters. Let $y$ be an $n$-vector assumed to come from $N_n(\mu \mathbf{1}_n, \Sigma)$, where $\Sigma$ is a positive definite matrix, …
5.6 (Marshall and Mardia, 1985) This exercise develops the principle of MINQUE for certain spatial processes for which the mean vanishes and the covariance function is linear in the unknown parameters. (a) Let $x \sim N_p(0, \Psi)$, where $\Psi = \sigma^2 A + \tau^2 I$. Here, $A$ is a known positive definite …
5.5 Consider the setting of Section 5.9 with $n = 3$. Let $X \sim N_3(0, \Sigma)$ and suppose the covariance matrix $\Sigma$ satisfies $\sigma_{11} = \sigma_{22} = \sigma_{33} = 1$. In the notation of this section, show that the coefficient vectors $\beta_i^T = (\beta_{i1}, \beta_{i2}, \beta_{i3})$ are given by $\beta_{11} = 1$, $\beta_{12} = 0$, $\beta_{13} =$ …
5.4 The $n \times n$ Helmert matrix $H$, $n \ge 2$, is an orthogonal matrix whose rows are defined as follows: ● For $j = 1, \ldots, n-1$, the $j$th row is given by $(1, \ldots, 1, -j, 0, \ldots, 0)/\sqrt{j(j+1)}$, where 1 is repeated $j$ times and 0 is repeated $n - j - 1$ times. ● The $n$th row is given by $(1, \ldots$ …
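A sketch constructing the Helmert matrix as defined in the exercise and checking orthogonality numerically. The truncated last row is completed here as $(1, \ldots, 1)/\sqrt{n}$, the standard convention, which is consistent with the visible start of the statement:

```python
import numpy as np

def helmert(n: int) -> np.ndarray:
    """Helmert matrix: rows j = 1..n-1 as in the exercise, constant last row."""
    H = np.zeros((n, n))
    for j in range(1, n):
        H[j - 1, :j] = 1.0          # 1 repeated j times
        H[j - 1, j] = -j            # then -j, then zeros
        H[j - 1] /= np.sqrt(j * (j + 1))
    H[n - 1] = 1.0 / np.sqrt(n)     # nth row: (1, ..., 1)/sqrt(n) (assumed)
    return H

H = helmert(5)
print(np.allclose(H @ H.T, np.eye(5)))  # True: H is orthogonal
```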
5.3 (a) Let $G = [G_1\ G_2]$ be an $n \times n$ orthogonal matrix partitioned into two blocks, with $n_1$ and $n_2$ columns, respectively, $n_1 + n_2 = n$, so that $G_1^T G_1 = I$, $G_2^T G_2 = I$, $G_1^T G_2 = 0$. Let $B$ be an $n \times n$ positive definite matrix, and set $B_{ij} = G_i^T B G_j$, $B^{ij} = G_i^T B^{-1} G_j$, $i, j = 1, 2$. Using the results …
5.2 This exercise looks at the regularity of the Matérn covariance function for small lags. This behavior is important for the study of infill asymptotics in Section 5.14. Suppose the real index $\nu$ is not a negative integer and let $z$ be a positive number. The modified Bessel function of the …
5.1 The spherical scheme is defined by the covariance function $\sigma(h; \sigma^2, a) = \begin{cases} \sigma^2\{1 - \tfrac{3}{2}|h/a| + \tfrac{1}{2}|h/a|^3\}, & |h| \le a, \\ 0, & |h| > a, \end{cases}$ which is positive definite in dimensions $d = 1, 2, 3$. The parameters are the scale parameter $\sigma^2$ and the range parameter $a$. Clearly, $\sigma(h)$ is smooth in $\sigma^2$, …
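A sketch evaluating the spherical covariance and checking positive semidefiniteness of the implied covariance matrix on a random set of sites in $d = 2$ (the site configuration and parameter values are arbitrary):

```python
import numpy as np

def spherical_cov(h, sigma2=1.0, a=1.0):
    """sigma2 * (1 - 1.5|h/a| + 0.5|h/a|^3) for |h| <= a, else 0."""
    r = np.abs(h) / a
    return np.where(r <= 1.0, sigma2 * (1 - 1.5 * r + 0.5 * r ** 3), 0.0)

rng = np.random.default_rng(3)
sites = rng.uniform(0, 3, size=(30, 2))
dists = np.linalg.norm(sites[:, None, :] - sites[None, :, :], axis=-1)

Sigma = spherical_cov(dists, sigma2=2.0, a=1.5)
# Valid in d <= 3, so the matrix should be positive semidefinite here.
print(np.linalg.eigvalsh(Sigma).min() >= -1e-10)  # True
```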
4.17 The purpose of this exercise is to show how a $\{0,1\}$-valued Markov mesh model can be recast as a Markov random field. For simplicity, restrict attention to the one-dimensional case and suppose that the joint distribution is built from the one-sided conditional distributions for $t > 0$, …
4.16 Verify the conditional distribution for $X_t \mid X_{\setminus t}$ in Example 4.9.
4.15 For the auto-logistic model (4.83), show that $\log \dfrac{p_t(1 \mid x_{\setminus t})}{p_t(0 \mid x_{\setminus t})} = \alpha_t + \sum_{s \in \mathcal{N}(t)} \beta_{st} x_s$, and hence deduce (4.84).
4.14 The purpose of this exercise is to confirm that Eqs. (4.74)–(4.76) imply (4.81). Let $x_T$ be a possible value of the random field and define $y_T$ by $y_t = 0$ and $y_{\setminus t} = x_{\setminus t}$. Writing $p_t(x_t \mid x_{\setminus t}) = p(x_t \mid x_{\setminus t})$, show that $\dfrac{p(x_t \mid x_{\setminus t})}{p(y_t \mid y_{\setminus t})} =$ …
4.13 The purpose of this exercise is to give some examples of cliques based on Figures 4.3 and 4.4. Let $T = \mathbb{Z}^2$ and consider the neighborhood structures generated by translates $\{\mathcal{N}^{(\mathrm{basic},1)}_t = t + \mathcal{N}^{(\mathrm{basic},1)},\ t \in \mathbb{Z}^2\}$ of the first-order basic neighborhood $\mathcal{N}^{(\mathrm{basic},1)}$ and by translates of …
4.12 Verify the proof of Theorem 4.9.2 for the subset expansion of a negative potential function.
4.11 (Brook expansion (Brook, 1964)). Verify the expansion (4.73) and hence confirm that the full conditional probability functions in (4.72) determine the joint probability function $p_T(x_T)$.
4.10 Consider the QAR model for a stationary Gaussian random field in Example 4.6. If $c = -ab$, show that the spectral density takes the form $f(\omega) = \sigma^2 |1 - a e^{i\omega[1]} - b e^{i\omega[2]} + ab\, e^{i(\omega[1]+\omega[2])}|^{-2} = \sigma^2 |1 - a e^{i\omega[1]}|^{-2} |1 - b e^{i\omega[2]}|^{-2}$. Hence, deduce that this …
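The second equality is the factorization $1 - ae^{i\omega[1]} - be^{i\omega[2]} + ab\,e^{i(\omega[1]+\omega[2])} = (1 - ae^{i\omega[1]})(1 - be^{i\omega[2]})$, so with $c = -ab$ the QAR spectral density separates into row and column AR(1) factors. A one-line numerical confirmation on a grid of frequencies (parameter values are arbitrary):

```python
import numpy as np

a, b = 0.5, -0.3
w1, w2 = np.meshgrid(np.linspace(-np.pi, np.pi, 101),
                     np.linspace(-np.pi, np.pi, 101))

joint = np.abs(1 - a * np.exp(1j * w1) - b * np.exp(1j * w2)
               + a * b * np.exp(1j * (w1 + w2))) ** -2
product = (np.abs(1 - a * np.exp(1j * w1)) ** -2
           * np.abs(1 - b * np.exp(1j * w2)) ** -2)

print(np.allclose(joint, product))  # True
```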
4.9 Consider random variables $X_{ij}$, $i, j \in \mathbb{Z}$, such that (a) for each $i$, the random variables $\{X_{ij},\ j \in \mathbb{Z}\}$ follow an AR(1) process with parameter $\phi$, so that $E\{X_{ij} \mid X_{ij'} : j' < j\} = \phi X_{i,j-1}$. …
4.8 Verify Eq. (4.68) to show that the three types of conditioning on the past are equivalent for a Gaussian QAR model: in each case $E[X_t \mid (X_{t-s},\ s \in \mathcal{S})] = \sum_s a_s X_{t-s}$, with the same coefficients $a_s$ for each of the three choices of past region $\mathcal{S}$. Hint: For simplicity, work in $d = 2$ …
4.7 Regarding $\mathbb{Z}^2 \subset \mathbb{R}^2$ as part of a complex plane $\mathbb{C}$, fix an angle $\theta$ and set $\mathcal{H} = \{t \in \mathbb{Z}^2 : t \neq 0,\ \theta \le \arg t < \theta + \pi\}$, where $\arg t$ is shorthand for $\arg(t[1] + i\, t[2])$. Note that $\arg t$ and $\theta$ are angles; if they are treated as numbers in $[0, 2\pi)$, then the above angular …
4.6 Nonuniqueness of the mean for a CAR model. Let $\{X_t\}$ be a stationary AR(1) process in one dimension, with mean $\mu$ and autoregression parameter $0 <$ …
4.5 In one dimension, consider a stationary CAR model (4.31), $E[X_t \mid X_{\setminus t}] = \sum_{s=-S,\ s \neq 0}^{S} \phi_s X_{t-s}$, $\operatorname{var}[X_t \mid X_{\setminus t}] = \sigma_c^2$, where $1 - \tilde{\phi}(\omega) = 1 - 2\sum_{s=1}^{S} \phi_s \cos s\omega \neq 0$ for all $\omega \in [-\pi, \pi]$. Show that this process can be given a unilateral …
4.4 Consider a $d$-dimensional AR model for a stationary Gaussian random field $\{X_t\}$ given by $\sum_{s \in K} d_s X_{t-s} = \varepsilon_t$, $t \in \mathbb{Z}^d$, where $\{\varepsilon_t\}$ is white noise and $K \subset \mathbb{Z}^d$ is a finite set. This framework includes both SARs (Section 4.5) and UARs (Section 4.8). Define a new sequence $\{a_s\}$ with terms …
4.3 In $d = 1$ dimension, consider the SAR model $\sum_{s=-S}^{S} d_s X_{t-s} = \varepsilon_t$, where $d_0 > 0$, $d_s = d_{-s}$, and $\{\varepsilon_t\}$ is a white noise process. Assume that $\sum d_s e^{i\omega s} \neq 0$ for all $\omega \in [-\pi, \pi)$. Show that it is possible to find a unilateral representation $\sum_{s=0}^{2S} a_s X_{t-s} = \varepsilon'_t$, in …
4.2 Consider the following two autoregressions in one dimension: (i) $6X_t - 5X_{t-1} + X_{t-2} = \varepsilon_t$, (ii) $2X_t - 7X_{t-1} + 3X_{t-2} = \varepsilon_t$, where in each case $\{\varepsilon_t\}$ is a white noise process with mean 0 and variance 1. (a) Show that the spectral density for the two autoregressions is given by the …
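These two autoregressions are a standard identifiability example: $6 - 5z + z^2 = (2 - z)(3 - z)$ and $2 - 7z + 3z^2 = (2 - z)(1 - 3z)$, and $|3 - e^{i\omega}|^2 = |1 - 3e^{i\omega}|^2 = 10 - 6\cos\omega$, so both models have the same spectral density. A quick numerical check:

```python
import numpy as np

w = np.linspace(-np.pi, np.pi, 201)
z = np.exp(1j * w)

# Spectral densities up to the common 2*pi normalization.
f1 = 1 / np.abs(6 - 5 * z + z ** 2) ** 2
f2 = 1 / np.abs(2 - 7 * z + 3 * z ** 2) ** 2

print(np.allclose(f1, f2))  # True: the two ARs are spectrally identical
```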
4.1 (Tower rule for conditional expectations). Let $(U, V_1, V_2, \ldots, V_n)$ denote a collection of jointly distributed random variables. The notation $E[U \mid v_1, \ldots, v_n]$ denotes the expected value of $U$ conditional on $(V_1, \ldots, V_n)$ taking the values $(v_1, \ldots, v_n)$. The notation $E[U \mid V_1, \ldots, V_n] = W$, …
3.20 The purpose of this exercise is to show that negativity of the dispersion variance $\sigma^2(V_1 \mid V_2)$ in Exercise 3.19 cannot arise under the additional assumption that $V_2$ can be partitioned as a union of copies of $V_1$. In this case, verify the continuous analysis of variance identity in (3.63). If …
3.19 The purpose of this exercise is to construct a counterexample showing that it is possible that $\sigma^2(V_1 \mid V_2) < 0$ even when $V_1 \subset V_2$. Let $X(t) = \sqrt{2} \cos(t + \Phi)$ denote a random cosine wave in one dimension, $t \in \mathbb{R}^1$, where $\Phi$ is uniformly distributed on $[0, 2\pi)$. This process is stationary …
3.18 (a) Show that the dispersion variance $\sigma^2(0 \mid V)$ as defined in Eq. (3.60) continues to make sense for an intrinsic random field with semivariogram $\gamma(h)$, and is given in this case by $\sigma^2(0 \mid V) = |V|^{-2} \int\!\!\int \gamma(s - t)\, ds\, dt$. To verify this formula, it is helpful to expand (3.60) …
3.17 Let $V$ be an open bounded region in $\mathbb{R}^d$. If $\{X(t)\}$ is a stationary random field with covariance function $\sigma(h)$, define the (continuous) sample mean within $V$ by $\bar{X}(V) = |V|^{-1} \int_V X(t)\, dt$. Show that its variance is given by $\operatorname{var}\{\bar{X}(V)\} = |V|^{-2} \int_V \int_V \sigma(s - t)\, ds\, dt$. By expanding the …
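For the exponential covariance $\sigma(h) = e^{-|h|}$ on $V = [0, 1]$ (an illustrative choice), the double integral has the closed form $2(L - 1 + e^{-L})$ with $L = 1$, i.e. $2/e$. A sketch approximating $\operatorname{var}\{\bar{X}(V)\}$ by the variance of a fine grid average, whose covariance sum is a Riemann approximation of the double integral:

```python
import numpy as np

n = 1000
t = (np.arange(n) + 0.5) / n          # fine midpoint grid on V = [0, 1]
Sigma = np.exp(-np.abs(t[:, None] - t[None, :]))

# var of the grid average (1/n) sum_i X(t_i) = (1/n^2) sum_ij sigma(t_i - t_j),
# a Riemann approximation of |V|^{-2} * integral integral sigma(s - t) ds dt.
var_grid = Sigma.sum() / n ** 2

print(var_grid, 2 / np.e)  # both ~ 0.7358
```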
3.16 The purpose of this exercise is to investigate the extent to which Exercise 3.15 can be extended to other values of $\nu$. (a) If $g(h)$ is a function of $h \in \mathbb{R}^d$, the Laplacian is defined by $\Delta g(h) = \sum_{\ell=1}^{d} \partial^2 g(h)/\partial h[\ell]^2$. If $g(h) = g^\#(r)$, say, with $|h| = r$, is isotropic, show that $\Delta g(h) = (g^\#)''(r) + \frac{d-1}{r}(g^\#)'(r)$, where the dash denotes …
3.15 Consider the function $f_{\nu,d}(\omega) = |\omega|^{-d-2\nu}$, $\omega \in \mathbb{R}^d$, where $\nu \in \mathbb{R}$ is a real parameter. Then $f_{\nu,d}$ can be viewed as the spectral density of an isotropic stochastic process on $\mathbb{R}^d$ (ordinary intrinsic if $\nu > 0$, generalized stationary if $\nu < 0$, and generalized … In Exercise 3.10, it was noted that the corresponding process is self-similar for all values of $\nu$. The purpose of this exercise is to confirm the representation (3.45) for $0 <$ …
3.14 Repeat Exercise 3.12 with the same test function $\phi(h)$, but this time using the de Wijsian generalized intrinsic covariance function $\sigma_{GI}(h) = -\log h$, $h > 0$, where $\sigma_{GI}(h)$ is defined up to an additive constant. As before, define the regularized intrinsic covariance function $\sigma_{I,\phi}(h)$ …
3.13 Repeat Exercise 3.12 with the same test function $\phi(h)$, but this time using the linear intrinsic covariance function $\sigma_I(h) = -|h|$, where $\sigma_I(h)$ is defined up to an additive constant. As before, define the regularized intrinsic covariance function $\sigma_{I,\phi}(h)$ by (3.67). (a) Show …
3.12 In one dimension, $d = 1$, consider a stationary Gaussian random field $X(t)$ with the exponential covariance function $\sigma(h) = \exp(-|h|)$. Consider the indicator test function $\phi(u) = I[|u| \le 1/2]$. (a) Show that $\phi(u)$ has the Fourier transform $\tilde{\phi}(\omega) = (2/\omega)\sin(\omega/2)$, $\omega \neq$ …
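A numerical sketch confirming the stated Fourier transform of the indicator test function, using the convention $\tilde{\phi}(\omega) = \int \phi(u) e^{i\omega u}\, du$ (the sign convention is an assumption; since this $\phi$ is even, either sign gives the same real answer):

```python
import numpy as np
from scipy.integrate import quad

def phi_tilde(w: float) -> float:
    """Fourier transform of I[|u| <= 1/2] by direct numerical integration."""
    re, _ = quad(lambda u: np.cos(w * u), -0.5, 0.5)  # imaginary part vanishes
    return re

for w in [0.5, 1.0, 3.0, 10.0]:
    print(np.isclose(phi_tilde(w), 2 * np.sin(w / 2) / w))  # True
```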
3.11 Let $D$ be a bounded region in $\mathbb{R}^d$ and let $\phi(t) = I[t \in D]$ be an indicator function on $D$. Show that the Fourier transform $\tilde{\phi}(\omega)$ of $\phi$ satisfies the following: (a) $\tilde{\phi}(\omega)$ is bounded for all $\omega$. (b) $\int |\tilde{\phi}(\omega)|^2\, d\omega < \infty$. Hint: (a) $|\tilde{\phi}(\omega)| = |\int$ …
3.10 The purpose of this exercise is to confirm that the registered intrinsic random field $X_R(t)$ in (3.20) has the covariance function $\sigma_R(s, t)$ in (3.21).
3.9 Show that $\mathcal{H}_k$, the space of homogeneous polynomials of degree $k$ in $\mathbb{R}^d$, and $\mathcal{F}_k$, the space of all polynomials of degree $\le k$ in $\mathbb{R}^d$, have dimensions $p_H(k) = \dim(\mathcal{H}_k) = \binom{k+d-1}{k}$, $p_F(k) = \dim(\mathcal{F}_k) = \binom{k+d}{k}$, as stated in (3.16). Hint: This exercise can be done using a simple …
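A sketch verifying the stated dimensions by brute-force enumeration of monomial exponent vectors, since monomials form a basis of each space (the exercise asks for a proof; this only spot-checks the formulas):

```python
from itertools import product
from math import comb

def dim_homogeneous(k: int, d: int) -> int:
    """Count monomials x^a with |a| = k in d variables."""
    return sum(1 for a in product(range(k + 1), repeat=d) if sum(a) == k)

def dim_full(k: int, d: int) -> int:
    """Count monomials x^a with |a| <= k in d variables."""
    return sum(1 for a in product(range(k + 1), repeat=d) if sum(a) <= k)

for k, d in [(2, 2), (3, 2), (2, 3), (4, 3)]:
    assert dim_homogeneous(k, d) == comb(k + d - 1, k)
    assert dim_full(k, d) == comb(k + d, k)
print("dimensions match the binomial formulas")
```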
3.8 (a) If $\sigma(s, t)$ is an ordinary continuous positive semidefinite function, show that $\sigma_G(\phi, \psi) = \int\!\!\int \phi(s)\, \psi(t)\, \sigma(s, t)\, ds\, dt$ defines a positive semidefinite generalized bilinear functional. (b) If $\{X(t)\}$ is an ordinary random field with covariance function $\sigma(s, t)$, show that …
3.7 If $g(h)$ satisfies the integral representation (3.11) with $A = 0$, show that $g(h)/|h|^2 \to 0$ as $|h| \to \infty$.
3.6 If $f(s)$ is an arbitrary finite real-valued function of $s \in \mathbb{R}^d$, show that $\sigma(s, t) = f(s)f(t)$ is positive semidefinite. Hint: For coefficients $a_1, \ldots, a_n$ and sites $t_1, \ldots, t_n$, note the identity $\sum a_i a_j f(t_i) f(t_j) = \{\sum a_i f(t_i)\}^2 \ge 0$.
3.5 Let $\{X_I(t) : t \in \mathbb{R}^d\}$ be an intrinsic random field (of intrinsic order $k = 0$) with semivariogram $\gamma(h)$. Given a fixed vector $h_0 \in \mathbb{R}^d$, define a new random field by $Y(t) = X_I(t) - X_I(t - h_0)$. Using (3.4), show that $\{Y(t)\}$ is stationary and that its covariance function takes the form …
3.4 If a semivariogram $\gamma(h)$ is replaced by $\gamma_c(h) = \gamma(h) + c$ for any constant $c \in \mathbb{R}$, show that formulae (3.4) and (3.5) remain valid.
3.3 If $-\gamma(h)$ is a conditionally positive semidefinite function with $\gamma(0) = 0$, show that $\sigma(s, t) = \gamma(s) + \gamma(t) - \gamma(t - s)$ is positive semidefinite. Further show that $\sigma(s, t)$ represents the covariance function of an intrinsic random field with semivariogram $\gamma(h)$. Hint: …
3.2 Verify the formula $\operatorname{var}\{\sum_{i=1}^{n} a_i X_I(t_i)\} = -\sum_{i,j=1}^{n} a_i a_j \gamma(t_i - t_j)$ for an intrinsic random field $\{X_I(t)\}$ of order 0 with semivariogram $\gamma(h)$, where $a$ ($n \times 1$) is an increment vector of order 0, i.e. $\sum a_i = 0$. Hint: Write $\sum a_i X_I(t_i) = \sum a_i [X_I(t_i) - X_I(t_0)]$ for any other site …
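A numerical sketch of this identity using standard Brownian motion, which is intrinsic of order 0 with semivariogram $\gamma(h) = |h|/2$ and covariance $\operatorname{cov}(X_s, X_t) = \min(s, t)$ (the sites and increment vector below are arbitrary, subject to $\sum a_i = 0$):

```python
import numpy as np

t = np.array([0.3, 1.0, 2.5, 4.0])
a = np.array([1.0, -2.0, 0.5, 0.5])   # increment vector: sums to zero
assert np.isclose(a.sum(), 0.0)

# Exact variance from the Brownian covariance min(s, t).
C = np.minimum.outer(t, t)
var_exact = a @ C @ a

# Semivariogram formula: -sum_ij a_i a_j gamma(t_i - t_j), gamma(h) = |h|/2.
gamma = np.abs(t[:, None] - t[None, :]) / 2
var_gamma = -a @ gamma @ a

print(np.isclose(var_exact, var_gamma))  # True
```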
3.1 Let $\gamma(h)$, $h \in \mathbb{R}^d$, be a valid semivariogram and suppose it tends to a finite limit for large lags, $\gamma(h) \to c$ as $|h| \to \infty$. Show that $\sigma(h) = c - \gamma(h)$ defines a valid covariance function $\sigma(h)$ with marginal variance $\sigma(0) = c$. Hint: Let $\{X_I(t)\}$ denote an intrinsic process with …