Tensor Methods in Statistics (Monographs on Statistics and Applied Probability), 1st Edition, Peter McCullagh - Solutions
In the notation of Section 4.7.2, show that, when third- and higher-order cumulants are neglected, the cubes of the least squares residuals have covariance matrix $\mathrm{cov}(R^iR^jR^k,\, R^lR^mR^n)$ given by $\kappa_2^3\{\rho^{i,j}\rho^{k,l}\rho^{m,n}[9] + \rho^{i,l}\rho^{j,m}\rho^{k,n}[6]\}$, here taken to be of order $n^3 \times n^3$. Show that, if ν
Deduce from the previous exercise that if the vector having components $\rho^{i,i}$ lies in the column space of the model matrix X, then $l_3 \equiv k_3$. More generally, prove that if the constant vector lies in the column space of X, then $n^{1/2}(l_3 - k_3) = O_p(n^{-1})$ for large n under suitably mild limiting
Suppose $n = 4$, $p = 1$, $y = (1.2, 0.5, 1.3, 2.7)$, $x = (0, 1, 2, 3)$. Using the results in the previous two exercises, show that $k_2 = 0.5700$, $k_3 = 0.8389$ with variance $6.59\sigma^6$ under normality, and $l_3 = 0.8390$ with variance $4.55\sigma^6$ under normality. Compare with Anscombe (1961, p. 15) and Pukelsheim (1980,
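The stated values can be checked numerically. A minimal sketch, assuming the residual-based estimates of Section 4.7.2 take the form $k_2 = \mathrm{RSS}/(n-p)$ and $k_3 = \sum_i R_i^3 / \sum_{i,l}\rho_{il}^3$, with ρ the residual projection matrix (these formulae are my reading of the exercise, not quoted from the book):

```python
# Numerical check of the claimed values (a sketch; k2 and k3 here are the
# residual-based cumulant estimates: k2 = RSS/(n-p) and
# k3 = sum_i R_i^3 / sum_{i,l} rho_{il}^3, where rho = I - X(X^T X)^{-1}X^T).
import numpy as np

y = np.array([1.2, 0.5, 1.3, 2.7])
X = np.array([[0.0], [1.0], [2.0], [3.0]])   # n = 4, p = 1, no intercept
n, p = X.shape

# Residual projection matrix rho = I - X(X^T X)^{-1} X^T
rho = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)
R = rho @ y                                   # least squares residuals

k2 = (R @ R) / (n - p)                        # unbiased estimate of kappa_2
k3 = np.sum(R**3) / np.sum(rho**3)            # unbiased estimate of kappa_3

print(round(k2, 4), round(k3, 4))             # 0.57 0.8389
```

With these definitions the computed values agree with the exercise's $k_2 = 0.5700$ and $k_3 = 0.8389$, which supports the reading above.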
Repeat the calculations of Exercise 4.22, but now for $l_4$ and $l_{22}$, the optimal unbiased estimates of the fourth cumulant and the square of the second cumulant. Show that $l_4$ and $l_{22}$ are linear combinations of Deduce that, in the quadratically balanced case, $k_4$ and $k_{22}$ are the optimal unbiased
Let Y have components $Y^i$ satisfying $E(Y^i) = \mu^i = \omega^{\alpha;i}\beta_\alpha$ or, in matrix notation, $E(Y) = X\beta$, where X is $n \times q$ of rank q. Let $\omega^{i,j}, \omega^{i,j,k}, \ldots$ be given tensors such that $Y^i - \mu^i$ has cumulants $\kappa_2\omega^{i,j}$, $\kappa_3\omega^{i,j,k}$ and so on. The first-order interaction matrix, $X^*$, is obtained by appending
Justify the claims made in Section 4.7.1 that interaction and replication are not invariant under the general linear group (4.25). Show that these concepts are preserved under the permutation group.
In the notation of Section 4.7.2, let be the fitted value and the residual respectively. Define the derived statistics Show that Show also, under the usual normal theory assumptions, that conditionally on the fitted values,
A population of size N = N0 + N1 comprises N1 unit values and N0 zero values. Show that the population Ks are Hence derive the first four cumulants of the central hypergeometric distribution (Barton, David & Fix, 1960).
By considering the sample covariance matrix of the observed variables and their pairwise products, show that $k^{rs,tu} - k^{rs,i}k^{tu,j}k_{i,j}$,
Using the multiplication formulae given in the previous two exercises, derive the finite population joint cumulants listed in Table 4.3.
Find the mean value of the following expressions: $\sum_i Y_i^r Y_i^s Y_i^t$, $\sum_{i\neq j} Y_i^r Y_j^s Y_j^t$, and $\sum_{i\neq j\neq k} Y_i^r Y_j^s Y_k^t$. Hence find the symmetric functions that are unbiased estimates of $\kappa^{rst}$, $\kappa^r\kappa^{st}$ and $\kappa^r\kappa^s\kappa^t$. By combining these estimates in the appropriate way, find the symmetric
Show that the expressions for ordinary k-statistics in terms of power sums are Kaplan (1952).
Show that $\sum_{ij}(\phi^{ij})^2 = n^2/(n-1)$. Hence derive the following joint cumulants of k-statistics: Show explicitly that the term $\kappa^{r,t,u}\kappa^{s,v}$ does not appear in the third cumulant, $\kappa_k(r|s|t,u,v)$.
By considering separately the five distinct index patterns, show that
Using the pattern functions given in Table 4.1, derive the following joint cumulants (Kaplan, 1952). Compare these formulae with those given in Table 4.5 for finite populations.
Show that joint cumulants of ordinary k-statistics in which the partition contains a unit block, e.g. $\kappa_k(r|s,t|u,v,w)$, can be found from the expression in which the unit block is deleted, by dividing by n and adding the extra index at all possible positions. Hence, using the expression given
Show that as n → ∞, every array of k-statistics of fixed order has a limiting joint normal distribution when suitably standardized. Hence show in particular, that, for large n,
Under the assumptions of the previous exercise, show that $n\,\mathrm{cov}(k^{i,j,k}, k^{r,s,t}) \to \kappa^{i,r}\kappa^{j,s}\kappa^{k,t}[3!]$ as $n \to \infty$. Hence show that, for large n,
By considering the symmetric group, i.e. the group comprising all n ×n permutation matrices, acting on Y1,…,Yn, show that every invariant polynomial function of degree k is expressible as a linear combination of polykays of degree k.
Show that $\sum_{i,j=1}^n Y_i^r Y_j^s = n k^{rs} + n(n-1)k^{(r)(s)}$. Hence deduce that, under simple random sampling, the average value over all samples of $n^{-2}\sum_{i,j=1}^n Y_i^r Y_j^s$ is $n^{-1}K^{rs} + (1 - n^{-1})K^{(r)(s)} = n^{-1}K^{r,s} + K^{(r)(s)}$, while the same function calculated in the population is $N^{-2}$ N
Derive the following multiplication formulae for k-statistics and polykays
Derive the following multiplication formulae for k-statistics and polykays
In a population consisting of the first N natural numbers, show that the population k-statistics are
Let X be a normal random variable with mean vector $\lambda^r$ and covariance matrix $\lambda^{r,s}$. Define $h_r = h_r(x;\lambda), h_{rs}(x;\lambda), \ldots$ to be the Hermite tensors based on the same normal distribution, i.e., and so on as in (5.7). Show that the random variables $h_r(X), h_{rs}(X), h_{rst}(X), \ldots$ have zero mean and
Show that, for each θ in Ξ, $\exp(\theta_i x^i - K(\theta))f_X(x)$ is a distribution on $\mathcal{X}$. Find its cumulant generating function and the Legendre transformation.
Show that the Legendre transformation of Y = X1 +… + Xn is nK*(y/n), where the Xs are i.i.d. with Legendre transformation K*(x).
From the entropy limit (6.6), deduce the law of large numbers.
Prove the following extension of inequality (6.4) for vector-valued X: $\mathrm{pr}(X \in A) \le \exp\{I(A)\}$, where $I(A) = \sup_{x\in A}\{-K^*(x)\}$.
Prove that $K^*(x) \ge 0$, with equality only if $x^i = \kappa^i$.
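The nonnegativity claim is easy to see numerically. A sketch using the Poisson family as an illustrative choice (my choice, not the exercise's): $K(\xi) = \mu(e^\xi - 1)$ has Legendre transform $K^*(x) = x\log(x/\mu) - x + \mu$, which is nonnegative and vanishes at the mean.

```python
# Sketch: K*(x) = sup_xi {xi*x - K(xi)} for a Poisson(mu) variable, where
# K(xi) = mu*(exp(xi) - 1).  The closed form is x*log(x/mu) - x + mu, which
# is >= 0 with equality only at x = mu = kappa^1.  (Illustrative family, and
# the sup is taken crudely over a grid.)
import numpy as np

mu = 2.0
K = lambda xi: mu * (np.exp(xi) - 1.0)
xi_grid = np.linspace(-5, 5, 200001)

def K_star(x):
    return np.max(xi_grid * x - K(xi_grid))   # crude sup over a grid

for x in [0.5, 1.0, 2.0, 3.0, 5.0]:
    closed = x * np.log(x / mu) - x + mu
    assert abs(K_star(x) - closed) < 1e-4     # grid sup matches the closed form
    assert K_star(x) >= 0.0                   # K*(x) >= 0 (xi = 0 gives value 0)

print(round(K_star(mu), 6))                   # 0.0 at the mean: the equality case
```

Taking $\xi = 0$ in the supremum already gives $K^*(x) \ge 0$ for every $x$, which is the one-line version of the proof asked for.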
Prove directly that $K_{rs}(\xi)$ is positive definite for each ξ in Ξ. Hence deduce that $K^*(x)$ is a convex function on $\mathcal{X}$.
By using Hölder's inequality, show for any $0 \le \lambda \le 1$ that $K(\lambda\xi_1 + (1-\lambda)\xi_2) \le \lambda K(\xi_1) + (1-\lambda)K(\xi_2)$, proving that $K(\xi)$ is a convex function.
Show that the array $M^{ij}(\xi) = E\{X^iX^j \exp(\xi_r X^r)\}$ is positive definite for each ξ. Hence deduce that the function $M(\xi)$ is convex. Under what conditions is the inequality strict?
Beginning with the canonical coordinate system introduced at (8.17), transform from θ to ϕ with components $\phi^r = K^r(\theta) + \nu^{r,s,t,u}\theta_s\theta_t\theta_u/2 + \nu^{r,s,t,u}\theta_s\nu_{t,u}/2$. Show that, although $E(U^r;\theta) \neq \phi^r$, nevertheless $\hat\phi^r = U^r$. Show also that the observed information determinant with respect to
Let $X_r, X_{rs}, X_{rst}, \ldots$ be a sequence of arrays of arbitrary random variables. Such a sequence will be called triangular. Let the joint moments and cumulants be denoted as in Section 7.2.1 by and so on. Now write $\mu[\ldots]$ and $\kappa[\ldots]$ for the sum over all partitions of the subscripts as follows and so
Give a probabilistic interpretation of μ[…] and κ[…] as defined in the previous exercise.
Give the inverse formulae for $\mu[\ldots]$ in terms of $\kappa[\ldots]$.
By first examining the derivatives of the null moments of log likelihood derivatives, show that the derivatives of the null cumulants satisfy State the generalization of this result that applies to (i) cumulants of arbitrary order (Skovgaard, 1986a); (ii) derivatives of arbitrary order. Hence derive
Using expansions (7.3) for the non-null moments, derive expansions (7.4)for the non-null cumulants.
Express the four equations (7.6) simultaneously using matrix notation in the form $U = BV$ and give a description of the matrix B. It may be helpful to define $\beta_i^r = \delta_i^r$.
Show that equations (7.7) may be written in the form U = AU where A has the same structure as B above.
From the definition $K^*_Y(y) = \sup_\xi\{\xi_i y^i - K_Y(\xi)\}$, show that the Legendre transformation is invariant under affine transformation of coordinates on $\mathcal{X}$.
By writing $\xi_i$ as a polynomial in z, $\xi_i = a_{ir}z^r + a_{irs}z^rz^s/2! + a_{irst}z^rz^sz^t/3! + \cdots$, solve the equation $\kappa^{r,i}\xi_i + \kappa^{r,i,j}\xi_i\xi_j/2! + \kappa^{r,i,j,k}\xi_i\xi_j\xi_k/3! + \cdots = z^r$ by series reversal. Hence derive expansion (6.12) for $K^*(x)$.
Consider the conjugate density $f_X(x;\theta)$ as given in the previous exercise, where $K(\theta)$ is the cumulant generating function for $f_0(x)$ and $K^*(x)$ is its Legendre transform. Show that $E_\theta\{\log(f_X(X;\theta)/f_0(X))\} = K^*(E_\theta(X))$, where $E_\theta(\cdot)$ denotes expectation under the conjugate density. [In this
Let X be a random variable with density function $f_X(x;\theta) = \exp\{\theta_i x^i - K(\theta)\}f_0(x)$ depending on the unknown parameter θ. Let θ have Jeffreys's prior density $\pi(\theta) = |K_{rs}(\theta)|^{1/2}$. Using Bayes's theorem, show that the posterior density for θ given x is approximately $\pi(\theta|x) \simeq c$
By using Taylor expansions for S(x) and T(x) in (6.15), show that, for normal deviations, the tail probability (6.15) reduces to $1 - \Phi(T) + \phi(T)\left(-\frac{\rho_3}{6} + \frac{5\rho_3^2 - 3\rho_4}{24}T\right) + O(n^{-3/2})$. Hence deduce (6.14).
Using (6.11), show that the Legendre transformation $K^*(x;\theta)$ of the exponentially tilted density satisfies the partial differential equations Hence show that in the univariate case, $K^*(x;\theta) = \int_\mu^x \frac{x-t}{v(t)}\,dt$, where $\mu = K'(\theta)$ and $v(\mu) = K''(\theta)$ (Wedderburn, 1974; Nelder &
Using the asymptotic expansion for the normal tail probability $1 - \Phi(x) \simeq \phi(x)/x$ as $x \to \infty$, and taking $x > E(X)$, show, using (6.14), that $n^{-1}\log\,\mathrm{pr}\{\bar X_n > x\} \to -K^*(x)$ as $n \to \infty$, where $\bar X_n$ is the average of n independent and identically distributed random variables. By retaining further
In the notation of Exercise 6.18, show that the second derivative of K*(x, 1 − x; 1/2) at x = 1/2 is 8, whereas the conditional variance of X1 given that X1+ X2 = 1 is 1/12. Hence deduce that the double saddlepoint approximation to the conditional density of X1 is not the same as applying the
Extend the results described in the previous exercise to gamma random variables having mean μ and indices ν1, ν2. Replace X̄ by an appropriately weighted mean.
Let $X_1, X_2$ be independent exponential random variables with common mean μ. Show that the Legendre transformation of the joint cumulant generating function is $K^*(x_1, x_2; \mu) = \frac{x_1 + x_2 - 2\mu}{\mu} - \log\left(\frac{x_1}{\mu}\right) - \log\left(\frac{x_2}{\mu}\right)$. Show also that the Legendre transformation of the cumulant generating
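The stated transform can be confirmed directly: the joint CGF is $K(\xi_1,\xi_2) = -\log(1-\mu\xi_1) - \log(1-\mu\xi_2)$ for $\xi_j < 1/\mu$, and the supremum of $\xi\cdot x - K(\xi)$ is attained at $\xi_j = 1/\mu - 1/x_j$. A minimal numerical sketch:

```python
# Sketch verifying the stated Legendre transform for two independent
# exponentials with common mean mu.  K(xi1, xi2) = -log(1 - mu*xi1) - log(1 - mu*xi2),
# and the supremum of xi.x - K(xi) is attained at xi_j = 1/mu - 1/x_j.
import numpy as np

def K_star_closed(x1, x2, mu):
    return (x1 + x2 - 2 * mu) / mu - np.log(x1 / mu) - np.log(x2 / mu)

def K_star_direct(x1, x2, mu):
    xi1 = 1 / mu - 1 / x1            # stationary point of xi1*x1 + log(1 - mu*xi1)
    xi2 = 1 / mu - 1 / x2
    return xi1 * x1 + xi2 * x2 + np.log(1 - mu * xi1) + np.log(1 - mu * xi2)

mu = 1.5
for x1, x2 in [(1.0, 2.0), (0.5, 0.5), (3.0, 1.5)]:
    assert abs(K_star_closed(x1, x2, mu) - K_star_direct(x1, x2, mu)) < 1e-12

print("closed form matches the saddlepoint evaluation")
```

Substituting the stationary point gives $\xi_j x_j = x_j/\mu - 1$ and $\log(1-\mu\xi_j) = -\log(x_j/\mu)$, which reproduces the displayed formula term by term.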
Show that, in the case of the binomial distribution with index m and parameter π, the Legendre transformation is $y\log\left(\frac{y}{\mu}\right) + (m-y)\log\left(\frac{m-y}{m-\mu}\right)$, where $\mu = m\pi$. Hence show that the saddlepoint approximation is $\rho^2_{13}(\hat\theta) = \rho^{*2}_{13}(x)$, $\rho^2_{23}(\hat\theta) = \rho^{*2}_{23}(x)$, $\rho_4(\hat\theta) =$
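The binomial Legendre transform can be exercised numerically via the standard saddlepoint density formula $\hat f(y) = \exp\{-K^*(y)\}/\sqrt{2\pi K''(\hat\theta)}$ with $K''(\hat\theta) = y(m-y)/m$. This particular display is the usual textbook form, assumed here because the exercise's own display is truncated:

```python
# Sketch: the stated Legendre transform for Binomial(m, pi), plugged into the
# standard saddlepoint density f(y) ~ exp{-K*(y)} / sqrt(2*pi*K''(theta_hat))
# with K''(theta_hat) = y*(m - y)/m, compared against the exact pmf.
import math

def K_star(y, m, pi):
    mu = m * pi
    return y * math.log(y / mu) + (m - y) * math.log((m - y) / (m - mu))

def saddlepoint_pmf(y, m, pi):
    return math.exp(-K_star(y, m, pi)) / math.sqrt(2 * math.pi * y * (m - y) / m)

m, pi = 10, 0.3
for y in range(2, m - 1):                     # central range; ends are less accurate
    exact = math.comb(m, y) * pi**y * (1 - pi)**(m - y)
    approx = saddlepoint_pmf(y, m, pi)
    assert abs(approx / exact - 1) < 0.06     # within ~6% over this range

print(round(saddlepoint_pmf(3, m, pi), 4))
```

The relative error here is exactly the Stirling error in the binomial coefficient, which is why the approximation degrades toward $y = 0$ and $y = m$.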
Show that $Y^r$ defined in the previous exercise has third cumulant of order $O(n^{-3/2})$ and fourth cumulant of order $O(n^{-1})$. Hence show that $2nK^*(\bar X_n)$ has a non-central $\chi^2_p$ distribution for which the rth cumulant is $\{1 + b/n\}^r 2^{r-1}(r-1)!\,p + O(n^{-2})$. Find an expression for b in terms of the
Show that if $Z^r = \bar X^r_n - \kappa^r$ and then $Y = O_p(1)$ and $2nK^*(\bar X_n) = Y^rY^s\kappa_{r,s} + O(n^{-2})$.
By using the expansion for $\xi_i$ given in Section 6.2.4, show that the maximum likelihood estimate of θ based on $\bar X_n$ in the exponential family (6.14) has bias $E(n^{1/2}\hat\theta^r) = -\tfrac{1}{2}n^{-1/2}\kappa_{i,j,k}\kappa^{i,r}\kappa^{j,k} + O(n^{-3/2})$.
Using the results given in the previous exercise, show, using the notation of Section 6.3, that $K^*_{ijk}(x) = -K^{rst}K_{ri}K_{sj}K_{tk}$ and $K^*_{ijkl}(x) = -\{K^{rstu} - K^{rsv}K^{tuw}K_{vw}[3]\}K_{ri}K_{sj}K_{tk}K_{ul}$.
Using (6.12) or otherwise, show that, for each x in $\mathcal{X}$, where all functions on the right are evaluated at $\xi_r = K^*_r(x)$, the saddlepoint image of x.
Show that the matrix inverse of $K^{rs}(\xi)$ is $K^*_{rs}(x)$, where $x^r = K^r(\xi)$ corresponds to the saddlepoint.
Using expansion (6.12), find the mean of 2nK*(X̄n) up to and including terms that are of order O(n−1).
Show that under transformation of coordinates on Θ, the coefficient matrix B transforms to $\bar B = ABA^{*-1}$, and give a description of the matrix $A^*$.
Using the results of the previous three exercises, show that the arrays Vr, Vrs,…, defined at (7.6), behave as tensors under change of coordinates on Θ.
Using expression (7.13) for b(θ) together with the expressions given for the cumulants in Section 7.5.1, derive (7.17) as the Bartlett correction applicable to the exponential regression model (7.16).
Using the notation of the previous exercise, show that for any constants a, b, c, d satisfying $ad - bc \neq 0$, $Y_i = (a + bX_i)/(c + dX_i)$, $i = 1, \ldots, n$, are independent and identically distributed Cauchy random variables. Deduce that the derived statistic $A^*$ with components $A^*_i = (Y_i - \bar Y)/s_Y$ has a
Let $X_1, \ldots, X_n$ be independent and identically distributed Cauchy random variables with unknown parameters (θ, τ). Let $\bar X$ and $s_X^2$ be the sample mean and sample variance respectively. By writing $X_i = \theta + \tau\epsilon_i$, show that the joint distribution of the configuration statistic A with components $A_i$
Show that if $(X_1, X_2)$ has the bivariate normal distribution with zero mean, variances $\sigma_1^2, \sigma_2^2$ and covariance $\rho\sigma_1\sigma_2$, then the ratio $U = X_1/X_2$ has the Cauchy distribution with median $\theta = \rho\sigma_1/\sigma_2$ and dispersion parameter $\tau^2 = \sigma_1^2(1-\rho^2)/\sigma_2^2$. Explicitly, $f_U(u;\theta,\tau) = \tau$
Repeat the calculations of the previous exercise for the Poisson log-linear model of Section 7.5.2.
For the exponential regression model of Section 7.5.1, show that the $O(n^{-1})$ bias of $\hat\beta$ is $\mathrm{bias}(\hat\beta) = -(X^TX)^{-1}X^TV/2$. Show that, for a simple random sample of size 1, the bias is exactly $-\gamma$, where $\gamma = 0.57721\ldots$ is Euler's constant. Find the exact bias for a simple random sample of size n
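The exact $-\gamma$ bias for a single observation can be checked numerically, assuming a log-link parameterization $\mu = e^\beta$ (my assumption; Section 7.5.1's exact link is not reproduced here), so that $\hat\beta = \log Y$ and the bias is $E[\log Y] - \log\mu = E[\log E_1]$ with $E_1 \sim \mathrm{Exp}(1)$:

```python
# Sketch: with mu = exp(beta) assumed, one exponential observation gives
# beta_hat = log(Y), so the exact bias is E[log E_1] where E_1 ~ Exp(1).
# The integral int log(t) exp(-t) dt is computed by the trapezoid rule after
# the substitution t = exp(u), which makes the integrand smooth and rapidly decaying.
import numpy as np

u = np.linspace(-40.0, 5.0, 450_001)
f = u * np.exp(u - np.exp(u))        # log(t)*exp(-t)*dt under t = exp(u)
h = u[1] - u[0]
bias = h * (f.sum() - 0.5 * (f[0] + f[-1]))   # composite trapezoid rule

print(round(bias, 4))                 # -0.5772, i.e. minus Euler's constant
```

The integral $\int_0^\infty \log t\, e^{-t}\,dt = -\gamma$ is a classical identity, so this is a consistency check on the parameterization rather than on the identity itself.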
Derive the expression analogous to (7.20) for the log link function replacing the reciprocal function. Simplify in the case p = 1.
Show that (7.20) vanishes if X is the incidence matrix for an unbalanced one-way layout.
By considering the sum of two independent inverse Gaussian random variables, justify the interpretation of v in (7.19) as an 'effective sample size'.
is not unique. Demonstrate explicitly that two such ancillaries are not jointly ancillary. [This construction is specific to the two-parameter Cauchy problem.]
Suppose, in the notation previously established, that n = 3. Write the ancillary in the form $\{\mathrm{sign}(X_3 - X_2), \mathrm{sign}(X_2 - X_1)\}$, together with an additional component $A_X = (X_{(3)} - X_{(2)})/(X_{(2)} - X_{(1)})$, where $X_{(j)}$ are the ordered values of X. Show that $A_X$ is a function of the sufficient
Show that the maximum of the log likelihood function (8.18) is given by (8.19).
Show that the Legendre transformation, $K^*(u;\theta)$, evaluated at $u^r = K^r(\theta) + \nu^{r,s,t,u}\theta_s\theta_t\theta_u[3]/3!$, is zero to the same order of approximation.
Show that the Legendre transformation of
Show that the tensorial decomposition of the likelihood ratio statistic in (7.13) is not unique but that all such decompositions are orthogonal statistics in the sense used in Section 8.5 above.
Suppose in the previous exercise that σ2 = 1. Show that this information has no effect on the sufficient statistic but gives rise to ancillaries, namely X1, X2, X3, no two of which are jointly ancillary.
Suppose that $(X_1, X_2, X_3)$ have the trivariate normal distribution with zero mean and intra-class covariance matrix with variances $\sigma^2$ and correlations ρ. Show that $-\tfrac{1}{2} \le \rho \le 1$. Prove that the moments of $X_1 + \omega X_2 + \omega^2 X_3$ and $X_1 + \omega X_3 + \omega^2 X_2$ are independent of both parameters, but
In the notation of the previous exercise, suppose that it is required to test the hypothesis H0: ρ = 0, and that the observed values are x1 = 2, x2 = 1. Compute the conditional tail areas pr(T ≥ t|A1 = a1) and pr(T ≥ t|A2 = a2). Comment on the appropriateness of these tail areas as measures of
Suppose that $(X_1, X_2)$ are bivariate normal variables with zero mean, unit variance and unknown correlation ρ. Show that $A_1 = X_1$ and $A_2 = X_2$ are each ancillary, though not jointly ancillary, and that neither is a component of the sufficient statistic. Let $T = X_1X_2$. Show that
Normal hypersphere model: Repeat the calculations of the previous exercise, replacing the spherical surface in 3-space by a p-dimensional spherical surface in $R^{p+1}$. Show that the Bartlett adjustment reduces to $b(\theta) = -\frac{p-2}{4\rho_0^2}$, which is negative for $p \ge 3$ (McCullagh & Cox, 1986).
Normal spherical model: Suppose Y is a trivariate normal random vector with mean $(\rho\cos\theta\cos\phi, \rho\cos\theta\sin\phi, \rho\sin\theta)$ and covariance matrix $n^{-1}I_3$. Let $\rho = \rho_0$ be given. (i) Find the maximum likelihood estimate of (θ, ϕ). (ii) Derive the likelihood ratio statistic for testing the
Using the results derived in the previous exercise for n = 4 and ρ0 = 1, construct 95% confidence intervals for θ based on (a) the score statistic and (b)the likelihood ratio statistic. For numerical purposes, consider the two data values (y1, y2) = (0.5, 0.0) and (1.5, 0.0). Is the value θ = π
Find expressions for the first term in (7.17) for the four designs mentioned in the previous exercise.
Show that the second term on the right in (7.17) is zero if X is the incidence matrix for (i) an unbalanced one-way layout; (ii) a randomized blocks design (two-way design) with equal numbers of replications per cell; (iii) a Latin square design. Show that the second term is not zero if X is the model
Repeat the calculations of the previous exercise, this time for the exponential distribution in place of the Poisson. Compare numerically the transformation $\pm W^{1/2}$ with the Wilson-Hilferty cube root transformation $3n^{1/2}\{(\bar Y/\mu_0)^{1/3} + \frac{1}{9n} - 1\}$, which is also normally distributed to a high order of
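The numerical comparison asked for can be sketched as follows. For exponential data, $2n\bar Y/\mu_0 \sim \chi^2_{2n}$ exactly, and for even degrees of freedom the chi-squared survival probability has a closed Poisson-sum form, so both normal approximations can be checked against an exact tail area (the LR statistic $W = 2n\{\bar Y/\mu_0 - 1 - \log(\bar Y/\mu_0)\}$ used below is the standard exponential form, stated here as an assumption):

```python
# Sketch comparing the two normalizing transformations for an exponential sample.
# 2*n*Ybar/mu0 ~ chi-squared on 2n df; for even df the survival probability is
# exp(-x/2) * sum_{j<n} (x/2)^j / j!  (a Poisson tail sum), which serves as the
# exact reference value.
import math

def exact_tail(ybar, n, mu0):
    x = 2 * n * ybar / mu0
    return math.exp(-x / 2) * sum((x / 2) ** j / math.factorial(j) for j in range(n))

def norm_sf(z):
    return 0.5 * math.erfc(z / math.sqrt(2))   # standard normal upper tail

def signed_root_tail(ybar, n, mu0):
    # +/- W^{1/2} with W = 2n{Ybar/mu0 - 1 - log(Ybar/mu0)}, sign of Ybar - mu0
    w = 2 * n * (ybar / mu0 - 1 - math.log(ybar / mu0))
    return norm_sf(math.copysign(math.sqrt(w), ybar - mu0))

def wilson_hilferty_tail(ybar, n, mu0):
    # 3*sqrt(n)*{(Ybar/mu0)^(1/3) + 1/(9n) - 1} treated as standard normal
    z = 3 * math.sqrt(n) * ((ybar / mu0) ** (1 / 3) + 1 / (9 * n) - 1)
    return norm_sf(z)

n, mu0 = 10, 1.0
for ybar in [1.2, 1.5, 2.0]:
    exact = exact_tail(ybar, n, mu0)
    # Wilson-Hilferty tracks the exact tail much more closely than the raw
    # signed root, which is the point of the comparison.
    assert abs(wilson_hilferty_tail(ybar, n, mu0) - exact) < abs(signed_root_tail(ybar, n, mu0) - exact)
    assert abs(wilson_hilferty_tail(ybar, n, mu0) - exact) / exact < 0.02
```

At $n = 10$, $\bar y = 1.5$ the Wilson-Hilferty tail is within a few parts in a thousand of the exact value, while the unadjusted signed root is off by roughly twenty percent of the tail area.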
Using the notation of the previous two exercises, let $\pm W^{1/2}$ be the signed square root of W, where the sign is that of $\bar Y - \mu_0$. Using the results given in Section 7.4.5, or otherwise, show that Hence show that under $H_0: \mu = \mu_0$, $S = \frac{\pm W^{1/2} + (n\mu_0)^{-1/2}/6}{1 + (16n\mu_0)^{-1}}$ has the same moments as
Derive the result stated in the previous exercise directly from (7.18).Check the result numerically for μ0 = 1 and for n = 1, 5, 10. Also, check the variance of W numerically. You will need either a computer or a programmable calculator.
Suppose that $Y_1, \ldots, Y_n$ are independent Poisson random variables with mean μ. Show that the likelihood ratio statistic for testing $H_0: \mu = \mu_0$ against an unspecified alternative is $W = 2n\{\bar Y\log(\bar Y/\mu_0) - (\bar Y - \mu_0)\}$. By expanding in a Taylor series about $\bar Y = \mu_0$ as far as the quartic term,
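The identity and the requested quartic expansion are easy to verify numerically. Writing $f(y) = y\log(y/\mu_0) - (y - \mu_0)$, Taylor expansion about $\mu_0$ gives $f \approx \delta^2/(2\mu_0) - \delta^3/(6\mu_0^2) + \delta^4/(12\mu_0^3)$ with $\delta = \bar Y - \mu_0$, so $W \approx n\delta^2/\mu_0 - n\delta^3/(3\mu_0^2) + n\delta^4/(6\mu_0^3)$. A sketch on a made-up sample:

```python
# Sketch: check that W = 2n{Ybar log(Ybar/mu0) - (Ybar - mu0)} is twice the
# Poisson log likelihood ratio (the y!-terms cancel in the ratio, so they are
# omitted), and that the quartic Taylor expansion about Ybar = mu0 is
# n*d^2/mu0 - n*d^3/(3*mu0^2) + n*d^4/(6*mu0^3), with d = Ybar - mu0.
import math

y = [3, 1, 4, 2, 2, 5, 1, 3, 2, 2]            # a made-up Poisson-looking sample
n, mu0 = len(y), 2.0
ybar = sum(y) / n

def loglik(mu):                                # log likelihood without the y! term
    return sum(yi * math.log(mu) - mu for yi in y)

W = 2 * n * (ybar * math.log(ybar / mu0) - (ybar - mu0))
assert abs(W - 2 * (loglik(ybar) - loglik(mu0))) < 1e-10   # exact identity

d = ybar - mu0
taylor = n * d**2 / mu0 - n * d**3 / (3 * mu0**2) + n * d**4 / (6 * mu0**3)
assert abs(W - taylor) < 0.01                  # close but not equal: O(d^5) remainder
```

The successive derivatives of $f$ at $\mu_0$ are $0, 0, 1/\mu_0, -1/\mu_0^2, 2/\mu_0^3$, which is where the coefficients $1/2, -1/6, 1/12$ come from.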
Let $s_i^2$, $i = 1, \ldots, k$ be k independent mean squares calculated from independent normal random variables. Suppose $E(s_i^2) = \sigma_i^2$, $\mathrm{var}(s_i^2) = 2\sigma_i^4/m_i$, where $m_i$ is the number of degrees of freedom for $s_i^2$. Derive the likelihood ratio statistic, W, for testing the hypothesis $H_0: \sigma_i^2 = \sigma^2$
Suppose that $Y_1, \ldots, Y_n$ are independent and identically distributed on the interval $(\theta, \theta+1)$. Show that the likelihood function is constant in the interval $(y_{(n)} - 1, y_{(1)})$ and is zero otherwise. Hence, interpret $r = y_{(n)} - y_{(1)}$ as an indicator of the shape of the likelihood
Using expression (7.15) for Wr, show that the joint third and fourth cumulants are O(n−3/2) and O(n−2) respectively. Derive the mean vector and covariance matrix. Hence justify the use of the Bartlett adjustment as a multiplicative factor.
Normal circle model: Suppose Y is a bivariate normal random vector with mean $(\rho\cos\theta, \rho\sin\theta)$ and covariance matrix $n^{-1}I_2$. Let $\rho = \rho_0$ be given. (i) Find the maximum likelihood estimate of θ. (ii) Derive the likelihood ratio statistic for testing the hypothesis $H_0: \theta = 0$. (iii) Interpret
Comment on the similarity between the correction terms, (7.17) and (7.18).
Simplify expression (7.18) in the case of a two-way contingency table and a model that includes no interaction term. Show that the second term is zero.
Let $X_1, \ldots, X_n$ be independent and identically distributed p-dimensional random vectors having cumulants $\kappa^r, \kappa^{r,s}, \kappa^{r,s,t}, \ldots$. Define the random vector $Z_{(n)}$ by $Z^r_{(n)} = \sum_{j=1}^n X^r_j \exp(2\pi i j/n)$, where $i^2 = -1$. Using the result in the previous exercise or otherwise, show that the nth-order moments of
For the multinomial distribution with $p = \mathrm{rank}(\kappa^{i,j}) = k - 1$, show that and hence that $\rho_4 = \rho^2_{13} - 2/m = \rho^2_{23} - k/m$, showing that the inequalities in Exercises 2.12 and 2.14 are sharp for m = 1. Show also that the minimum value of $\rho^2_{13}$ for the multinomial distribution is $(k-2)/m$.
Hölder's inequality for a pair of random variables X and Y is $E|XY| \le \{E|X|^p\}^{1/p}\{E|Y|^q\}^{1/q}$, where $p^{-1} + q^{-1} = 1$. Deduce from the above that $\{E|X_1X_2\cdots X_r|\}^r \le E|X_1|^r\cdots E|X_r|^r$ for random variables $X_1, \ldots, X_r$. Hence prove that if the diagonal elements of cumulant tensors are finite then
Using (2.6) and (2.7), express $\kappa^{i,jkl} = \mathrm{cov}(X^i, X^jX^kX^l)$ in terms of ordinary moments and hence, in terms of ordinary cumulants.
Let $h_r(x)$ be the standardized Hermite polynomial of degree r satisfying $\int h_r(x)h_s(x)\phi(x)\,dx = \delta_{rs}$, where $\phi(x)$ is the standard normal density. If $X_i = h_i(Z)$ where Z is a standard normal variable, show that $X_1, \ldots$ are uncorrelated but not independent. Show also that the second cumulant of $n^{1/2}\bar X$
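The uncorrelated-but-dependent claim can be demonstrated numerically for the first two polynomials, $X_1 = Z$ and $X_2 = (Z^2-1)/\sqrt{2}$ (these explicit standardized forms are my own working, not quoted from the exercise), with expectations over $Z \sim N(0,1)$ evaluated exactly by Gauss-Hermite quadrature:

```python
# Sketch: X1 = Z and X2 = (Z^2 - 1)/sqrt(2) are uncorrelated (Hermite
# orthogonality) yet dependent: X1^2 and X2 have nonzero covariance.
# Expectations over Z ~ N(0,1) are evaluated exactly for polynomials by
# Gauss-Hermite quadrature with the probabilists' weight exp(-z^2/2).
import numpy as np

z, w = np.polynomial.hermite_e.hermegauss(40)
w = w / np.sqrt(2 * np.pi)                    # normalize weights to the N(0,1) density

def E(f):
    return np.sum(w * f(z))

X1 = lambda t: t
X2 = lambda t: (t**2 - 1) / np.sqrt(2)

cov_X1_X2 = E(lambda t: X1(t) * X2(t)) - E(X1) * E(X2)
cov_X1sq_X2 = E(lambda t: X1(t)**2 * X2(t)) - E(lambda t: X1(t)**2) * E(X2)

assert abs(cov_X1_X2) < 1e-10                 # uncorrelated
print(round(cov_X1sq_X2, 6))                  # ~ sqrt(2): X1^2 and X2 are correlated
```

Analytically $\mathrm{cov}(X_1^2, X_2) = E[Z^2(Z^2-1)]/\sqrt{2} = (3-1)/\sqrt{2} = \sqrt{2}$, so a nonzero value here is exactly the dependence the exercise asserts.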
Show, for the multinomial distribution with index m = 1, that the moments are $\kappa^i = \pi^i$, $\kappa^{ij} = \pi^i\delta^{ij}$, $\kappa^{ijk} = \pi^i\delta^{ijk}$ and so on, where no summation is implied. Hence give an $\kappa^i = m\pi^i$, $\kappa^{i,j} = m\{\pi^i\delta^{ij} - \pi^i\pi^j\}$, $\kappa^{i,j,k} = m\{\pi^i\delta^{ijk} - \pi^i\pi^j\delta^{ik}[3] + 2\pi^i\pi^j\pi^k\}$, $\kappa^{i,j,k,l} =$