Testing Statistical Hypotheses, Volume I, 4th Edition, E.L. Lehmann and Joseph P. Romano - Problems
Consider a sequence $\{P_n, Q_n\}$ with likelihood ratio $L_n$ defined in (14.36). Assume $\mathcal{L}(L_n \mid P_n) \xrightarrow{d} W$, where $P\{W = 0\} = 0$; show that $P_n$ is contiguous to $Q_n$. Also, under (14.41), deduce that $P_n$ is contiguous to $Q_n$, and hence that $P_n$ and $Q_n$ are mutually contiguous, if and only if $\mu = -\sigma^2/2$.
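As a sanity check on the $\mu = -\sigma^2/2$ boundary, here is a minimal numpy sketch; the normal location model and all constants are my choices, not part of the problem. With $P_n = N(0,1)^n$ and $Q_n = N(h/\sqrt{n}, 1)^n$, the log likelihood ratio is exactly $N(-h^2/2, h^2)$ under $P_n$:

```python
import numpy as np

# Normal location model: P_n = N(0,1)^n, Q_n = N(h/sqrt(n), 1)^n.
# Under P_n, log L_n = (h/sqrt(n)) * sum_i X_i - h^2/2 is exactly N(-h^2/2, h^2),
# i.e. mu = -sigma^2/2: the mutually contiguous case of the problem.
rng = np.random.default_rng(0)
n, h, reps = 1_000, 2.0, 2_000
X = rng.standard_normal((reps, n))                  # reps data sets drawn under P_n
log_Ln = (h / np.sqrt(n)) * X.sum(axis=1) - h**2 / 2
print(log_Ln.mean(), log_Ln.var())                  # ~ -2.0 and ~ 4.0 for h = 2.0
```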
Suppose $Q_n$ is contiguous to $P_n$ and let $L_n$ be the likelihood ratio defined by (14.36). Show that $E_{P_n}(L_n) \to 1$. Is the converse true?
Fix two probabilities $P$ and $Q$ and let $P_n = P^n$ and $Q_n = Q^n$, the $n$-fold product measures. Show that $\{P_n\}$ and $\{Q_n\}$ are contiguous if and only if $P = Q$.
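One route for the forward direction is through the Hellinger affinity, which tensorizes over products; a proof sketch (a standard argument, not necessarily the one the text intends via (14.36)):

```latex
% Hellinger affinity rho(P,Q) = \int \sqrt{p\,q}\, d\mu tensorizes over products:
\rho(P^n, Q^n) = \rho(P, Q)^n, \qquad
\rho(P, Q) \le 1 \ \text{ with equality iff } P = Q .
```

So if $P \ne Q$ the affinity decays geometrically, producing events $A_n$ with $Q^n(A_n) \to 0$ while $P^n(A_n) \not\to 0$, which rules out contiguity; the direction for $P = Q$ is immediate.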
Fix two probabilities $P$ and $Q$ and let $P_n = P$ and $Q_n = Q$ for all $n$. Show that $\{P_n\}$ and $\{Q_n\}$ are contiguous if and only if $P$ and $Q$ are absolutely continuous with respect to each other.
Show the convergence (14.35).
Prove (14.31).
Assume $\{P_\theta,\ \theta \in \Omega\}$ is $L_1$-differentiable, so that (14.93) holds. For simplicity, assume $k = 1$ (but the problem generalizes). Let $\varphi(\cdot)$ be uniformly bounded and set $\beta(\theta) = E_\theta[\varphi(X)]$. Show that $\beta'(\theta_0)$ exists and $\beta'(\theta_0) = \int \varphi(x)\, \zeta(x, \theta_0)\, \mu(dx)$ (14.94). Hence, if $\{P_\theta\}$ is q.m.d. at $\theta_0$ with …
Suppose $\{P_\theta,\ \theta \in \Omega\}$ is a model with $\Omega$ an open subset of $\mathbb{R}^k$, having densities $p_\theta(x)$ with respect to $\mu$. Define the model to be $L_1$-differentiable at $\theta_0$ if there exists a vector of real-valued functions $\zeta(\cdot, \theta_0)$ such that $\int \big| p_{\theta_0+h}(x) - p_{\theta_0}(x) - \langle \zeta(x, \theta_0), h \rangle \big|\, d\mu(x) = o(|h|)$ (14.93) as $|h| \to 0$.
Suppose $X_1, \ldots, X_n$ are i.i.d. and uniformly distributed on $(0, \theta)$. Let $p_\theta(x) = \theta^{-1} I\{0 < x < \theta\}$ and $L_n(\theta) = \prod_i p_\theta(X_i)$. Fix $p$ and $\theta_0$. Determine the limiting behavior of $L_n(\theta_0 + h n^{-p})/L_n(\theta_0)$ under $\theta_0$. For what $p$ and $h$ is the limiting distribution nondegenerate?
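For intuition before the general analysis, a numpy sketch at $p = 1$, $h < 0$ (constants are my choices): the ratio equals $(\theta_0/\theta_n)^n$ on $\{\max_i X_i < \theta_n\}$ and $0$ otherwise, so at $p = 1$ it converges to a nondegenerate two-point limit.

```python
import numpy as np

# Ratio L_n(theta_0 + h n^{-p}) / L_n(theta_0) for U(0, theta), sampled under theta_0.
# For h < 0 it equals (theta_0/theta_n)^n on {max_i X_i < theta_n} and 0 otherwise;
# under theta_0, max_i X_i has c.d.f. (x/theta_0)^n and can be sampled directly.
rng = np.random.default_rng(1)
theta0, h, p, n, reps = 1.0, -1.0, 1.0, 10_000, 100_000
theta_n = theta0 + h * n ** (-p)
X_max = theta0 * rng.random(reps) ** (1.0 / n)   # law of the max of n U(0, theta_0)
ratio = np.where(X_max < theta_n, (theta0 / theta_n) ** n, 0.0)
print((ratio == 0).mean())   # -> 1 - e^{h/theta0} ~ 0.632 at p = 1
print(ratio.max())           # nonzero value (1 - 1/n)^{-n} -> e ~ 2.718
```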
To see what might happen when the parameter space is not open, let $f_0(x) = x\, I\{0 \le x \le 1\} + (2 - x)\, I\{1 < x \le 2\}$. Consider the family of densities indexed by $\theta \in [0, 1)$ defined by $p_\theta(x) = (1 - \theta^2)\, f_0(x) + \theta^2 f_0(x - 2)$. Show that the condition (14.5) holds when $\theta_0 = 0$, if it is …
Suppose $\{P_\theta\}$ is q.m.d. at $\theta_0$. Show $P_{\theta_0+h}\{x : p_{\theta_0}(x) = 0\} = o(|h|^2)$ as $|h| \to 0$. Hence, if $X_1, \ldots, X_n$ are i.i.d. with likelihood ratio $L_{n,h}$ defined by (14.12), show that $P^n_{\theta_0 + h n^{-1/2}}\{L_{n,h} = \infty\} \to 0$.
Suppose $\{P_\theta\}$ is q.m.d. at $\theta_0$ with derivative $\eta(\cdot, \theta_0)$. Show that, on $\{x : p_{\theta_0}(x) = 0\}$, we must have $\eta(x, \theta_0) = 0$, except possibly on a $\mu$-null set. Hint: On $\{p_{\theta_0}(x) = 0\}$, write $0 \le n^{1/2} p^{1/2}_{\theta_0 + h n^{-1/2}}(x) = \langle h, \eta(x, \theta_0)\rangle + r_{n,h}(x)$, where $\int r^2_{n,h}(x)\, \mu(dx) \to 0$. This implies, with …
Prove Theorem 14.2.2 using an argument similar to the proof of Theorem 14.2.1.
In Example 14.2.5, show that $\int \{[f'(x)]^2 / f(x)\}\, dx$ is finite if and only if $\beta > 1/2$.
In Examples 14.2.3 and 14.2.4, find the quadratic mean derivative and I(θ).
Show that the definition of I(θ) in Definition 14.2.2 does not depend on the choice of dominating measure μ.
Fix a probability $P$ on $S$ and functions $u_i(x)$ such that $\int u_i(x)\, dP(x) = 0$ and $\int u_i^2(x)\, dP(x) < \infty$, for $i = 1, 2$. Adapt … to construct a family of distributions $P_\theta$ with $\theta \in \mathbb{R}^2$, defined for all small $|\theta|$, such that $P_{0,0} = P$ and the family is q.m.d. at $\theta = (0, 0)$ with score vector at $\theta = (0, 0)$ given by $(u_1(x), u_2(x))$. If $S$ is the real line, construct the $P_\theta$ that works even if $P_\theta$ is required to be smooth if $P$ and …
(iii) Suppose $\int u^2(x)\, dP(x) < \infty$. Define $p_\theta(x) = C(\theta)\, 2\,[1 + \exp(-2\theta u(x))]^{-1}$. Show this family is q.m.d. at $\theta = 0$, and calculate the score function and $I(0)$. [The constructions in this problem are important for nonparametric …
Fix a probability $P$. Let $u(x)$ satisfy $\int u(x)\, dP(x) = 0$. (i) Assume $\sup_x |u(x)| < \infty$, so that $p_\theta(x) = 1 + \theta u(x)$ defines a family of densities (with respect to $P$) for all small $|\theta|$. Show this family is q.m.d. at $\theta = 0$. Calculate the quadratic mean derivative, score function, and $I(0)$. (ii) …
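For part (i), the computation should come out as follows; a sketch, assuming the chapter's convention that the score is $\tilde\eta = 2\eta/p_\theta^{1/2}$ and $I(\theta) = 4\int \eta^2\, d\mu$:

```latex
% p_theta = 1 + theta*u(x) is a density with respect to P, so p_0 = 1:
\eta(x, 0) = \frac{\partial}{\partial \theta}\, \bigl(1 + \theta u(x)\bigr)^{1/2}
  \Big|_{\theta = 0} = \tfrac{1}{2}\, u(x), \qquad
I(0) = 4 \int \eta^2(x, 0)\, dP(x) = \int u^2(x)\, dP(x) ,
```

with score $\tilde\eta(x, 0) = 2\eta(x, 0)/p_0^{1/2}(x) = u(x)$.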
Suppose $X$ and $Y$ are independent, with $X$ distributed as $P_\theta$ and $Y$ as $\bar{P}_\theta$, as $\theta$ varies in a common index set $\Omega$. Assume the families $\{P_\theta\}$ and $\{\bar{P}_\theta\}$ are q.m.d. with Fisher information matrices $I_X(\theta)$ and $I_Y(\theta)$, respectively. Show that the model based on the joint data $(X, Y)$ is q.m.d. and its …
Suppose $g_n$ is a sequence of functions in $L_2(\mu)$ and, for some function $g$, $\int (g_n - g)^2\, d\mu \to 0$. If $\int h^2\, d\mu < \infty$, show that $\int h g_n\, d\mu \to \int h g\, d\mu$.
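The natural route is a single Cauchy–Schwarz step in $L_2(\mu)$:

```latex
\left| \int h\, g_n\, d\mu - \int h\, g\, d\mu \right|
  = \left| \int h\, (g_n - g)\, d\mu \right|
  \le \Bigl( \int h^2\, d\mu \Bigr)^{1/2}
      \Bigl( \int (g_n - g)^2\, d\mu \Bigr)^{1/2}
  \;\longrightarrow\; 0 .
```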
Suppose $g_n$ is a sequence of functions in $L_2(\mu)$; that is, $\int g_n^2\, d\mu < \infty$. Assume, for some function $g$, $\int (g_n - g)^2\, d\mu \to 0$. Prove that $\int g^2\, d\mu < \infty$.
Generalize Example 14.2.2 to the case of a multiparameter exponential family. Compare with the result of Problem 14.1.
Generalize Example 14.2.1 to the case where $X$ is multivariate normal with mean vector $\theta$ and nonsingular covariance matrix $\Sigma$.
In the setting of Section 13.5.3, show that the Bonferroni test that rejects $H_0$ when $M_n \ge z_{1-\alpha/n}$ is equivalent to the test that rejects if $\min_i \hat{p}_i \le \alpha/n$, where $\hat{p}_i = 1 - \Phi(X_i)$.
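A quick equivalence check (scipy assumed; the two rejection rules agree because $x \mapsto 1 - \Phi(x)$ is strictly decreasing, so agreement is exact up to floating-point rounding):

```python
import numpy as np
from scipy.stats import norm

# max_i X_i >= z_{1-alpha/n}  iff  min_i p_hat_i <= alpha/n with p_hat_i = 1 - Phi(X_i),
# since x -> 1 - Phi(x) is strictly decreasing and min_i sf(X_i) = sf(max_i X_i).
rng = np.random.default_rng(2)
n, alpha, reps = 50, 0.05, 10_000
X = rng.standard_normal((reps, n))
max_rule = X.max(axis=1) >= norm.isf(alpha / n)    # reject via M_n
pval_rule = norm.sf(X).min(axis=1) <= alpha / n    # reject via the smallest p-value
print(np.array_equal(max_rule, pval_rule))         # True: the two tests coincide
print(max_rule.mean())                             # ~ 1 - (1 - alpha/n)^n ~ 0.0488
```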
Prove (13.61).
Prove Lemma 13.5.1 by using Problem 13.32. That is, if $1 < \beta_n = o(n)$ and $Y_{n,i} = \exp(\delta_n X_i - \delta_n^2/2)$, show that $E[\,|Y_{n,i} - 1|\, I\{|Y_{n,i} - 1| > \beta_n\}\,] \to 0$ (13.66). Since $Y_{n,i} > 0$ and $\beta_n > 1$, this is equivalent to showing $E[(Y_{n,i} - 1)\, I\{Y_{n,i} > \beta_n + 1\}] \to 0$ (13.67). The event $\{Y_{n,i} > \lambda + 1\}$ …
Prove Lemma 13.5.1 as follows. Let $\eta = 1 - \sqrt{r}$. Let $\tilde{L}_n = \frac{1}{n} \sum_{i=1}^n \exp(\delta_n X_i - \delta_n^2/2)\, I\{X_i \le \sqrt{2 \log n}\}$. First, show $L_n - \tilde{L}_n \xrightarrow{P} 0$ (using Problem 13.41). Then, show $E(\tilde{L}_n) = \Phi(\eta \sqrt{2 \log n}) \to 1$. The proof then follows by showing $\mathrm{Var}(\tilde{L}_n) \to 0$. To this end, show $\mathrm{Var}(\tilde{L}_n)$ …
Under the setting of Lemma 13.5.1, calculate $\mathrm{Var}(L_n)$ and determine for which values of $r$ it tends to 0.
Let $X_1, \ldots, X_n$ be i.i.d. $N(0, 1)$. Let $M_n = \max(X_1, \ldots, X_n)$. (i) Show that $P\{M_n \ge \sqrt{2 \log n}\} \to 0$. (ii) Compute the limit of $P\{M_n \ge z_{1-\alpha/n}\}$.
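Both parts can be checked numerically; since $\Phi(z_{1-\alpha/n}) = 1 - \alpha/n$, part (ii) is exactly $1 - (1 - \alpha/n)^n \to 1 - e^{-\alpha}$. A small scipy sketch (constants mine):

```python
import numpy as np
from scipy.stats import norm

# (ii) P{M_n >= z_{1-alpha/n}} = 1 - Phi(z_{1-alpha/n})^n = 1 - (1 - alpha/n)^n -> 1 - e^{-alpha};
# (i)  P{M_n >= sqrt(2 log n)} = 1 - Phi(sqrt(2 log n))^n -> 0 (slowly).
alpha = 0.05
for n in (10, 1_000, 100_000, 10_000_000):
    part_ii = 1 - (1 - alpha / n) ** n
    part_i = 1 - norm.cdf(np.sqrt(2 * np.log(n))) ** n
    print(n, round(part_ii, 5), round(part_i, 5))
print("limit of (ii):", 1 - np.exp(-alpha))   # ~ 0.04877
```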
(i) If $\varphi(\cdot)$ denotes the standard normal density and $Z \sim N(0, 1)$, then for any $t > 0$, $\big(\tfrac{1}{t} - \tfrac{1}{t^3}\big)\varphi(t) < P\{Z \ge t\} \le \tfrac{\varphi(t)}{t}$. Prove the right-hand inequality. (ii) Prove the left inequality in (13.64). Hint: Feller (1968), p. 179, notes that the negative of the derivative of the left side $\big(\tfrac{1}{t}$ …
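A quick numeric sanity check of both bounds (scipy assumed; this does not replace the proof):

```python
import numpy as np
from scipy.stats import norm

# Check (1/t - 1/t^3) phi(t) < P{Z >= t} <= phi(t)/t for a few t > 0.
for t in (0.5, 1.0, 2.0, 4.0, 8.0):
    lower = (1 / t - 1 / t**3) * norm.pdf(t)
    upper = norm.pdf(t) / t
    print(t, lower < norm.sf(t) <= upper)   # True for every t > 0
```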
For the Chi-squared test discussed in Section 13.5.1, assume that $\delta_n^2/\sqrt{2n} \to \infty$. Show that the power of the Chi-squared test against such an alternative sequence tends to one.
Let $Y_{n,1}, \ldots, Y_{n,n}$ be i.i.d. Bernoulli variables with success probability $p_n$, where $n p_n = \lambda$ and $\lambda^{1/2} = \delta$. Let $U_{n,1}, \ldots, U_{n,n}$ be i.i.d. uniform variables on $(-\tau_n, \tau_n)$, where $\tau_n^2 = 3 p_n^2$. Then, let $X_{n,i} = Y_{n,i} + U_{n,i}$, so that $F_n$ is the distribution of $X_{n,i}$. (Note that $n^{1/2} \mu(F_n)/\sigma(F_n) =$ …
Prove the second equality in (13.44). In the proof of Lemma 13.4.2, show that $\kappa_n(n) \to 0$.
Consider the problem of testing $\mu(F) = 0$ versus $\mu(F) \ne 0$, for $F \in \mathcal{F}_0$, the class of distributions supported on $[0, 1]$. Let $\varphi_n$ be Anderson's test. (i) If $|n^{1/2} \mu(F_n)| \ge \delta > 2 s_{n,1-\alpha}$, then show that $E_{F_n}(\varphi_n) \ge 1 - \frac{1}{2\,(2 s_{n,1-\alpha} - \delta)^2}$, where $s_{n,1-\alpha}$ is the $1 - \alpha$ quantile of …
Prove Lemma 13.4.5.
In the proof of Theorem 13.4.2, prove that $S_n/\sigma(F_n) \to 1$ in probability.
Suppose $\mathcal{F}$ satisfies the conditions of Theorem 13.4.4. Assume there exists $\varphi_n$ such that $\sup_{F \in \mathcal{F}:\, \mu(F) = 0} E_F(\varphi_n) \to \alpha$. Show that $\limsup_n E_F(\varphi_n) \le \alpha$ for every $F \in \mathcal{F}$.
In Lemma 13.4.2, show that Condition (13.41) can be replaced by the assumption that, for some $\beta_n = o(n^{1/2})$, $\limsup_{n \to \infty} E_{G_n}[\,|Y_{n,i} - \mu(G_n)|\, I\{|Y_{n,i} - \mu(G_n)| \ge \beta_n\}\,] = 0$. Moreover, this condition only needs to hold with $\beta_n = o(n)$ if it is also known that $\sup_n E_{G_n} |Y_{n,i} - \mu(G_n)| < \infty$.
Let $\varphi_n$ be the classical t-test for testing that the mean is zero against the alternative that the mean is positive, based on $n$ i.i.d. observations from $F$. Consider the power of this test against the distribution $N(\mu, 1)$. Show that the power tends to one as $\mu \to \infty$.
Assuming $F$ is absolutely continuous with four finite moments, verify (13.39).
In Theorem 13.3.2, suppose $S_n^2$ is defined with its denominator $n - 1$ replaced by $n$. Derive the explicit form for $q_2(t, F)$ in the corresponding Edgeworth expansion.
When sampling from a normal distribution, one can derive an Edgeworth expansion for the t-statistic as follows. Suppose $X_1, \ldots, X_n$ are i.i.d. $N(\mu, \sigma^2)$ and let $t_n = n^{1/2}(\bar{X}_n - \mu)/S_n$, where $S_n^2$ is the usual unbiased estimate of $\sigma^2$. Let $\Phi$ be the standard normal c.d.f. and let $\Phi' = \varphi$. Show $P\{t_n$ …
Let $X_1, \ldots, X_n$ be a sample from $N(\xi, \sigma^2)$, and consider the UMP invariant level-$\alpha$ test of $H : \xi/\sigma \le \theta_0$ (Section 6.4). Let $\alpha_n(F)$ be the actual significance level of this test when $X_1, \ldots, X_n$ is a sample from a distribution $F$ with $E(X_i) = \xi$, $\mathrm{Var}(X_i) = \sigma^2 < \infty$. Then the relation $\alpha_n(F) \to$ …
Show that the test derived in … is not robust against nonnormality.
In the preceding problem, investigate the rejection probability when the $F_i$ have different variances. Assume $\min n_i \to \infty$ and $n_i/n \to \rho_i$.
For $i = 1, \ldots, s$ and $j = 1, \ldots, n_i$, let $X_{i,j}$ be independent, with $X_{i,j}$ having distribution $F_i$, where $F_i$ is an arbitrary distribution with mean $\mu_i$ and finite common variance $\sigma^2$. Consider testing $\mu_1 = \cdots = \mu_s$ based on the test statistic (13.29), which is UMPI under normality. Show the test …
The size of each of the following tests is robust against nonnormality: 1. the test (7.24) as $b \to \infty$; 2. the test (7.26) as $mb \to \infty$; 3. the test (7.28) as $m \to \infty$.
If the $\Pi_{i,i}$ are defined as in (13.19), show that $\sum_{i=1}^n \Pi_{i,i}^2 = s$. Hint: Since the $\Pi_{i,i}$ are independent of the choice of $A$, take $A$ to be orthogonal.
If $\xi_i = \alpha + \beta t_i + \gamma u_i$, express Condition (13.20) in terms of the $t$'s and $u$'s.
Let $c_n = u_0 + u_1 n + \cdots + u_k n^k$, with $u_i \ge 0$ for all $i$. Then $c_n$ satisfies (13.10). What if $c_n = 2^n$? Hint: Apply … with $c_n = n^k$.
Let $\{c_n\}$ and $\{c_n'\}$ be two increasing sequences of constants such that $c_n/c_n' \to 1$ as $n \to \infty$. Then $\{c_n\}$ satisfies (13.10) if and only if $\{c_n'\}$ does.
Show that (13.10) holds whenever $c_n$ tends to a finite nonzero limit, but that the condition need not hold if $c_n \to 0$.
Suppose (13.20) holds for some particular sequence of subspaces $\Omega^{(n)}$ of fixed dimension $s$. Then it holds for any sequence $\Omega'^{(n)} \subseteq \Omega^{(n)}$ of dimension $s' < s$. Hint: If $\Omega^{(n)}$ is spanned by the $s$ columns of $A$, let $\Omega'^{(n)}$ be spanned by the first $s'$ columns of $A$.
In the two-way layout of the preceding problem, give examples of submodels $\Omega^{(1)}$ and $\Omega^{(2)}$ of dimensions $s_1$ and $s_2$, both less than $ab$, such that in one case Condition (13.20) continues to require $n_{ij} \to \infty$ for all $i$ and $j$ but becomes a weaker requirement in the other case.
Let $X_{ijk}$ ($k = 1, \ldots, n_{ij}$; $i = 1, \ldots, a$; $j = 1, \ldots, b$) be independently normally distributed with mean $E(X_{ijk}) = \xi_{ij}$ and variance $\sigma^2$. Then the test of any linear hypothesis concerning the $\xi_{ij}$ has a robust level provided $n_{ij} \to \infty$ for all $i$ and $j$.
In Example 13.2.3, verify the Huber Condition holds.
Verify (13.15).
Verify the claims made in Example 13.2.1.
Prove Lemma 13.2.3. Hint: For part (ii), use Problem 11.72.
Prove (i) of Lemma 13.2.2.
Determine the maximum asymptotic level of the one-sided t-test when $\alpha = 0.05$ and $m = 2, 4, 6$: (i) in Model A; (ii) in Model B.
Show that the conditions of Lemma 13.2.1 are satisfied and γ has the stated value: (i) in Model B; (ii) in Model C.
In Model A, suppose that the number of observations in group $i$ is $n_i$. If $n_i \le M$ and $s \to \infty$, show that the assumptions of Lemma 13.2.1 are satisfied and determine $\gamma$.
Verify the formula for $\mathrm{Var}(\bar{X})$ in Model A.
(i) Given $\rho$, find the smallest and largest values of (13.2) as $\sigma^2/\tau^2$ varies from 0 to $\infty$. (ii) For nominal level $\alpha = 0.05$ and $\rho = 0.1, 0.2, 0.3, 0.4$, determine the smallest and largest asymptotic levels of the t-test as $\sigma^2/\tau^2$ varies from 0 to $\infty$.
Under the assumptions of Lemma 13.2.1, compute $\mathrm{Cov}(X_i^2, X_j^2)$ in terms of $\rho_{i,j}$ and $\sigma^2$. Show that $\mathrm{Var}(n^{-1} \sum_{i=1}^n X_i^2) \to 0$ and hence that $n^{-1} \sum_{i=1}^n X_i^2 \xrightarrow{P}$ …
Let $(Y_i, Z_i)$ be i.i.d. bivariate random vectors in the plane, with both $Y_i$ and $Z_i$ assumed to have finite nonzero variances. Let $\mu_Y = E(Y_1)$ and $\mu_Z = E(Z_1)$, let $\rho$ denote the correlation between $Y_1$ and $Z_1$, and let $\hat{\rho}_n$ denote the sample correlation, as defined in (11.29). (i) Under the assumption …
Generalize the previous problem to the two-sample t-test.
(i) Let $X_1, \ldots, X_n$ be a sample from $N(\xi, \sigma^2)$. For testing $\xi = 0$ against $\xi > 0$, show that the power of the one-sided one-sample t-test against a sequence of alternatives $N(\xi_n, \sigma^2)$ for which $n^{1/2} \xi_n/\sigma \to \delta$ tends to $1 - \Phi(z_{1-\alpha} - \delta)$. (ii) The result of (i) remains valid if $X_1, \ldots, X_n$ are a …
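A Monte Carlo sketch of part (i) (all constants are my choices): with $\delta = 2$ and $\alpha = 0.05$ the limit is $1 - \Phi(z_{0.95} - 2) \approx 0.64$.

```python
import numpy as np
from scipy.stats import norm, t as t_dist

# One-sided t-test against N(xi_n, 1) with n^{1/2} xi_n = delta: power ~ 1 - Phi(z_{1-alpha} - delta).
rng = np.random.default_rng(3)
n, alpha, delta, reps = 400, 0.05, 2.0, 10_000
X = rng.standard_normal((reps, n)) + delta / np.sqrt(n)      # data under the local alternative
tn = np.sqrt(n) * X.mean(axis=1) / X.std(axis=1, ddof=1)     # t-statistics
power = (tn > t_dist.ppf(1 - alpha, df=n - 1)).mean()
print(power, 1 - norm.cdf(norm.ppf(1 - alpha) - delta))      # both ~ 0.64
```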
Consider points on a lattice of the form $(i, j)$, where $i$ and $j$ are integers from 0 to $n$. Each of these $(n + 1)^2$ points can be considered a vertex of a graph. Consider connecting edges adjoining $(i, j)$ and $(i + 1, j)$, or $(i, j)$ and $(i, j + 1)$, so that only edges between nearest vertices are considered …
Verify (12.80), (12.81), and (12.82). Based on the bound (12.82), consider an asymptotic regime where $p \sim n^{-\beta}$ for some $\beta \ge 0$. For what $\beta$ does the bound tend to zero, so that a central limit theorem for $T$ holds?
An alternative characterization of the Wasserstein metric is the following (which you do not have to show): $d_W(X, Y)$ is the infimum of $E|X' - Y'|$ over all possible joint distributions of $(X', Y')$ such that the marginal distributions of $X'$ and $Y'$ are those of $X$ and $Y$, respectively. However, do show …
Use Theorem 12.5.2 to derive a Central Limit Theorem for the sample mean of an m-dependent stationary process. State your assumptions and compare with Theorem 12.4.1.
Complete the details in Example 12.5.1 to get an explicit bound from Theorem 12.5.2 for $d_W$. What conditions are you assuming?
Finish the proof of Theorem 12.5.2 by showing $\mathrm{Var}\big(\sum_{i=1}^n \sum_{j \in N_i} X_i X_j\big) \le 14 D^3 \sum_{i=1}^n E(|X_i|^4)$. Hint: Use the arithmetic–geometric mean inequality.
Theorem 12.5.1 provides a bound for $d_W(W, Z)$, where $W = n^{-1/2} \sum_{i=1}^n X_i$ and the $X_i$ are independent with mean 0 and variance one. Extend the result so that $\mathrm{Var}(X_i) = \sigma_i^2$ may depend on $i$.
Show that, if $E(X^2) = 1$, then $E|X| \le E(|X|^3)$.
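A sketch of one route, via two applications of the Cauchy–Schwarz inequality:

```latex
% Cauchy-Schwarz twice, using E(X^2) = 1:
E|X| \le \bigl(E X^2\bigr)^{1/2} = 1, \qquad
1 = \bigl(E X^2\bigr)^2 = \Bigl(E\, |X|^{1/2}\, |X|^{3/2}\Bigr)^2
  \le E|X| \cdot E|X|^3 ,
```

hence $E|X|^3 \ge 1/E|X| \ge 1 \ge E|X|$.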
If $Z$ is a real-valued random variable with density bounded by $C$, then show that, for any random variable $W$, $d_K(W, Z) \le \sqrt{2 C\, d_W(W, Z)}$, where $d_K$ is the Kolmogorov–Smirnov (or sup, or uniform) metric between distribution functions, and $d_W$ is the Wasserstein metric.
Investigate the relationships between $d_W$, $d_K$, and $d_{TV}$, as well as the bounded Lipschitz metric introduced in Problem 11.24. Does convergence in one of them imply convergence in any of the others? If not, illustrate by finding counterexamples.
Show that convergence in the Wasserstein metric implies weak convergence; that is, if $d_W(X_n, X) \to 0$, then $X_n \xrightarrow{d} X$. Give a counterexample to show the converse is false. Prove or disprove the following claim: for random variables $X_n$ and $X$ with finite first moments, $d_W(X_n, X) \to 0$ if and only if $X_n$ …
Show that, for $w > 0$, $1 - \Phi(w) \le \min\big(\tfrac{1}{2},\ \tfrac{1}{w\sqrt{2\pi}}\, e^{-w^2/2}\big)$. Show that this inequality implies $\|f_x\|_\infty \le \sqrt{\pi/2}$ and $\|f_x'\|_\infty \le 2$.
Complete the proof of Lemma 12.5.2 by showing that (12.64) and (12.65) are equivalent, and then showing that (12.66) follows.
Complete the proof of the converse in Lemma 12.5.1. Hint: Use Lemma 12.5.2.
If $W \sim N(0, \sigma^2)$ with $\sigma \ne 1$, what is the generalization of the characterization (12.61)?
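The expected answer, assuming (12.61) is Stein's identity $E[W f(W)] = E[f'(W)]$ for $W \sim N(0, 1)$: integration by parts against the $N(0, \sigma^2)$ density gives

```latex
% Integration by parts against the N(0, sigma^2) density:
E\bigl[ W f(W) \bigr] = \sigma^2\, E\bigl[ f'(W) \bigr]
\quad \text{for all absolutely continuous } f \text{ with } E\bigl|f'(W)\bigr| < \infty ,
```

which reduces to (12.61) when $\sigma = 1$.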