Testing Statistical Hypotheses, Volume I, 4th Edition, E.L. Lehmann and Joseph P. Romano - Solutions
Prove Pólya’s Theorem 11.2.9. Hint: First consider the case of distributions on the real line.
Suppose X1,..., Xn are i.i.d. real-valued random variables with c.d.f. F. Assume ∃θ1 < θ2 such that F(θ1) = 1/4, F(θ2) = 3/4, and F is differentiable, with density f taking positive values at θ1 and θ2. Show that the sample interquartile range (defined as the difference between the 0.75
Let X1,..., Xn be i.i.d. normal with mean θ and variance 1. Let X̄n be the usual sample mean and let X̃n be the sample median. Let pn be the probability that X̄n is closer to θ than X̃n is. Determine lim_{n→∞} pn.
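Before the asymptotic argument, a small Monte Carlo sketch can suggest the value of pn numerically (the helper name and the choices n = 51, reps = 10000 are illustrative, not from the text; θ is taken as 0 by translation invariance):

```python
import random
import statistics

def estimate_p_n(n, reps=10000, seed=0):
    """Monte Carlo estimate of p_n = P(|Xbar_n - theta| < |median_n - theta|)
    for i.i.d. N(theta, 1) samples; theta = 0 by translation invariance."""
    rng = random.Random(seed)
    closer = 0
    for _ in range(reps):
        xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
        if abs(statistics.fmean(xs)) < abs(statistics.median(xs)):
            closer += 1
    return closer / reps

# The mean has the smaller asymptotic variance (1 vs. pi/2), so the
# estimate should settle well above 1/2.
print(estimate_p_n(51))
```

The estimate stabilizes strictly between 1/2 and 1, which is consistent with the joint asymptotic normality of mean and median that the limit computation exploits.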
Generalize Theorem 11.2.8 to the case of the pth sample quantile.
Complete the proof of Theorem 11.2.8 by considering n even.
Let X1,..., Xn be i.i.d. with density p0 or p1, and consider testing the null hypothesis H that p0 is true. The MP level-α test rejects when ∏_{i=1}^n r(Xi) ≥ Cn, where r(Xi) = p1(Xi)/p0(Xi), or equivalently when (1/√n) Σ_{i=1}^n [log r(Xi) − E0 log r(Xi)] ≥ kn. (i) Show that, under H, the left side of (11.41)
Suppose Xn,1,..., Xn,n are i.i.d. Bernoulli trials with success probability pn. If pn → p ∈ (0, 1), show that n^{1/2}[X̄n − pn] →d N(0, p(1 − p)). Is the result true even if p is 0 or 1?
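A quick simulation sketch of the claimed limit (function name and the choices n = 400, pn = 0.3 + 1/n are illustrative assumptions, not from the text):

```python
import math
import random

def standardized_means(n, p_n, reps=5000, seed=1):
    """Draw reps copies of n^{1/2}(Xbar_n - p_n) for n Bernoulli(p_n) trials."""
    rng = random.Random(seed)
    return [math.sqrt(n) * (sum(rng.random() < p_n for _ in range(n)) / n - p_n)
            for _ in range(reps)]

# p_n -> p = 0.3, so the spread should approach sqrt(p(1 - p)).
vals = standardized_means(n=400, p_n=0.3 + 1.0 / 400)
sd = (sum(v * v for v in vals) / len(vals)) ** 0.5
print(sd)  # close to sqrt(0.21)
# When p = 0 the limit degenerates: p_n = 0 gives Xbar_n = 0 exactly,
# so the standardized mean is identically 0; this is the boundary case
# the second part of the problem asks about.
```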
Suppose Xk is a noncentral Chi-squared variable with k degrees of freedom and noncentrality parameter δk². (i) Show that (Xk − k)/(2k)^{1/2} →d N(μ, 1) if δk²/(2k)^{1/2} → μ as k → ∞. (ii) If c_{k,1−α} is the 1 − α quantile of the Chi-squared distribution with k degrees of freedom,
In Example 11.2.2, show that Lyapounov’s Condition holds.
Show that Lyapounov’s Central Limit Theorem (Corollary 11.2.1) follows from the Lindeberg Central Limit Theorem (Theorem 11.2.5).
Show that Theorem 11.2.3 follows from Theorem 11.2.2.
Let Xn have characteristic function ζn. Find a counterexample to show that it is not enough to assume ζn(t) converges (pointwise in t) to a function ζ(t) in order to conclude that Xn converges in distribution.
Verify (11.9).
Show that the characteristic function of a sum of independent real-valued random variables is the product of the individual characteristic functions. (The converse is false; counterexamples are given in Romano and Siegel (1986), Examples 4.29–4.30.)
Suppose Xn →d X. Show that E f(Xn) need not converge to E f(X) if f is unbounded and continuous, or if f is bounded but discontinuous.
Prove the equivalence of (i) and (vi) in the Portmanteau Theorem (Theorem 11.2.1).
Show that x = (x1,..., xk) is a continuity point of the distribution FX of X if the boundary of the set of (y1,..., yk) such that yi ≤ xi for all i has probability 0 under the distribution of X. Show by example that it is not sufficient for x to have probability 0 under FX in order for x to be
Let X be N(0, 1) and Y = X. Determine the set of continuity points of the bivariate distribution of (X, Y ).
For a univariate c.d.f. F, show that the set of points of discontinuity is countable.
For each θ ∈ Ω, let fn(θ) be a real-valued sequence. We say fn(θ) converges uniformly (in θ) to f(θ) if sup_{θ∈Ω} |fn(θ) − f(θ)| → 0 as n → ∞. If Ω is a finite set, show that the pointwise convergence fn(θ) → f(θ) for each fixed θ implies uniform convergence. However, show the
The nonexistence of (i) semirelevant subsets in Example 10.4.1 and (ii) relevant subsets in Example 10.4.2 extends to randomized conditioning procedures.
Instead of conditioning the confidence sets θ ∈ S(X) on a set C, consider a randomized procedure which assigns to each point x a probability ψ(x) and makes the confidence statement θ ∈ S(x) with probability ψ(x) when x is observed. (i) The randomized procedure can be represented by a
Suppose X1 and X2 are i.i.d. with P{Xi = θ − 1} = P{Xi = θ + 1} = 1/2. Let C be the confidence set consisting of the single point (X1 + X2)/2 if X1 ≠ X2 and X1 − 1 if X1 = X2. Show that, for all θ, Pθ{θ ∈ C} = 0.75, but Pθ{θ ∈ C | X1 = X2} = 0.5 and Pθ{θ ∈ C | X1 ≠ X2} = 1. [Berger
(i) Under the assumptions of the preceding problem, the uniformly most accurate unbiased (or invariant) confidence intervals for θ at confidence level 1 − α are θ̲ = max(X(1) + d, X(n)) − 1 < θ < min(X(1), X(n) − d) = θ̄, where d is the solution of the equation 2d^n = α if α < 1/2^{n−1},
Let X1,..., Xn be independently distributed according to the uniform distribution U(θ, θ + 1). (i) Uniformly most accurate lower confidence bounds θ̲ for θ at confidence level 1 − α exist and are given by θ̲ = max(X(1) − k, X(n) − 1), where X(1) = min(X1,..., Xn), X(n) = max(X1,..., Xn),
Let X have probability density f (x − θ), and suppose that E|X|
Let X be a random variable with cumulative distribution function F. If E|X| < ∞, then ∫_{−∞}^0 F(x) dx and ∫_0^∞ [1 − F(x)] dx are both finite. [Apply integration by parts to the two integrals.]
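The integration-by-parts hint also yields the familiar tail-integral identity E[X] = ∫_0^∞ [1 − F(x)] dx − ∫_{−∞}^0 F(x) dx. A numerical sketch (helper name and the Exponential(1) test case are illustrative) checks that identity:

```python
import math

def mean_from_cdf(F, lo=-50.0, hi=50.0, steps=100000):
    """Midpoint-rule evaluation of int_0^hi [1 - F(x)] dx - int_lo^0 F(x) dx,
    which equals E[X] whenever E|X| < infinity."""
    h = (hi - lo) / steps
    total = 0.0
    for k in range(steps):
        x = lo + (k + 0.5) * h
        total += ((1.0 - F(x)) if x >= 0 else -F(x)) * h
    return total

# Exponential(1): F(x) = 1 - exp(-x) for x >= 0, with E[X] = 1.
F_exp = lambda x: 1.0 - math.exp(-x) if x >= 0 else 0.0
print(mean_from_cdf(F_exp))  # close to 1
```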
(i) Verify the posterior distribution given x claimed in Example 10.4.1. (ii) Complete the proof of (10.32).
In Example 10.4.1, check directly that the set C = {x : x ≤ −k or x ≥ k} is not a negatively biased semirelevant subset for the confidence intervals (X − c, X + c).
In Example 10.3.3, (i) the problem remains invariant under G but not under G; (ii) the statistic D is ancillary.
Section 10.4
Let V1,..., Vn be independently distributed as N(0, 1), and given V1 = v1,..., Vn = vn, let Xi (i = 1,..., n) be independently distributed as N(θvi, 1). (i) There does not exist a UMP test of H : θ = 0 against K : θ > 0. (ii) There does exist a UMP conditional test of H against K given the
Let X1,..., Xm and Y1,..., Yn be positive, independent random variables distributed with densities f(x/σ) and g(y/τ), respectively. If f and g have monotone likelihood ratios in (x, σ) and (y, τ), respectively, there exists a UMP conditional test of H : τ/σ ≤ Δ0 against τ/σ > Δ0 given
Let the real-valued function f be defined on an open interval. (i) If f is log-convex, it is convex. (ii) If f is strongly unimodal, it is unimodal.
Verify the density (10.16) of Example 10.3.2.
Suppose X = (U, Z), the density of X factors into p_{θ,ϑ}(x) = c(θ, ϑ) gθ(u; z) hϑ(z) k(u, z), and the parameters θ, ϑ are unrelated. To see that these assumptions are not enough to ensure that Z is S-ancillary for θ, consider the joint density C(θ, ϑ) e^{−(u−θ)²/2 − (z−ϑ)²/2} I(u, z),
In the situation of Example 10.2.2, the statistic Z remains S-ancillary when the parameter space is Ω = {(λ, μ) : μ ≤ λ}.
In the situation of Example 10.2.3, X + Y is binomial if and only if Δ = 1.
Assuming the distribution (4.22) of Section 4.9, show that Z is S-ancillary for p = p₊/(p₊ + p₋).
A sample of size n is drawn with replacement from a population consisting of N distinct unknown values{a1,..., aN }. The number of distinct values in the sample is ancillary.
Let X, Y have joint density p(x, y) = 2 f(x) f(y) F(θxy), where f is a known probability density symmetric about 0, and F its cumulative distribution function. Then (i) p(x, y) is a probability density. (ii) X and Y each have marginal density f and are therefore ancillary, but (X, Y) is not. (iii)
Let X be uniformly distributed on (θ, θ + 1), 0 < θ < ∞, let [X] denote the largest integer ≤ X, and let V = X − [X]. (i) The statistic V(X) is uniformly distributed on (0, 1) and is therefore ancillary. (ii) The marginal distribution of [X] is given by [X] = [θ] with probability 1 −
In the preceding problem, suppose the probabilities of the points 1,..., 6 are given by (1 − θ)/6, (1 − 2θ)/6, (1 − 3θ)/6, (1 + θ)/6, (1 + 2θ)/6, (1 + 3θ)/6 respectively. Exhibit two different maximal ancillaries.
Consider n tosses with a biased die, for which the probabilities of 1,..., 6 points are given by (1 − θ)/12, (2 − θ)/12, (3 − θ)/12, (1 + θ)/12, (2 + θ)/12, (3 + θ)/12 respectively, and let Xi be the number of tosses showing i points. (i) Show that the triple Z1 = X1 + X5, Z2 = X2 + X4, Z3 = X3 + X6 is a maximal ancillary;
An experiment with n observations X1,..., Xn is planned, with each Xi distributed as N(θ, 1). However, some of the observations do not materialize (for example, some of the subjects die, move away, or turn out to be unsuitable). Let Ij = 1 or 0 as X j is observed or not, and suppose the Ij are
Let X, Y be independently normally distributed as N(θ, 1), and let V = Y − X and W = Y − X if X + Y > 0, W = X − Y if X + Y ≤ 0. (i) Both V and W are ancillary, but neither is a function of the other. (ii) (V, W) is not ancillary. [Basu (1959).]
In the preceding problem, suppose that the densities of X under E and F are θe^{−θx} and (1/θ)e^{−x/θ} respectively. Compare the UMP conditional and unconditional tests of H : θ = 1 against K : θ > 1.
Section 10.2
With known probabilities p and q perform either E or F, with X distributed as N(θ, 1) under E or N(−θ, 1) under F. For testing H : θ = 0 against θ > 0 there exist a UMP unconditional and a UMP conditional level-α test. These coincide and do not depend on the value of p.
Let X1,..., Xn be independently distributed, each with probability p or q as N(ξ, σ0²) or N(ξ, σ1²). (i) If p is unknown, determine the UMP unbiased test of H : ξ = 0 against K : ξ > 0. (ii) Determine the most powerful test of H against the alternative ξ1 when it is known that p = 1/2, and
The test given by (10.3), (10.8), and (10.9) is most powerful under the stated assumptions.
Under the assumptions of Problem 10.1, determine the most accurate invariant (under the transformation X → −X) confidence sets S(X) with P(ξ ∈ S(X) | E) + P(ξ ∈ S(X) | F) = 2γ. Find examples in which the conditional confidence coefficients γ0 given E and γ1 given F satisfy (i) γ0 < γ1;
Let the experiments E and F consist in observing X : N(ξ, σ0²) and X : N(ξ, σ1²) respectively (σ0 < σ1), and let one of the two experiments be performed, with P(E) = P(F) = 1/2. For testing H : ξ = 0 against ξ = ξ1, determine values σ0, σ1, ξ1, and α such that (i) α0 < α1; (ii)
In the regression model of Problem 7.8, generalize the confidence bands of Example 9.7.3 to the regression surfaces 1. h1(e1,..., es) = Σ_{j=1}^s ej βj; 2. h2(e2,..., es) = β1 + Σ_{j=2}^s ej βj
to the set of all contrasts. [Use the fact that the event |yi − y0| ≤ Δ for i = 1,..., s is equivalent to the event |Σ_{i=0}^s ci yi| ≤ Δ Σ_{i=1}^s |ci| for all (c0,..., cs) satisfying Σ_{i=0}^s ci = 0.] Note. As is pointed out in Problems 9.37(iii) and 9.43, the intervals resulting from the extension of
In generalization of Problem 9.41, show how to extend the Dunnett intervals of
Dunnett’s method. Let X0j (j = 1,..., m) and Xik (i = 1,..., s; k = 1,..., n) represent measurements on a standard and s competing new treatments, and suppose the X’s are independently distributed as N(ξ0, σ²) and N(ξi, σ²) respectively. Generalize Problems 9.40 and 9.42 to the problem of
Construct an example [i.e., choose values n1 = ··· = ns = n, α, and a particular contrast (c1,..., cs)] for which the Tukey confidence intervals (9.150) are shorter than the Scheffé intervals (9.137), and an example in which the situation is reversed.
to the present situation.
1. Let Xij (j = 1,..., n; i = 1,..., s) be independent N(ξi, σ²), σ² unknown. Then the problem of obtaining simultaneous confidence intervals for all differences ξj − ξi is invariant under G0, G2, and the scale changes G3. 2. The only equivariant confidence bounds based on the sufficient
In the preceding problem consider arbitrary contrasts Σ ci ξi with Σ ci = 0. The event |(Xj − Xi) − (ξj − ξi)| ≤ Δ for all i ≠ j (9.149) is equivalent to the event |Σ ci Xi − Σ ci ξi| ≤ (Δ/2) Σ |ci| for all c with Σ ci = 0, (9.150) which therefore also has probability γ. This shows how to extend
Tukey’s T-Method. Let Xi (i = 1,..., r) be independent N(ξi, 1), and consider simultaneous confidence intervals L[(i, j); x] ≤ ξj − ξi ≤ M[(i, j); x] for all i ≠ j. (9.145) The problem of determining such confidence intervals remains invariant under the group G0 of all permutations of
1. In Example 9.7.1, the simultaneous confidence intervals (9.133) reduce to (9.137). 2. What change is needed in the confidence intervals of Example 9.7.1 if the v’s are not required to satisfy (9.136), i.e., if simultaneous confidence intervals are desired for all linear functions Σ vi ξi instead
1. In Example 9.7.2, the set of linear functions Σ wi αi = Σ wi(ξi· − ξ··) for all w can also be represented as the set of functions Σ wi ξi· for all w satisfying Σ wi = 0. 2. The set of linear functions Σ wij γij = Σ wij(ξij· − ξi·· − ξ·j· + ξ···) for all w is equivalent to
1. The confidence intervals L(u; y, S) = Σ ui yi − c(S) are equivariant under G3 if and only if L(u; by, bS) = bL(u; y, S) for all b > 0. 2. The most general confidence sets (9.131) which are equivariant under G1, G2, and G3 are of the form (9.132).
Let Xi (i = 1,..., r) be independent N(ξi, 1). 1. The only simultaneous confidence intervals equivariant under G0 are those given by (9.124). 2. The inequalities (9.124) and (9.126) are equivalent. 3. Compared with the Scheffé intervals (9.113), the intervals (9.126) for Σ uj ξj are shorter when u
1. For the confidence sets (9.114), equivariance under G1 and G2 reduces to (9.115) and (9.116) respectively. 2. For fixed (y1,..., yr), the statements Σ ui yi ∈ A hold for all (u1,..., ur) with Σ ui² = 1 if and only if A contains the interval I(y) = [−√(Σ Yi²), +√(Σ Yi²)]. 3. Show that the
1. A function L satisfies the first equation of (9.106) for all u, x, and orthogonal transformations Q if and only if it depends on u and x only through u′x, x′x, and u′u. 2. A function L is equivariant under G2 if and only if it satisfies (9.108).
The Tukey T-method leads to the simultaneous confidence intervals |(Xj· − Xi·) − (μj − μi)| ≤ C σ̂ / √(sn(n − 1)) for all i, j. (9.144) [The probability of (9.144) is independent of the μ’s and hence equal to 1 − α.]
Section 9.6
Show that the Tukey levels (vi) satisfy (9.95) when s is even but not when s is odd.
Prove Lemma 9.5.3 when s is odd.
In Lemma 9.5.2, show that αs−1 = α is necessary for admissibility.
1. For the validity of Lemma 9.5.1 it is only required that the probability of rejecting homogeneity of any set containing {μ_{i1},..., μ_{iv1}} as a proper subset tends to 1 as the distances (9.89) between the different groups all → ∞, with the analogous condition holding for H2,..., Hr. 2. The
In general, show Cs = C1∗. In the case s = 2, show (9.67).
Section 9.5
Prove part (i) of Theorem 9.4.3.
In general, the optimality results of Section 9.4 require the procedures to be monotone. To see why this is required, consider Theorem 9.4.2(i). Show the procedure E to be inadmissible. Hint: One can always add large negative values of T1 and T2 to the region u_{1,1} without violating the FWER
Under the assumptions of Theorem 9.4.1, suppose there exists another monotone rule E that strongly controls the FWER, and such that Pθ{d^c_{0,0}} ≤ Pθ{e^c_{0,0}} for all θ ∈ ω^c_{0,0}, (9.143) with strict inequality for some θ ∈ ω^c_{0,0}. Argue that the ≤ in (9.143) is an equality, and hence
We have suppressed the dependence of the critical constants C1,...,Cs in the definition of the stepdown procedure D, and now more accurately call them Cs,1,...,Cs,s. Argue that, for fixed s, Cs,j is nonincreasing in j and only depends on s − j.
Prove Lemma 9.4.2.
Suppose (X1,..., Xs) has a multivariate c.d.f. F(·). For θ ∈ R^s, let Fθ(x) = F(x − θ) define a multivariate location family. Show that (9.55) is satisfied for this family. (In particular, it holds if F is any multivariate normal distribution.) Moreover, it holds when any subset of the Xi is
Suppose you apply the BH method based on p-values p̂1,..., p̂s. If each p-value is actually recorded twice (so that you now have 2s p-values), how would the two applications of the BH method compare? Repeat by applying the BY method in each case. Comment on the appropriateness of each
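A small sketch makes the comparison concrete (a generic step-up implementation and an arbitrary p-value list, both illustrative, not from the text); here BH rejects the same two hypotheses in both runs, each counted twice in the doubled list:

```python
def benjamini_hochberg(pvals, q):
    """Indices rejected by the BH step-up procedure at level q."""
    s = len(pvals)
    order = sorted(range(s), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * q / s:
            k = rank
    return set(order[:k])

p = [0.001, 0.008, 0.039, 0.041, 0.20, 0.74]
single = benjamini_hochberg(p, q=0.05)
doubled = benjamini_hochberg(p + p, q=0.05)
print(len(single), len(doubled))  # 2 4
```

Duplicating replaces each sorted rank r by ranks 2r − 1 and 2r out of 2s, and the step-up criterion at rank 2r reduces to the original one at rank r, which is the observation the problem is driving at. The BY variant divides q by the harmonic sum of the number of tests, so it is not invariant in the same way.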
Assume the joint distribution of p-values is PRDS on the set I of true null hypotheses. (i) Show that, for any increasing set D, P{(p̂1,..., p̂s) ∈ D | p̂i ≤ u} (9.142) is nondecreasing in u for any i ∈ I. (ii) More generally, show that E[f(p̂1,..., p̂s) | p̂i ≤ u] is nondecreasing in u
In Example 9.3.1, suppose the multiple testing problem specifies Hi : μi = 0 against H′i : μi > 0, with Σ known. As in the example, assume all components of Σ are nonnegative. Define p-values by p̂i = 1 − Φ(Xi/σi), where σi² = Σ_{i,i} = Var(Xi) is assumed positive. Show that the joint distribution
The problem points to connections between methods that control the FDP in the sense of (9.49) and methods that control its expected value, the FDR. (i) Show, for any random variable X on [0, 1], we have (E(X) − γ)/(1 − γ) ≤ P{X > γ} ≤ E(X)/γ. (ii) Apply the above to X = FDP to show that if a
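Both bounds in (i) are easy to check numerically before proving them (the choice X = U² with U uniform, and the level γ = 0.1, are illustrative assumptions):

```python
import random

# Check (E X - gamma)/(1 - gamma) <= P{X > gamma} <= E(X)/gamma
# for a random variable X on [0, 1]; here X = U^2 with U ~ U(0, 1).
rng = random.Random(2)
xs = [rng.random() ** 2 for _ in range(100000)]
gamma = 0.1
ex = sum(xs) / len(xs)                        # approx E(X) = 1/3
tail = sum(x > gamma for x in xs) / len(xs)   # approx P{X > gamma}
lower = (ex - gamma) / (1 - gamma)
upper = ex / gamma
print(lower <= tail <= upper)  # True
```

The upper bound is Markov's inequality; the lower bound comes from E(X) ≤ γ + (1 − γ) P{X > γ}, which uses only that X ≤ 1.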
If F is the number of false discoveries of some multiple testing procedure, then show the per-family error rate E(F) satisfies the crude inequalities P{F ≥ 1} ≤ E(F) ≤ s P{F ≥ 1}, where s is the number of hypotheses under test. Hence, if a method controls E(F) at α, then it also controls
The closure method starts with a family of tests of HK to produce a multiple decision rule. Conversely, given any multiple testing decision rule (not necessarily obtained by the closure method), one can use it to obtain tests of HK for any intersection hypothesis. More specifically, for testing
For testing H1,..., Hs based on p-values p̂1,..., p̂s, suppose the closure method is applied and large values of Tk = Tk(p̂_{i1},..., p̂_{ik}) are used to test the intersection hypothesis HK, where K = {i1,..., ik}. Assume Tk is symmetric in its arguments, and the test rejects HK when Tk exceeds
Verify that Hommel’s method as stated in Example 9.2.5 can be obtained by the closure method when using Simes’ tests for the intersection hypotheses.
As in Procedure 9.1.1, suppose that a test of the individual hypothesis Hj is based on a test statistic Tn,j, with large values indicating evidence against the Hj. Assume ∩_{j=1}^s ωj is not empty. For any subset K of {1,..., s}, let c_{n,K}(α, P) denote an α-quantile of the distribution of max_{j∈K}
Show that the Holm method is a special case of the closure method by using the Bonferroni method to test intersection hypotheses.
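The equivalence can be confirmed by brute force for small s (illustrative helper names; the closure loop enumerates all 2^s − 1 subsets, so it is exponential but fine here):

```python
from itertools import combinations

def holm(pvals, alpha):
    """Indices rejected by Holm's step-down procedure."""
    s = len(pvals)
    order = sorted(range(s), key=lambda i: pvals[i])
    rejected = set()
    for step, i in enumerate(order):
        if pvals[i] <= alpha / (s - step):
            rejected.add(i)
        else:
            break
    return rejected

def closure_bonferroni(pvals, alpha):
    """Closure method: reject H_i iff every intersection hypothesis H_K with
    i in K is rejected by its Bonferroni test: min_{j in K} p_j <= alpha/|K|."""
    s = len(pvals)
    rejected = set()
    for i in range(s):
        ok = True
        for k in range(1, s + 1):
            for K in combinations(range(s), k):
                if i in K and min(pvals[j] for j in K) > alpha / k:
                    ok = False
                    break
            if not ok:
                break
        if ok:
            rejected.add(i)
    return rejected

p = [0.004, 0.011, 0.031, 0.045]
print(holm(p, 0.05) == closure_bonferroni(p, 0.05))  # True
```

The proof behind the match: the binding Bonferroni constraint for Hi is the intersection over the hypotheses whose p-values are at least as large as the ones already rejected, which reproduces Holm's thresholds α/s, α/(s − 1), ...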
Consider testing H1,..., Hs, with Hi specifying θi = 0 against two-sided alternatives. In order to control the mixed directional familywise error rate in (9.15), a simple device is to consider the 2s one-sided hypotheses defined as follows. For i = 1,..., s, let Hi be the null hypothesis θi ≤ 0
Show that a stepdown version of Tukey’s method and Duncan’s method controls the FWER.
In Example 9.1.7, verify that the stepdown procedure based on the maximum of Xj/√σ_{j,j} improves upon the Holm procedure. By Theorem 9.1.3, the procedure has FWER ≤ α. Compare the two procedures in the case σ_{i,i} = 1, σ_{i,j} = ρ if i ≠ j; consider ρ = 0 and ρ → 1.
Under the assumptions of Theorem 9.1.2 and independence of the p-values, the critical values α/(s − i + 1) can be increased to 1 − (1 − α)^{1/(s−i+1)}. For any i, calculate the limiting value of the ratio of these critical values, as s → ∞.
Show that, under the assumptions of Theorem 9.1.2, it is not possible to increase any of the critical values αi = α/(s − i + 1) in the Holm procedure(9.18) without violating the FWER.
Show that Duncan’s method controls the FWER and the mixed directional familywise error rate at level α. Find an expression for the adjusted p-values for Duncan’s method.
Show that (9.14) implies (9.12). Investigate under what conditions the probability of a Type 3 error can be bounded by α/2.
(i) Under the assumptions of Theorem 9.1.1, suppose also that the p-values are mutually independent. Show that the Šidák procedure, which rejects any Hi for which p̂i < c(α, s) = 1 − (1 − α)^{1/s}, controls the FWER at level α. (ii) Compare α/s with c(α, s) and show lim_{s→∞} c(α, s)/(α/s) = −log(1 − α)/α.
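A numerical sketch of the comparison in (ii) (helper name and the grid of s values are illustrative); since 1 − (1 − α)^{1/s} ≈ −log(1 − α)/s for large s, the ratio to α/s should tend to −log(1 − α)/α, which exceeds 1:

```python
import math

def sidak_cutoff(alpha, s):
    """Per-hypothesis cutoff c(alpha, s) = 1 - (1 - alpha)^(1/s)."""
    return 1.0 - (1.0 - alpha) ** (1.0 / s)

alpha = 0.05
ratios = [sidak_cutoff(alpha, s) / (alpha / s) for s in (10, 100, 10000)]
limit = -math.log(1.0 - alpha) / alpha  # the claimed limit of the ratio
print(ratios, limit)
```

So the Šidák cutoff is always slightly more generous than Bonferroni's α/s, but only by the bounded factor −log(1 − α)/α in the limit.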
(i) Generalize Theorem 9.1.1 to the weighted Bonferroni method. Hint: Part (i) directly generalizes. To show (ii), let J = i with probability αwi and J = 0 with probability 1 − α. Let U ∼ U(0, 1) and let p̂i = αwiU if J = i; otherwise, let p̂i = (1 − αwi)U + αwi. By conditioning on K = i
Show that the Bonferroni procedure, while generally conservative, can have FWER = α by exhibiting a joint distribution for (p̂1,..., p̂s) satisfying (9.5) such that P{min_i p̂i ≤ α/s} = α.
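One construction that achieves equality (a circular shift of a single uniform draw; not necessarily the one the text intends, and assuming (9.5) asks for uniform marginals): set p̂i = (U + i/s) mod 1. Each p̂i is marginally U(0, 1), yet the s events {p̂i ≤ α/s} correspond to disjoint intervals for U of total length α. A simulation sketch:

```python
import random

def shifted_pvalues(u, s):
    """p_i = (u + i/s) mod 1: each marginally U(0,1) from one U(0,1) draw;
    the s events {p_i <= alpha/s} are disjoint intervals in u."""
    return [(u + i / s) % 1.0 for i in range(s)]

rng = random.Random(3)
s, alpha, reps = 5, 0.05, 100000
hits = sum(min(shifted_pvalues(rng.random(), s)) <= alpha / s
           for _ in range(reps))
print(hits / reps)  # near alpha = 0.05, the exact FWER here
```

Disjointness makes the union bound an equality, which is exactly the sense in which Bonferroni is tight.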
Provide the missing details in Example 8.7.4. What happens in the case a > 2c1−α?
Find the maximin monotone level-α test in Example 8.7.3 for general Δ. Also allow the region ω(Δ) to be generalized to have the form {θ : θi ≥ Δi for some i}, where the Δi may vary with i.