Testing Statistical Hypotheses 3rd Edition Erich L. Lehmann, Joseph P. Romano - Solutions
Dunnett's method. Let X_{0j} (j = 1,...,m) and X_{ik} (i = 1,...,s; k = 1,...,n) represent measurements on a standard and on s competing new treatments, and suppose the X's are independently distributed as N(ξ_0, σ²) and N(ξ_i, σ²) respectively. Generalize Problems 9.29 and 9.31 to the problem of …
Construct an example [i.e., choose values n_1 = ··· = n_s = n and a particular contrast (c_1,...,c_s)] for which the Tukey confidence intervals (9.108) are shorter than the Scheffé intervals (9.96), and an example in which the situation is reversed.
… to the present situation.
(i) Let X_{ij} (j = 1,...,n; i = 1,...,s) be independent N(ξ_i, σ²), σ² unknown. Then the problem of obtaining simultaneous confidence intervals for all differences ξ_j − ξ_i is invariant under G_0, G_2, and the scale changes G_3.
(ii) The only equivariant confidence bounds based on the sufficient …
In the preceding problem consider arbitrary contrasts Σ c_i ξ_i with Σ c_i = 0. The event
|(X_j − X_i) − (ξ_j − ξ_i)| ≤ ∆ for all i ≠ j (9.107)
is equivalent to the event
|Σ c_i X_i − Σ c_i ξ_i| ≤ (∆/2) Σ |c_i| for all c with Σ c_i = 0, (9.108)
which therefore also has probability γ. This shows how to …
Tukey's T-Method. Let X_i (i = 1,...,r) be independent N(ξ_i, 1), and consider simultaneous confidence intervals
L[(i, j); x] ≤ ξ_j − ξ_i ≤ M[(i, j); x] for all i ≠ j. (9.103)
The problem of determining such confidence intervals remains invariant under the group G_0 of all permutations of the …
(i) In Example 9.5.1, the simultaneous confidence intervals (9.92) reduce to (9.96).
(ii) What change is needed in the confidence intervals of Example 9.5.1 if the v's are not required to satisfy (9.95), i.e., if simultaneous confidence intervals are desired for all linear functions Σ v_i ξ_i instead …
(i) In Example 9.5.2, the set of linear functions Σ w_i α_i = Σ w_i (ξ_{i·} − ξ_{··}) for all w can also be represented as the set of functions Σ w_i ξ_{i·} for all w satisfying Σ w_i = 0.
(ii) The set of linear functions Σ w_{ij} γ_{ij} = Σ w_{ij} (ξ_{ij·} − ξ_{i··} − ξ_{·j·} + ξ_{···}) for all w is equivalent to …
(i) The confidence intervals L(u; y, S) = Σ u_i y_i − c(S) are equivariant under G_3 if and only if L(u; by, bS) = bL(u; y, S) for all b > 0.
(ii) The most general confidence sets (9.90) which are equivariant under G_1, G_2, and G_3 are of the form (9.91).
Let X_i (i = 1,...,r) be independent N(ξ_i, 1).
(i) The only simultaneous confidence intervals equivariant under G_0 are those given by (9.83).
(ii) The inequalities (9.83) and (9.85) are equivalent.
(iii) Compared with the Scheffé intervals (9.72), the intervals (9.85) for Σ u_j ξ_j are shorter when …
(i) For the confidence sets (9.73), equivariance under G_1 and G_2 reduces to (9.74) and (9.75) respectively.
(ii) For fixed (y_1,...,y_r), the statements Σ u_i y_i ∈ A hold for all (u_1,...,u_r) with Σ u_i² = 1 if and only if A contains the interval I(y) = [−√(Σ y_i²), +√(Σ y_i²)].
(iii) Show that the statement …
(i) A function L satisfies the first equation of (9.65) for all u, x, and orthogonal transformations Q if and only if it depends on u and x only through u′x, x′x, and u′u.
(ii) A function L is equivariant under G_2 if and only if it satisfies (9.67).
The Tukey T-method leads to the simultaneous confidence intervals
|(X_{j·} − X_{i·}) − (µ_j − µ_i)| ≤ C S / √(sn(n − 1)) for all i, j. (9.102)
[The probability of (9.102) is independent of the µ's and hence equal to 1 − α_s.]
Section 9.4
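A minimal numerical sketch of the half-width of the T-method intervals (9.102) above, under the assumptions that there are s groups of n observations each, that S² is the within-group sum of squares ΣΣ(X_{ik} − X_{i·})² with s(n − 1) degrees of freedom, and that C is the 1 − α quantile of the studentized range with parameters (s, s(n − 1)); the data and constants here are made up purely for illustration, and a recent SciPy is assumed for studentized_range:

import numpy as np
from scipy.stats import studentized_range

rng = np.random.default_rng(0)
s, n, alpha = 4, 10, 0.05
x = rng.normal(size=(s, n))                    # s groups of n observations (illustrative data)
group_means = x.mean(axis=1)
S2 = ((x - group_means[:, None]) ** 2).sum()   # within-group sum of squares, df = s*(n-1)
C = studentized_range.ppf(1 - alpha, s, s * (n - 1))
half_width = C * np.sqrt(S2) / np.sqrt(s * n * (n - 1))
# simultaneous intervals: (mean_j - mean_i) +/- half_width for every pair i, j
print(half_width)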
Show that the Tukey levels (vi) satisfy (9.54) when s is even but not when s is odd.
Prove Lemma 9.3.3 when s is odd.
In Lemma 9.3.2, show that αs−1 = α is necessary for admissibility.
(i) For the validity of Lemma 9.3.1 it is only required that the probability of rejecting homogeneity of any set containing {µ_{i_1},...,µ_{i_{v_1}}} as a proper subset tends to 1 as the distances (9.48) between the different groups all tend to ∞, with the analogous condition holding for H_2,...,H_r.
(ii) The …
Show that
Σ_{i=1}^{r+1} [Y_i − (Y_1 + ··· + Y_{r+1})/(r + 1)]² − Σ_{i=1}^{r} [Y_i − (Y_1 + ··· + Y_r)/r]² ≥ 0.
In general, show C_s = C*_1. In the case s = 2, show (9.27).
Section 9.3
Prove part (i) of Theorem 9.2.3.
In general, the optimality results of Section 9.2 require the procedures to be monotone. To see why this is required, consider 9.2.2(i). Show the procedure E to be inadmissible. Hint: one can always add large negative values of T_1 and T_2 to the region u_{1,1} without violating the FWER.
Under the assumptions of Theorem 9.2.1, suppose there exists another monotone rule E that strongly controls the FWER, and such that
P_θ{d^c_{0,0}} ≤ P_θ{e^c_{0,0}} for all θ ∈ ω^c_{0,0}, (9.101)
with strict inequality for some θ ∈ ω^c_{0,0}. Argue that the ≤ in (9.101) is an equality, and hence …
We have suppressed the dependence on s of the critical constants C_1,...,C_s in the definition of the stepdown procedure D, and now more accurately call them C_{s,1},...,C_{s,s}. Argue that, for fixed s, C_{s,j} is nonincreasing in j and only depends on s − j.
Prove Lemma 9.2.2.
Suppose (X_1,...,X_k)ᵀ has a multivariate c.d.f. F(·). For θ ∈ R^k, let F_θ(x) = F(x − θ) define a multivariate location family. Show that (9.15) is satisfied for this family. (In particular, it holds if F is any multivariate normal distribution.)
In general, show that FDR ≤ FWER, and that equality holds when all hypotheses are true. Therefore, control of the FWER at level α implies control of the FDR.
Section 9.2
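A hedged sketch of the reasoning behind the FDR ≤ FWER claim above, using the usual notation (not spelled out in the problem) of V for the number of false rejections and R for the total number of rejections: since V ≤ R, one has FDR = E[V / max(R, 1)] ≤ E[1{V ≥ 1}] = P(V ≥ 1) = FWER; and when every hypothesis is true, V = R, so V / max(R, 1) = 1{V ≥ 1} and the two quantities coincide.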
Rather than finding multiple tests that control the FWER, consider the k-FWER, the probability of rejecting k or more true hypotheses (that is, of making k or more false rejections). For a given k, if there are s hypotheses, consider the procedure that rejects any hypothesis whose p-value is ≤ kα/s. Show that the resulting procedure controls the k-FWER at level α.
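A small simulation sketch of this generalized Bonferroni rule, under the simplifying assumption that all s hypotheses are true and the p-values are independent Uniform(0,1) (the problem itself makes no independence assumption); the estimated k-FWER should come out at or below α:

import numpy as np

rng = np.random.default_rng(1)
s, k, alpha, reps = 20, 3, 0.05, 20000
false_rejections = (rng.uniform(size=(reps, s)) <= k * alpha / s).sum(axis=1)
k_fwer = (false_rejections >= k).mean()   # fraction of runs with k or more false rejections
print(k_fwer)                             # should be <= alpha, up to Monte Carlo error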
As in Procedure 9.1.1, suppose that a test of the individual hypothesis H_j is based on a test statistic T_{n,j}, with large values indicating evidence against H_j. Assume ∩_{j=1}^{s} ω_j is not empty. For any subset K of {1,...,s}, let c_{n,K}(α, P) denote an α-quantile of the distribution of max_{j∈K} …
Suppose H_i specifies that the unknown probability distribution P belongs to a subset ω_i of the parameter space, for i = 1,...,s. For any K ⊂ {1,...,s}, let H_K be the intersection hypothesis P ∈ ∩_{j∈K} ω_j. Suppose φ_K is level α for testing H_K. Consider the multiple testing procedure that rejects H_i if …
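The construction being set up here is the closure method: H_i is rejected when every intersection hypothesis H_K with i ∈ K is rejected by its own level-α test φ_K. A minimal sketch, assuming for illustration only that each φ_K is a Bonferroni intersection test rejecting H_K when min_{j∈K} p_j ≤ α/|K| (the problem itself leaves φ_K arbitrary):

from itertools import combinations

def closure_reject(pvalues, alpha):
    """Indices i rejected by the closure method with Bonferroni intersection tests."""
    s = len(pvalues)
    rejected = []
    for i in range(s):
        # H_i is rejected iff every subset K containing i has min p-value <= alpha/|K|
        ok = all(
            min(pvalues[j] for j in K) <= alpha / len(K)
            for r in range(1, s + 1)
            for K in combinations(range(s), r)
            if i in K
        )
        if ok:
            rejected.append(i)
    return rejected

print(closure_reject([0.001, 0.02, 0.3], alpha=0.05))   # rejects the first two hypotheses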
In Example 9.1.4, verify that the stepdown procedure based on the maximum of X_j/√σ_{j,j} improves upon the Holm procedure. By Theorem 9.1.3, the procedure has FWER ≤ α. Compare the two procedures in the case σ_{i,i} = 1, σ_{i,j} = ρ if i ≠ j; consider ρ = 0 and ρ → ±1.
Under the assumptions of Theorem 9.1.2 and independence of the p-values, the critical values α/(s − i + 1) can be increased to 1 − (1 − α)^{1/(s−i+1)}. For any i, calculate the limiting value of the ratio of these critical values, as s → ∞.
Show that, under the assumptions of Theorem 9.1.2, it is not possible to increase any of the critical values αi = α/(s − i + 1) in the Holm procedure (9.6) without violating the FWER.
(i) Under the assumptions of Theorem 9.1.1, suppose also that the p-values are mutually independent. Then the procedure which rejects any H_i for which p̂_i < c(α, s) = 1 − (1 − α)^{1/s} controls the FWER.
(ii) Compare α/s with c(α, s) and show lim_{s→∞} c(α, s)/(α/s) = −log(1 − α)/α. For …
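A quick numerical check of part (ii), using nothing beyond the formulas in the problem: it prints the cutoff 1 − (1 − α)^{1/s} available under independence, the Bonferroni cutoff α/s, and their ratio, which approaches −log(1 − α)/α as s grows:

import numpy as np

alpha = 0.05
for s in (5, 50, 500, 5000):
    c = 1 - (1 - alpha) ** (1 / s)   # cutoff under independence
    bonf = alpha / s                 # Bonferroni cutoff
    print(s, c, bonf, c / bonf)
print("limit:", -np.log(1 - alpha) / alpha)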
Show that the Bonferroni procedure, while generally conservative, can have FWER = α, by exhibiting a joint distribution for (p̂_1,...,p̂_s) satisfying (9.4) such that P{min_i p̂_i ≤ α/s} = α.
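One standard construction, offered here as an illustration rather than necessarily the one the authors intend: take a single U ~ Uniform(0,1) and set p̂_i = (U + (i − 1)/s) mod 1. Each p̂_i is then Uniform(0,1), so (9.4) holds, yet the events {p̂_i ≤ α/s} occupy disjoint intervals of U of length α/s, so P{min_i p̂_i ≤ α/s} = s · α/s = α. A small simulation check:

import numpy as np

rng = np.random.default_rng(2)
s, alpha, reps = 10, 0.05, 200000
u = rng.uniform(size=reps)
phat = (u[:, None] + np.arange(s) / s) % 1.0   # p-hat_i = (U + (i-1)/s) mod 1, each Uniform(0,1)
fwer = (phat.min(axis=1) <= alpha / s).mean()
print(fwer)   # close to alpha = 0.05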
Show that the UMP invariant test of … is most stringent.
Let (Z_1,...,Z_N) = (X_1,...,X_m, Y_1,...,Y_n) be distributed according to the joint density (5.55), and consider the problem of testing H : η = ξ against the alternatives that the X's and Y's are independently normally distributed with common variance σ² and means η ≠ ξ. Then the permutation test … [with each of the sets Ω_∆ consisting of two points (ξ_1, η_1, σ), (ξ_2, η_2, σ) such that
ξ_1 = ζ − nδ/(m + n), η_1 = ζ + mδ/(m + n); ξ_2 = ζ + nδ/(m + n), η_2 = ζ − mδ/(m + n)
for some ζ and δ.]
Let {Ω_∆} be a class of mutually exclusive sets of alternatives such that the envelope power function is constant over each Ω_∆ and ∪Ω_∆ = Ω − Ω_H, and let ϕ_∆ maximize the minimum power over Ω_∆. If ϕ_∆ = ϕ is independent of ∆, then ϕ is most stringent for testing θ …
Existence of most stringent tests. Under the assumptions of … there exists a most stringent test for testing θ ∈ Ω_H against θ ∈ Ω − Ω_H.
Suppose X_1,...,X_k are independent, with X_i ∼ N(θ_i, 1). Consider testing the null hypothesis θ_1 = ··· = θ_k = 0 against max_i |θ_i| ≥ δ, for some δ > 0. Find a maximin level-α test as explicitly as possible. Compare this test with the maximin test if the alternative parameter space were …
Suppose X has the multivariate normal distribution in R^k with unknown mean vector h and known positive definite covariance matrix C⁻¹. Consider testing h = 0 versus |C^{1/2}h| ≥ b for some b > 0, where |·| denotes the Euclidean norm.
(i) Show the test that rejects when |C^{1/2}X|² > c_{k,1−α} is …
Suppose that the problem of testing θ ∈ Ω_H against θ ∈ Ω_K remains invariant under G, that there exists a UMP almost invariant test ϕ_0 with respect to G, and that the assumptions of Theorem 8.5.1 hold. Then ϕ_0 maximizes inf_{Ω_K} [w(θ) E_θ ϕ(X) + u(θ)] for any weight functions w(θ) ≥ …
Let X = (X_1,...,X_p) and Y = (Y_1,...,Y_p) be independently distributed according to p-variate normal distributions with zero means and covariance matrices E(X_i X_j) = σ_{ij} and E(Y_i Y_j) = ∆σ_{ij}.
(i) The problem of testing H : ∆ ≤ ∆_0 remains invariant under the group G of transformations X* = …
Suppose in Problem 8.25(i) the variance σ² is unknown and that the data consist of X_1,...,X_n together with an independent random variable S² for which S²/σ² has a χ²-distribution. If K is replaced by Σ θ_i²/σ² = r², then
(i) the confidence sets Σ (θ_i − X_i)²/S² ≤ C are uniformly most …
Let X_1, ..., X_n be independent normal with means θ_1, ..., θ_n and variance 1.
(i) Apply the results of the preceding problem to the testing of H : θ_1 = ··· = θ_n = 0 against K : Σ θ_i² = r², for any fixed r > 0.
(ii) Show that the results of (i) remain valid if H and K are replaced by H : Σ θ_i² …
To generalize the results of the preceding problem to the testing of H : f vs. K : {f_θ, θ ∈ ω}, assume:
(i) There exists a group G that leaves H and K invariant.
(ii) Ḡ is transitive over ω.
(iii) There exists a probability distribution Q over G which is right-invariant in the sense of Section …
For testing H : f_0 against K : {f_1,...,f_s}, suppose there exists a finite group G = {g_1,...,g_N} which leaves H and K invariant and which is transitive in the sense that, given f_j, f_{j′} (1 ≤ j, j′ ≤ s), there exists g ∈ G such that ḡ f_j = f_{j′}. In generalization of Problems 8.21, 8.22, determine a …
The UMP invariant test φ_0 of Problem 8.21
(i) maximizes the minimum power over K;
(ii) is admissible.
(iii) For testing the hypothesis H of … against the alternatives K′ = {K_1,...,K_n, K′_1,...,K′_n}, where under K′_i : ξ_j = 0 for all j ≠ i, ξ_i = −ξ, determine the UMP test under a suitable group G, and show that it is both maximin and invariant.
[(ii): Suppose φ is uniformly at least as powerful as φ_0, and more powerful for at least …]
Let X_1, ..., X_n be independent normal variables with variance 1 and means ξ_1, ..., ξ_n, and consider the problem of testing H : ξ_1 = ··· = ξ_n = 0 against the alternatives K = {K_1,...,K_n}, where K_i : ξ_j = 0 for j ≠ i, ξ_i = ξ (known and positive). Show that the problem remains invariant under …
(i) In the preceding problem determine the maximin test if ω is replaced by Σ a_i µ_i ≥ d, where the a's are given positive constants.
(ii) Solve part (i) with Var(X_i) = 1 replaced by Var(X_i) = σ_i² (known).
[(i): Determine the point (µ*_1,...,µ*_n) in ω for which the MP test of H against …]
Let X_1, ..., X_n be independently normally distributed with means E(X_i) = µ_i and variance 1. The test of H : µ_1 = ··· = µ_n = 0 that maximizes the minimum power over ω : Σ µ_i ≥ d rejects when Σ X_i ≥ C.
[If the least favorable distribution assigns probability 1 to a single point, invariance …]
Write out a formal proof of the maximin property outlined in the last paragraph of Section 8.3.
Section 8.4
Determine whether (8.21) remains the maximin test if in the model (8.20) Gi is replaced by Gij .
Evaluate the test (8.21) explicitly for the case that P_i is the normal distribution with mean ξ_i and known variance σ², and when ε_0 = ε_1.
Show that if P_0 ≠ P_1 and ε_0, ε_1 are sufficiently small, then Q_0 ≠ Q_1.
Prove the formula (8.15).
Show that there exists a unique constant b for which q_0 defined by (8.11) is a probability density with respect to µ, that the resulting q_0 belongs to P_0, and that b → ∞ as ε_0 → 0.
If (8.13) holds, show that q1 defined by (8.11) belongs to P1.
Double-exponential distribution. Let X_1, ..., X_n be a sample from the double-exponential distribution with density (1/2)e^{−|x−θ|}. The LMP test for testing θ ≤ 0 against θ > 0 is the sign test, provided the level is of the form
α = (1/2^n) Σ_{k=0}^{m} C(n, k),
so that the level-α sign test is …
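A short sketch that lists the attainable levels α = 2^{-n} Σ_{k=0}^{m} C(n, k) and picks the largest cutoff m whose level does not exceed a target; the values n = 12 and target 0.05 are just for illustration:

from math import comb

n, target = 12, 0.05
levels = []
cum = 0
for m in range(n + 1):
    cum += comb(n, m)
    levels.append((m, cum / 2 ** n))       # attainable level for sign-test cutoff m
usable = [(m, a) for m, a in levels if a <= target]
print(usable[-1] if usable else "no non-randomized sign test at this level")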
(i) Let X have the binomial distribution b(p, n), and consider testing H : p = p_0 at level α against the alternatives Ω_K : p/q ≤ (1/2)(p_0/q_0) or p/q ≥ 2 p_0/q_0. For α = .05 determine the smallest sample size for which there exists a test with power ≥ .8 against Ω_K if p_0 = .1, .2, .3, .4, .5.
(ii) Let …
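A rough computational sketch of part (i), assuming for simplicity a non-randomized equal-tailed exact binomial test (so its attained level is ≤ α, and the sample sizes it reports may differ slightly from those based on an exact level-α or UMPU test); power is evaluated at the two boundary alternatives obtained by solving p/q = (1/2)p_0/q_0 and p/q = 2 p_0/q_0 for p:

from scipy.stats import binom

def boundary_ps(p0):
    q0 = 1 - p0
    lo_odds, hi_odds = 0.5 * p0 / q0, 2.0 * p0 / q0
    return lo_odds / (1 + lo_odds), hi_odds / (1 + hi_odds)   # odds -> probability

def smallest_n(p0, alpha=0.05, power=0.8, n_max=2000):
    p_lo, p_hi = boundary_ps(p0)
    for n in range(2, n_max):
        # equal-tailed rejection region under H: p = p0
        lower = binom.ppf(alpha / 2, n, p0) - 1        # reject if X <= lower
        upper = binom.isf(alpha / 2, n, p0) + 1        # reject if X >= upper
        def pow_at(p):
            return binom.cdf(lower, n, p) + binom.sf(upper - 1, n, p)
        if min(pow_at(p_lo), pow_at(p_hi)) >= power:
            return n
    return None

for p0 in (0.1, 0.2, 0.3, 0.4, 0.5):
    print(p0, smallest_n(p0))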
Let x = (x_1,...,x_n), and let g_θ(x, ξ) be a family of probability densities depending on θ = (θ_1,...,θ_r) and the real parameter ξ, and jointly measurable in x and ξ. For each θ, let h_θ(ξ) be a probability density with respect to a σ-finite measure ν such that p_θ(x) = ∫ g_θ(x, ξ) h_θ(ξ) dν(ξ) …
Let fθ(x) = θg(x) + (1 − θ)h(x) with 0 ≤ θ ≤ 1. Then fθ(x)satisfies the assumptions of Lemma 8.2.1 provided g(x)/h(x) is a nondecreasing function of x.
Let Z_1,...,Z_n be identically independently distributed according to a continuous distribution D, of which it is assumed only that it is symmetric about some (unknown) point. For testing the hypothesis H : D(0) = 1/2, the sign test maximizes the minimum power against the alternatives K : D(0) ≤ …
Let the distribution of X depend on the parameters (θ, ϑ) = (θ_1,...,θ_r, ϑ_1,...,ϑ_s). A test of H : θ = θ_0 is locally strictly unbiased if for each ϑ, (a) β_ϕ(θ_0, ϑ) = α, (b) there exists a θ-neighborhood of θ_0 in which β_ϕ(θ, ϑ) > α for θ ≠ θ_0.
(i) Suppose that the first and second …
The following two examples show that the assumption of a finite sample space is needed in Problem 8.4.
(i) Let X_1, ..., X_n be i.i.d. according to a normal distribution N(σ, σ²) and test H : σ = σ_0 against K : σ > σ_0.
(ii) Let X and Y be independent Poisson variables with E(X) = λ and E(Y) = λ + …
Locally uniformly most powerful tests. If the sample space is finite and independent of θ, the test ϕ0 of Problem 8.2(i) is not only LMP but also locally uniformly most powerful (LUMP) in the sense that there exists a value∆ > 0 such that ϕ0 maximizes βϕ(θ) for all θ with 0 < θ − θ0 <
A level-α test ϕ0 is locally unbiased (loc. unb.) if there exists∆0 > 0 such that βϕ0 (θ) ≥ α for all θ with 0 < d(θ) < ∆0; it is LMP loc. unb. if it is loc. unb. and if, given any other loc. unb. level-α test ϕ, there exists ∆ such that (8.38) holds. Suppose that θ is
Locally most powerful tests. Let d be a measure of the distance of an alternative θ from a given hypothesis H. A level-α test ϕ_0 is said to be locally most powerful (LMP) if, given any other level-α test ϕ, there exists ∆ such that
β_{ϕ_0}(θ) ≥ β_ϕ(θ) for all θ with 0 < d(θ) < ∆.
Existence of maximin tests. Let (X, A) be a Euclidean sample space, and let the distributions P_θ, θ ∈ Ω, be dominated by a σ-finite measure over (X, A). For any mutually exclusive subsets Ω_H, Ω_K of Ω there exists a level-α test maximizing (8.2).
[Let β = sup[inf_{Ω_K} E_θ ϕ(X)], where …]
Suppose (X1,...,Xp) have the multivariate normal density(7.51), so that E(Xi) = ξi and A−1 is the known positive definite covariance matrix. The vector of means ξ = (ξ1,...,ξp) is known to lie in a given s-dimensional linear space ΠΩ with s ≤ p; the hypothesis to be tested is that ξ
Bayes character and admissibility of Hotelling’s T 2.(i) Let (Xα1,...,Xαp), α = 1, . . . , n, be a sample from a p-variate normal distribution with unknown mean ξ = (ξ1,...,ξp) and covariance matrixΣ = A−1, and with p ≤ n − 1. Then the one-sample T 2-test of H : ξ = 0 against K : ξ
Extend the one-sample problem to the two-sample problem for testing whether two multivariate normal distributions with common unknown covariance matrix have the same mean vectors.
For testing that a multivariate mean vector ξ is zero in the case where Σ is known, derive a UMPI test.
The confidence ellipsoids (7.59) for (ξ1,...,ξp) are equivariant under the group of Section 7.9.
Verify that the density of W is given by (7.55).
Show that the statistic W given in (7.55) is maximal invariant. [Hint: If (X̄, S) and (Ȳ, T) are such that X̄ᵀS⁻¹X̄ = ȲᵀT⁻¹Ȳ, then a transformation C that transforms one to the other is given by C = Ȳ(X̄ᵀS⁻¹X̄)⁻¹X̄ᵀS⁻¹.]
If n ≤ p, the matrix S with (i, j) component Si,j defined in(7.53) is singular. If n>p, it is nonsingular with probability 1. If n ≤ p, the test φ ≡ α is the only test that is invariant under the group of nonsingular linear transformations.
Section 7.9
Let (X_{1j1},...,X_{1jn}; X_{2j1},...,X_{2jn}; ... ; X_{aj1},...,X_{ajn}), j = 1,...,b, be a sample from an an-variate normal distribution. Let E(X_{ijk}) = ξ_i, and denote by Σ_{ii′} the matrix of covariances of (X_{ij1},...,X_{ijn}) with (X_{i′j1},...,X_{i′jn}). Suppose that for all i, the diagonal elements of Σ_{ii} are = τ² and the … for suitable values of ρ_1 and ρ_2.
Among all tests that are both unbiased and invariant under suitable groups under the assumptions of Problem 7.35, there exist UMP tests of
(i) H_1 : α_1 = ··· = α_a = 0;
(ii) H_2 : σ_B²/(nσ_C² + σ²) ≤ C;
(iii) H_3 : σ_C²/σ² ≤ C.
Note. The independence assumptions of Problems 7.35 and 7.36 …
Formal analogy with the model of … suggests the mixed model X_{ijk} = µ + α_i + B_j + C_{ij} + U_{ijk} with the B's, C's, and U's as in Problem 7.34. Reduce this model to a canonical form involving X_{···} and the sums of squares
Σ (X_{i··} − X_{···} − α_i)² / (nσ_C² + σ²), Σ (X_{·j·} − X_{···})² / (anσ_B² + nσ_C² + σ²), …
Permitting interactions in the model of … leads to the model X_{ijk} = µ + A_i + B_j + C_{ij} + U_{ijk} (i = 1,...,a; j = 1,...,b; k = 1,...,n), where the A's, B's, C's, and U's are independent normal with mean zero and variances σ_A², σ_B², σ_C², and σ².
(i) Give an example of a situation in which such a model might be appropriate.
(ii) …
Under the assumptions of the preceding problem, determine the UMP invariant test (with respect to a suitable G) of H : ξ_1 = ··· = ξ_p.
[Show that this model agrees with that of … if ρ = σ_b²/(σ_b² + σ²), except that instead of being positive, ρ now only needs to satisfy ρ > −1/(p − 1).]
Let (X_{1j},...,X_{pj}), j = 1,...,n, be a sample from a p-variate normal distribution with mean (ξ_1,...,ξ_p) and covariance matrix Σ = (σ_{ij}), where σ_{ij} = σ² when j = i, and σ_{ij} = ρσ² when j ≠ i. Show that the covariance matrix is positive definite if and only if ρ > −1/(p − 1).
[For …]
For the mixed model X_{ij} = µ + α_i + B_j + U_{ij} (i = 1,...,a; j = 1,...,n), where the B's and U's are as in … and the α's are constants adding to zero, determine (with respect to a suitable group leaving the problem invariant)
(i) a UMP invariant test of H : α_1 = ··· = α_a;
(ii) a UMP invariant test of H : ξ_1 = ··· = ξ_a = 0 (ξ_i = µ + α_i);
(iii) a test of H : σ_B²/σ² ≤ δ which is both UMP …
Consider the additive random-effects model X_{ijk} = µ + A_i + B_j + U_{ijk} (i = 1,...,a; j = 1,...,b; k = 1,...,n), where the A's, B's, and U's are independent normal with zero means and variances σ_A², σ_B², and σ², respectively. Determine
(i) the joint density of the X's,
(ii) the UMP …
Under the assumptions of the preceding problem, the null distribution of W∗ is independent of q and hence the same as in the normal case, namely, F with r and n−s degrees of freedom. [See Problem 5.11]. Note. The analogous multivariate problem is treated by Kariya (1981); also see Kariya
Consider the following generalization of the univariate linear model of Section 7.1. The variables X_i (i = 1,...,n) are given by X_i = ξ_i + U_i, where (U_1,...,U_n) have a joint density which is spherical, that is, a function of Σ_{i=1}^{n} u_i², say f(u_1,...,u_n) = q(Σ u_i²). The parameter spaces Π_Ω and …
Consider the mixed model obtained from (7.68) by replacing the random variables A_i by unknown constants α_i satisfying Σ α_i = 0. With (ii) replaced by (ii′) Σ α_i²/(nσ_C² + σ²), there again exist tests which are UMP among all tests that are invariant and unbiased, and in cases (i) and (iii) these …
Consider the model II analogue of the two-way layout of Section 7.5, according to which
X_{ijk} = µ + A_i + B_j + C_{ij} + E_{ijk} (7.68)
(i = 1,...,a; j = 1,...,b; k = 1,...,n),
where the A_i, B_j, C_{ij}, and E_{ijk} are independently normally distributed with mean zero and with variances σ_A², σ_B², σ_C², and …
The general nested classification with a constant number of observations per cell, under model II, has the structure Xijk··· = µ + Ai + Bij + Cijk + ··· + Uijk···, i = 1,...,a; j = 1,...,b; k = 1,...,c; ....(i) This can be reduced to a canonical form generalizing (7.45).(ii) There exist
If Xij is given by (7.39) but the number ni of observations per batch is not constant, obtain a canonical form corresponding to (7.40) by letting Yi1 = √niXi·. Note that the set of sufficient statistics has more components than when ni is constant.
The tests (7.46) and (7.47) are UMP unbiased.