Testing Statistical Hypotheses, 2nd Edition, by E. L. Lehmann - Solutions
4. Let the distribution of $X$ depend on the parameters $(\theta, \vartheta) = (\theta_1, \ldots, \theta_r, \vartheta_1, \ldots, \vartheta_s)$. A test of $H : \theta = \theta^0$ is locally strictly unbiased if for each $\vartheta$, (a) $\beta_\varphi(\theta^0, \vartheta) = \alpha$, (b) there exists a $\theta$-neighborhood of $\theta^0$ in which $\beta_\varphi(\theta, \vartheta) > \alpha$ for $\theta \neq \theta^0$. (i) Suppose that the …
3. A level-$\alpha$ test $\varphi_0$ is locally unbiased (loc. unb.) if there exists $\Delta_0 > 0$ such that $\beta_{\varphi_0}(\theta) \geq \alpha$ for all $\theta$ with $0 < d(\theta) < \Delta_0$, and if, given any other loc. unb. level-$\alpha$ test …
2. Locally most powerful tests. Let $d$ be a measure of the distance of an alternative $\theta$ from a given hypothesis $H$. A level-$\alpha$ test $\varphi_0$ is said to be locally most powerful (LMP) if, given any other level-$\alpha$ test $\varphi$, there exists $\Delta$ such that (37) $\beta_{\varphi_0}(\theta) \geq \beta_\varphi(\theta)$ for all $\theta$ with $0 < d(\theta) < \Delta$. Suppose …
1. Existence of maximin tests. Let $(\mathcal{X}, \mathcal{A})$ be a Euclidean sample space, and let the distributions $P_\theta$, $\theta \in \Omega$, be dominated by a $\sigma$-finite measure over $(\mathcal{X}, \mathcal{A})$. For any mutually exclusive subsets $\Omega_H$, $\Omega_K$ of $\Omega$ there exists a level-$\alpha$ test maximizing (2). [Let $\beta = \sup_\varphi [\inf_{\theta \in \Omega_K} E_\theta \varphi(X)]$, where the …
47. Bayes character and admissibility of Hotelling's $T^2$. (i) Let $(X_{\alpha 1}, \ldots, X_{\alpha p})$, $\alpha = 1, \ldots, n$, be a sample from a $p$-variate normal distribution with unknown mean $\xi = (\xi_1, \ldots, \xi_p)$ and covariance matrix $\Sigma = A^{-1}$, and with $p \leq n - 1$. Then the one-sample $T^2$-test of $H : \xi = 0$ against $K : \xi \neq 0$ is a Bayes test …
46. The UMP invariant test of independence in part (ii) of the preceding problem is asymptotically robust against nonnormality.
45. Testing for independence. Let $X = (X_{\alpha i})$, $i = 1, \ldots, p$, $\alpha = 1, \ldots, N$, be a sample from a $p$-variate normal distribution; let $q < p$, $\max(q, p - q) \leq N$; and consider the hypothesis $H$ that $(X_{11}, \ldots, X_{1q})$ is independent of $(X_{1,q+1}, \ldots, X_{1p})$, that is, that the covariances $\sigma_{ij} = E(X_{\alpha i}$ …
44. In generalization of Problem 8 of Chapter 7, let $(X_{\nu 1}, \ldots, X_{\nu p})$, $\nu = 1, \ldots, n$, be independent normal $p$-vectors with common covariance matrix $\Sigma$ and with means $\xi_{\nu i} = \sum_{j=1}^{s} a_{\nu j} \beta_j^{(i)}$, where $A = (a_{\nu j})$ is a constant matrix of rank $s$ and where the $\beta$'s are unknown …
43. Consider the third of the three sampling schemes for a $2 \times 2 \times K$ table discussed in Chapter 4, Section 8, and the two hypotheses $H_1 : \Delta_1 = \cdots = \Delta_K = 1$ and $H_2 : \Delta_1 = \cdots = \Delta_K$. (i) Obtain the likelihood-ratio test statistic for testing $H_1$. (ii) Obtain equations that determine the …
42. In the situation of the preceding problem, consider the hypothesis of marginal homogeneity $H' : p_{i+} = p_{+i}$ for all $i$, where $p_{i+} = \sum_j p_{ij}$, $p_{+j} = \sum_i p_{ij}$. (i) The maximum-likelihood estimates of the $p_{ij}$ under $H'$ are given by $\hat{p}_{ij} = n_{ij}/(1 + \lambda_i - \lambda_j)$, where the $\lambda$'s are the solutions of the …
41. The hypothesis of symmetry in a square two-way contingency table arises when one of the responses $A_1, \ldots, A_a$ is observed for each of $N$ subjects on two occasions (e.g. before and after some intervention). If $n_{ij}$ is the number of subjects whose responses on the two occasions are $(A_i, A_j)$, the …
40. In the situation of Example 7, consider the following model in which the row margins are fixed and which therefore generalizes model (iii) of Chapter 4, Section 7. A sample of $n_{i\cdot}$ subjects is obtained from class $A_i$ ($i = 1, \ldots, a$), the samples from different classes being independent. If $n_{ij}$ …
39. In Example 7, show that the maximum-likelihood estimators $\hat{p}_{ij}$, $\hat{p}_{i+}$, and $\hat{p}_{+j}$ are as stated.
38. In the multinomial model (38), the maximum-likelihood estimators $\hat{p}_i$ of the $p$'s are $\hat{p}_i = x_i/n$. [The following are two methods for proving this result: (i) Maximize $\log P(x_1, \ldots, x_m)$ subject to $\sum p_i = 1$ by the method of undetermined multipliers. (ii) Show that $\prod p_i^{x_i} \leq \prod (x_i/n)^{x_i}$ by …
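The claim $\hat{p}_i = x_i/n$ can be checked numerically. The sketch below (not part of the original text; the counts are illustrative) compares the multinomial log-likelihood at $x/n$ with its value at many random probability vectors and finds none that does better:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative counts x_1, ..., x_m with n = sum of the x_i (hypothetical data).
x = np.array([12, 30, 18, 40])
n = x.sum()

def log_lik(p, x):
    """Multinomial log-likelihood, up to the constant multinomial coefficient."""
    return float(np.sum(x * np.log(p)))

p_mle = x / n  # the claimed maximizer p_i = x_i / n

# Compare against random probability vectors: none should beat p_mle.
worse = 0
for _ in range(1000):
    q = rng.dirichlet(np.ones(len(x)))
    if log_lik(q, x) > log_lik(p_mle, x):
        worse += 1
```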
37. Let the equation of the tangent plane $\mathcal{P}$ at $\pi$ be $p_i = \pi_i(1 + a_{i1}\xi_1 + \cdots + a_{is}\xi_s)$, and suppose that the vectors $(a_{i1}, \ldots, a_{is})$ are orthogonal in the sense that $\sum_i a_{ik} a_{il} \pi_i = 0$ for all $k \neq l$. (i) If $(\hat{\xi}_1, \ldots, \hat{\xi}_s)$ minimizes $\sum_i (\hat{p}_i - p_i)^2/\pi_i$ subject to $p \in \mathcal{P}$, then $\hat{\xi}_j = \sum_i a_{ij} \hat{p}_i \big/ \sum_i a_{ij}^2 \pi_i$. (ii) The …
36. Let $X_1, \ldots, X_n$ be i.i.d. with cumulative distribution function $F$, let $a_1 < \cdots < a_{m-1}$ be any given real numbers, and let $a_0 = -\infty$, $a_m = \infty$. If $n_i$ is the number of $X$'s in $(a_{i-1}, a_i]$, the $\chi^2$-test (43) can be used to test $H : F = F_0$ with $\pi_i = F_0(a_i) - F_0(a_{i-1})$ for $i = 1, \ldots, m$.
35. The problem of testing the hypothesis $H : \eta \in \Pi_\omega$ against $\eta \in \Pi_\Omega - \Pi_\omega$, when the distribution of $Y$ is given by (34), remains invariant under a suitable group of linear transformations, and with respect to this group the test (35) is UMP invariant. The power of this test is given by (37) for …
34. Consider the $s$-sample situation in which $(X_{\nu 1}^{(k)}, \ldots, X_{\nu p}^{(k)})$, $\nu = 1, \ldots, n_k$, $k = 1, \ldots, s$, are independent normal $p$-vectors with common covariance matrix $\Sigma$ and with means $(\xi_1^{(k)}, \ldots, \xi_p^{(k)})$. Obtain as explicitly as possible the smallest simultaneous confidence sets for the set of …
33. Write the simultaneous confidence sets (23) as explicitly as possible for the following cases: (i) The one-sample problem of Section 3 with $\eta_i = \xi_i$ ($i = 1, \ldots, p$). (ii) The two-sample problem of Section 3 with $\eta_i = \xi_i^{(2)} - \xi_i^{(1)}$.
32. Under the assumptions made at the beginning of Section 6, show that the confidence intervals (33) (i) are uniformly most accurate unbiased, (ii) are uniformly most accurate equivariant, and (iii) determine the constant $k_0$.
31. Prove that each of the sets of simultaneous confidence intervals (29) and (31) is smallest among all families that are equivariant under a suitable group of transformations.
30. The only simultaneous confidence sets for all $u'\eta v$, $u \in U$, $v \in V$, that are equivariant under the groups $G_1$-$G_3$ of the text are those given by (28).
29. Consider the special case of the preceding problem in which $a = b = 1$, and let $u' = (u_1, \ldots, u_p)$, $v' = (v_1, \ldots, v_p)$. Then for testing $H_0 : u'\eta^* v = 0$ there exists a UMP invariant test which rejects when $(u'Y^*v)^2 / [(v'Sv)(u'u)] \geq C$.
28. Let $X$ be an $n \times p$ data matrix satisfying the model assumptions made at the beginning of Sections 1 and 5, and let $X^* = CX$, where $C$ is an orthogonal matrix, the first $s$ rows of which span $\Pi_\Omega$. If $Y^*$ and $Z$ denote respectively the first $s$ and last $n - s$ rows of $X^*$, then $E(Y^*) = \eta^*$ say, and $E(Z) =$ …
27. As a different generalization, let $(X_{\lambda\nu 1}, \ldots, X_{\lambda\nu p})$ be independent vectors, each having a $p$-variate normal distribution with common covariance matrix and with expectation $E(X_{\lambda\nu i}) = \mu^{(i)} + \alpha_\lambda^{(i)} + \beta_\nu^{(i)}$, $\sum_\lambda \alpha_\lambda^{(i)} = \sum_\nu \beta_\nu^{(i)} = 0$ for all $i$, and consider the hypothesis that each of $\mu^{(i)}$, …
26. Generalize both parts of the preceding problem to the two-group case in which $X_{\lambda ij}$ ($\lambda = 1, \ldots, n_1$) and $X'_{\lambda ij}$ ($\lambda = 1, \ldots, n_2$) are $n_1 + n_2$ independent vectors, each having an $ab$-variate normal distribution with covariance matrix $\Sigma$ and with means given by $E(X_{\lambda ij}) = \mu + \alpha_i^{(1)} + \beta_j^{(1)}$ …
25. Let $X_{\nu ij}$ ($i = 1, \ldots, a$; $j = 1, \ldots, b$), $\nu = 1, \ldots, n$, be $n$ independent vectors, each having an $ab$-variate normal distribution with covariance matrix $\Sigma$ and with means given by $E(X_{\nu ij}) = \mu + \alpha_i + \beta_j$, $\sum \alpha_i = \sum \beta_j = 0$. (i) For testing the hypothesis $H : \alpha_1 = \cdots = \alpha_a = 0$, give explicit …
24. The assumptions of Theorem 6 of Chapter 6 are satisfied for the group (19) applied to the hypothesis $H : \eta = 0$ of Section 5.
23. The probability of a type-I error for each of the tests of the preceding problem is robust against nonnormality: in case (i) as $b \to \infty$; in case (ii) as $mb \to \infty$; in case (iii) as $m \to \infty$.
22. Give explicit expressions for the elements of $V$ and $S$ in the multivariate analogues of the following situations: (i) The hypothesis (34) in the two-way layout (32) of Chapter 7. (ii) The hypothesis (34) in the two-way layout of Section 6 of Chapter 7. (iii) The hypothesis $H' : \gamma_{ij} = 0$ for all …
21. Let $(X_{\alpha 1}^{(k)}, \ldots, X_{\alpha p}^{(k)})$, $\alpha = 1, \ldots, n_k$, $k = 1, \ldots, s$, be samples from $p$-variate distributions $F(x_1 - \xi_1^{(k)}, \ldots, x_p - \xi_p^{(k)})$ with finite covariance matrix $\Sigma$, and let $\lambda_1, \ldots, \lambda_a$ be the nonzero roots of (16) and $(\lambda_1^*, \ldots, \lambda_a^*)$ those of (17), with $V$ and $S$ …
20. Let $(X_{\alpha 1}, \ldots, X_{\alpha p})$, $\alpha = 1, \ldots, n$, be independently distributed according to $p$-variate distributions $F(x_{\alpha 1} - \xi_{\alpha 1}, \ldots, x_{\alpha p} - \xi_{\alpha p})$ with finite covariance matrix $\Sigma$, and suppose the $\xi$'s satisfy the linear model assumptions of Section 1. Then under $H$, $S_{ij}/(n - s)$ tends in probability to the $(ij)$th …
19. (i) If (13) has only one nonzero root, then $B$ is of rank 1. In canonical form $B = \eta'\eta$, and there then exists a vector $(a_1, \ldots, a_p)$ and constants $c_1, \ldots, c_r$ such that (65) $(\eta_{\nu 1}, \ldots, \eta_{\nu p}) = c_\nu(a_1, \ldots, a_p)$ for $\nu = 1, \ldots, r$. (ii) For the $s$-sample problem considered in Section 4, …
18. Under the assumptions of Problem 17, show that $\prod_{i=1}^{a} \dfrac{1}{1 + \lambda_i} = \dfrac{|S|}{|V + S|}$. [The determinant of a matrix is equal to the product of its characteristic roots.]
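The identity above can be verified numerically (an illustrative sketch, not from the text; for simplicity $V$ is taken of full rank, so $a = p$): the nonzero roots of $|V - \lambda S| = 0$ are the eigenvalues of $S^{-1}V$, and the product $\prod 1/(1+\lambda_i)$ should equal $|S|/|V+S|$.

```python
import numpy as np

rng = np.random.default_rng(42)
p = 4

# Random symmetric positive definite S and positive semidefinite V (full rank, a = p).
A = rng.standard_normal((p, p)); S = A @ A.T + p * np.eye(p)
B = rng.standard_normal((p, p)); V = B @ B.T

# Roots of |V - lambda S| = 0 are the eigenvalues of S^{-1} V (real, nonnegative here).
lam = np.linalg.eigvals(np.linalg.solve(S, V)).real

lhs = float(np.prod(1.0 / (1.0 + lam)))
rhs = float(np.linalg.det(S) / np.linalg.det(V + S))
```

The check rests on $|V+S| = |S|\,|I + S^{-1}V| = |S|\prod(1+\lambda_i)$, which is the hint's product-of-roots argument.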
17. Let $V$ and $S$ be $p \times p$ matrices, $V$ of rank $a \leq p$ and $S$ nonsingular, and let $\lambda_1, \ldots, \lambda_a$ denote the nonzero roots of $|V - \lambda S| = 0$. Then (i) $\mu_i = 1/(1 + \lambda_i)$, $i = 1, \ldots, a$, are the $a$ smallest roots of (63) $|S - \mu(V + S)| = 0$ (the other $p - a$ being $= 1$); (ii) $\nu_i = 1 + \lambda_i$ are the $a$ …
16. Verify the elements of $V$ and $S$ given by (14) and (15).
15. Suppose $X_{\nu i} = \xi_{\nu i} + U_{\nu i}$, where the $\xi_{\nu i}$ are given by (62) and where $(U_{\nu 1}, \ldots, U_{\nu p})$, $\nu = 1, \ldots, n$, is a sample from a $p$-variate distribution with mean 0 and covariance matrix $\Sigma$. The size of the test of Problem 13 is robust for this model as $n \to \infty$. [Apply Problem 14 and the …
14. Let $(Y_{\nu 1}, \ldots, Y_{\nu p})$, $\nu = 1, \ldots, n$, be a sample from a $p$-variate distribution $F$ with mean zero and covariance matrix $\Sigma$, and let $Z_i^{(n)} = \sum_{\nu=1}^{n} c_\nu Y_{\nu i} \big/ \sqrt{\sum_{\nu=1}^{n} c_\nu^2}$ for some sequence of constants $c_1, c_2, \ldots$. Then $(Z_1^{(n)}, \ldots, Z_p^{(n)})$ tends in law to $N(0, \Sigma)$ …
13. Simple multivariate regression. In the model of Section 1 with (62) $\xi_{\nu i} = \alpha_i + \beta_i t_\nu$ ($\nu = 1, \ldots, n$; $i = 1, \ldots, p$), the UMP invariant test of $H : \beta_1 = \cdots = \beta_p = 0$ is given by (6) and (9), with $Y_i = \hat{\beta}_i \sqrt{\sum_\nu (t_\nu - \bar{t})^2}$, $S_{ij} = \sum_{\nu=1}^{n} [X_{\nu i} - \bar{X}_i - \hat{\beta}_i(t_\nu - \bar{t})][X_{\nu j} - \bar{X}_j - \hat{\beta}_j(t_\nu - \bar{t})]$, where $\hat{\beta}_i =$ …
12. Inversion of the two-sample test based on (12) leads to confidence ellipsoids for the vector $(\xi_1^{(2)} - \xi_1^{(1)}, \ldots, \xi_p^{(2)} - \xi_p^{(1)})$ which are uniformly most accurate equivariant under the groups $G_1$-$G_3$ of Section 2.
11. The two-sample test based on (12) is robust against heterogeneity of covariances as $n_1$ and $n_2 \to \infty$ when $n_1/n_2 \to 1$, but not in general.
10. The two-sample test based on (12) is robust against nonnormality as $n_1$ and $n_2 \to \infty$.
9. The confidence ellipsoids (11) for $(\xi_1, \ldots, \xi_p)$ are equivariant under the groups $G_1$-$G_3$ of Section 2.
8. Let $(X_{\alpha 1}, \ldots, X_{\alpha p})$, $\alpha = 1, \ldots, n$, be a sample from any $p$-variate distribution with zero mean and finite nonsingular covariance matrix $\Sigma$. Then the distribution of $T^2$ defined by (10) tends to $\chi^2$ with $p$ degrees of freedom.
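This asymptotic claim can be illustrated by simulation (a sketch with arbitrary choices of distribution and sample size, not from the text): for non-normal data with mean zero, $T^2 = n\,\bar{x}' S^{-1} \bar{x}$ should behave like $\chi^2_p$ for large $n$, so its average over many replications should be close to $p$.

```python
import numpy as np

rng = np.random.default_rng(7)
p, n, reps = 3, 400, 1000

t2 = np.empty(reps)
for r in range(reps):
    # Deliberately non-normal data: centered exponential coordinates
    # (mean zero, finite nonsingular covariance).
    X = rng.exponential(1.0, size=(n, p)) - 1.0
    xbar = X.mean(axis=0)
    S = np.cov(X, rowvar=False)                    # sample covariance matrix
    t2[r] = n * xbar @ np.linalg.solve(S, xbar)    # Hotelling's T^2

mean_t2 = float(t2.mean())   # should be near p, the chi^2_p mean
```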
7. Null distribution of Hotelling's $T^2$. The statistic $W = Y S^{-1} Y'$ defined by (6), where $Y$ is a row vector, has the distribution of a ratio, of which the numerator and denominator are distributed independently, as noncentral $\chi^2$ with noncentrality parameter $\psi^2$ and $p$ degrees of freedom and as …
6. Let $Z$ be the $m \times p$ matrix $(Z_{\alpha i})$, where $p \leq m$ and the $Z_{\alpha i}$ are independently distributed as $N(0,1)$; let $S = Z'Z$, and let $S_1$ be the matrix obtained by omitting the last row and column of $S$. Then the ratio of determinants $|S|/|S_1|$ has a $\chi^2$-distribution with $m - p + 1$ degrees of freedom. [Let $q$ be …
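The distributional claim can be illustrated by simulation (illustrative dimensions, not from the text): over many replications the average of $|S|/|S_1|$ should be close to $m - p + 1$, the mean of the stated $\chi^2$ distribution.

```python
import numpy as np

rng = np.random.default_rng(3)
m, p, reps = 10, 3, 2000

ratios = np.empty(reps)
for r in range(reps):
    Z = rng.standard_normal((m, p))
    S = Z.T @ Z                # p x p Wishart matrix with m degrees of freedom
    S1 = S[:-1, :-1]           # omit the last row and column
    ratios[r] = np.linalg.det(S) / np.linalg.det(S1)

# |S|/|S1| should be chi^2 with m - p + 1 = 8 degrees of freedom; its mean is 8.
mean_ratio = float(ratios.mean())
```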
5. Let $Z_{\alpha i}$ ($\alpha = 1, \ldots, m$; $i = 1, \ldots, p$) be independently distributed as $N(0,1)$, and let $Q = Q(Y)$ be an orthogonal $m \times m$ matrix depending on a random variable $Y$ that is independent of the $Z$'s. If $Z^*_{\alpha i}$ is defined by $(Z^*_{1i}, \ldots, Z^*_{mi}) = (Z_{1i}, \ldots, Z_{mi})Q'$, then the $Z^*_{\alpha i}$ are independently …
4. In the case $r = 1$, the statistic $W$ given by (6) is maximal invariant under the group induced by $G_1$ and $G_3$ on the statistics $Y_i$, $U_{\alpha i}$ ($i = 1, \ldots, p$; $\alpha = 1, \ldots, s - 1$), and $S = Z'Z$. [There exists a nonsingular matrix $B$ such that $B'SB = I$ and such that only the first coordinate of $YB$ is …
3. (i) If $A$ and $B$ are $k \times m$ and $m \times k$ matrices respectively, then the product matrices $AB$ and $BA$ have the same nonzero characteristic roots. (ii) This provides an alternative derivation of the fact that $W$ defined by (6) is the only nonzero characteristic root of the determinantal equation (5). [(i): …
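Part (i) is equivalent to Sylvester's determinant identity $\det(I_k - tAB) = \det(I_m - tBA)$ for all $t$, since the two characteristic polynomials then differ only by a factor of powers of $t$. A quick numeric check (illustrative dimensions):

```python
import numpy as np

rng = np.random.default_rng(5)
k, m = 3, 5
A = rng.standard_normal((k, m))
B = rng.standard_normal((m, k))

# Same nonzero characteristic roots <=> det(I_k - t AB) = det(I_m - t BA) for all t.
ts = [0.1, -0.7, 2.3]
lhs = [np.linalg.det(np.eye(k) - t * A @ B) for t in ts]
rhs = [np.linalg.det(np.eye(m) - t * B @ A) for t in ts]
```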
2. (i) If $p < r + m$, and $V = Y'Y$, $S = Z'Z$, the $p \times p$ matrix $V + S$ is … [(ii): The $V$'s are eliminated through $G_1$. Since the $r + m$ row vectors of the matrices $Y$ and $Z$ may be assumed to be linearly independent, any such set of vectors can be transformed into any other through an element of $G_3$.]
1. (i) If $m < p$, the matrix $S$, and hence the matrix $S/m$ (which is an unbiased estimate of the unknown covariance matrix of the underlying $p$-variate distribution), is singular. If $m \geq p$, it is nonsingular with probability 1. (ii) If $r + m \leq p$, the test $\varphi(y, u, z) \equiv \alpha$ is the only test that is …
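Part (i) is easy to see numerically (a sketch with arbitrary dimensions, not from the text): with $m < p$ rows, $S = Z'Z$ has rank at most $m$, hence zero determinant.

```python
import numpy as np

rng = np.random.default_rng(11)
p, m = 6, 4          # fewer observations than dimensions: m < p

Z = rng.standard_normal((m, p))
S = Z.T @ Z          # p x p, but of rank at most m < p, hence singular

rank_S = int(np.linalg.matrix_rank(S))
det_S = float(np.linalg.det(S))     # numerically zero
```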
71. In the regression model of Problem 8, generalize the confidence bands of Example 12 to the regression surfaces (i) $h_1(e_1, \ldots, e_s) = \sum_{j=1}^{s} e_j \beta_j$; (ii) $h_2(e_2, \ldots, e_s) = \beta_1 + \sum_{j=2}^{s} e_j \beta_j$.
70. In generalization of Problem 66, show how to extend the Dunnett intervals of Problem 69 to the set of all contrasts. [Use the fact that the event $|y_i - y_0| \leq \Delta$ for $i = 1, \ldots, s$ is equivalent to the event $|\sum_{i=0}^{s} c_i y_i| \leq \Delta \sum_{i=1}^{s} |c_i|$ for all $(c_0, \ldots, c_s)$ satisfying $\sum_{i=0}^{s} c_i = 0$.] Note. As is …
69. Dunnett's method. Let $X_{0j}$ ($j = 1, \ldots, m$) and $X_{ik}$ ($i = 1, \ldots, s$; $k = 1, \ldots, n$) represent measurements on a standard and $s$ competing new treatments, and suppose the $X$'s are independently distributed as $N(\xi_0, \sigma^2)$ and $N(\xi_i, \sigma^2)$ respectively. Generalize Problems 65 and 67 to the problem of …
68. Construct an example [i.e., choose values $n_1 = \cdots = n_s = n$ and $\alpha$ and a particular contrast $(c_1, \ldots, c_s)$] for which the Tukey confidence intervals (121) are shorter than the Scheffé intervals (93), and an example in which the situation is reversed.
67. (i) Let $X_{ij}$ ($j = 1, \ldots, n$; $i = 1, \ldots, s$) be independent $N(\xi_i, \sigma^2)$, $\sigma^2$ unknown. Then the problem of obtaining simultaneous confidence intervals for all differences $\xi_j - \xi_i$ is invariant under $G_0$, $G_2$, and the scale changes $G_3$. (ii) The only equivariant confidence bounds based on the …
66. In the preceding problem consider arbitrary contrasts $\sum c_i \xi_i$, $\sum c_i = 0$. The event (120) $|(X_j - X_i) - (\xi_j - \xi_i)| \leq \Delta$ for all $i \neq j$ is equivalent to the event (121) $|\sum c_i X_i - \sum c_i \xi_i| \leq \frac{\Delta}{2} \sum |c_i|$ for all $c$ with $\sum c_i = 0$, which therefore also has probability $\gamma$. This shows how to extend the Tukey …
65. Tukey's T-method. Let $X_i$ ($i = 1, \ldots, r$) be independent $N(\xi_i, 1)$, and consider simultaneous confidence intervals (116) $L[(i,j); x] \leq \xi_j - \xi_i \leq M[(i,j); x]$ for all $i \neq j$. The problem of determining such confidence intervals remains invariant under the group $G_0$ of all permutations of the $X$'s …
64. Let $(X_{1j1}, \ldots, X_{1jn};\ X_{2j1}, \ldots, X_{2jn};\ \ldots;\ X_{aj1}, \ldots, X_{ajn})$, $j = 1, \ldots, b$, be a sample from an $an$-variate normal distribution. Let $E(X_{ijk}) = \xi_i$, and denote by $\Sigma_{ii'}$ the matrix of covariances of $(X_{ij1}, \ldots, X_{ijn})$ with $(X_{i'j1}, \ldots, X_{i'jn})$. Suppose that for all $i$, the diagonal elements of $\Sigma_{ii}$ are $= \tau^2$ …
63. Among all tests that are both unbiased and invariant under suitable groups under the assumptions of Problem 62, there exist UMP tests of (i) $H_1 : a_1 = \cdots = a_a = 0$; (ii) $H_2 : \sigma_B^2/(n\sigma_C^2 + \sigma^2) \leq C$; (iii) $H_3 : \sigma_C^2/\sigma^2 \leq C$. Note. The independence assumptions of Problems 62 and 63 often are not realistic. For …
62. Formal analogy with the model of Problem 61 suggests the mixed model $X_{ijk} = \mu + a_i + B_j + C_{ij} + U_{ijk}$, with the $B$'s, $C$'s, and $U$'s as in Problem 61. Reduce this model to a canonical form involving $\bar{X}_{\cdots}$ and the sums of squares $\dfrac{\sum (\bar{X}_{i\cdot\cdot} - \bar{X}_{\cdots} - a_i)^2}{n\sigma_C^2 + \sigma^2}$, $\dfrac{\sum\sum (\bar{X}_{ij\cdot} - \bar{X}_{i\cdot\cdot} - \bar{X}_{\cdot j\cdot} + \bar{X}_{\cdots})^2}{n\sigma_C^2 + \sigma^2}$, $\sum$ …
61. Permitting interactions in the model of Problem 57 leads to the model $X_{ijk} = \mu + A_i + B_j + C_{ij} + U_{ijk}$ ($i = 1, \ldots, a$; $j = 1, \ldots, b$; $k = 1, \ldots, n$), where the $A$'s, $B$'s, $C$'s, and $U$'s are independent normal with mean zero and variances $\sigma_A^2$, $\sigma_B^2$, $\sigma_C^2$, and $\sigma^2$. (i) Give an example of a …
60. Under the assumptions of the preceding problem, determine the UMP invariant test (with respect to a suitable $G$) of $H : \xi_1 = \cdots = \xi_p$. [Show that this model agrees with that of Problem 58 if $\rho = \sigma_1^2/(\sigma_1^2 + \sigma_2^2)$, except that instead of being positive, $\rho$ now only needs to satisfy $\rho > -1/(p - 1)$.]
59. Let $(X_{1j}, \ldots, X_{pj})$, $j = 1, \ldots, n$, be a sample from a $p$-variate normal distribution with mean … $-1/(p - 1)$. [For fixed $\sigma$ and $\rho < 0$, the quadratic form $(1/\sigma^2)\sum\sum a^{ij} y_i y_j = \sum y_i^2 + \rho \sum\sum y_i y_j$ takes on its minimum value over $\sum y_i^2 = 1$ when all the $y$'s are equal.]
58. For the mixed model $X_{ij} = \mu + a_i + B_j + U_{ij}$ ($i = 1, \ldots, a$; $j = 1, \ldots, n$), where the $B$'s and $U$'s are as in Problem 57 and the $a$'s are constants adding to zero, determine (with respect to a suitable group leaving the problem invariant) (i) a UMP invariant test of $H : a_1 = \cdots = a_a$; (ii) a UMP invariant …
57. Consider the additive random-effects model $X_{ijk} = \mu + A_i + B_j + U_{ijk}$ ($i = 1, \ldots, a$; $j = 1, \ldots, b$; $k = 1, \ldots, n$), where the $A$'s, $B$'s, and $U$'s are independent normal with zero means and variances $\sigma_A^2$, $\sigma_B^2$, and $\sigma^2$ respectively. Determine (i) the joint density of the $X$'s, (ii) the UMP unbiased test …
56. Under the assumptions of the preceding problem, the null distribution of $W^*$ is independent of $q$ and hence the same as in the normal case, namely $F$ with $r$ and $n - s$ degrees of freedom. [See Chapter 5, Problem 24.] Note. The analogous multivariate problem is treated by Kariya (1981), who also …
55. Consider the following generalization of the univariate linear model of Section 1. The variables $X_i$ ($i = 1, \ldots, n$) are given by $X_i = \xi_i + U_i$, where $(U_1, \ldots, U_n)$ have a joint density which is spherical, that is, a function of $\sum_{i=1}^{n} u_i^2$, say $f(u_1, \ldots, u_n) = q(\sum u_i^2)$. The parameter spaces $\Pi_\Omega$ …
54. Consider the mixed model obtained from (115) by replacing the random variables $A_i$ by unknown constants $a_i$ satisfying $\sum a_i = 0$. With (ii) replaced by (ii') $\sum a_i^2/(n\sigma_C^2 + \sigma^2)$, there again exist tests which are UMP among all tests that are invariant and unbiased, and in cases (i) and (iii) these …
53. Consider the model II analogue of the two-way layout of Section 6, according to which … constant (which may be zero): (i) $\sigma_C^2/\sigma^2$; (ii) $\sigma_A^2/(n\sigma_C^2 + \sigma^2)$; (iii) $\sigma_B^2/(n\sigma_C^2 + \sigma^2)$. Note that the test of (i) requires $n > 1$, but those of (ii) and (iii) do not. [Let $S_A^2 = nb\sum(\bar{X}_{i\cdot\cdot} - \bar{X}_{\cdots})^2$, $S_B^2 = na\sum(\bar{X}_{\cdot j\cdot} -$ …
52. The general nested classification with a constant number of observations per cell, under model II, has the structure $X_{ijk\cdots} = \mu + A_i + B_{ij} + C_{ijk} + \cdots + U_{ijk\cdots}$, $i = 1, \ldots, a$; $j = 1, \ldots, b$; $k = 1, \ldots, c$; …. (i) This can be reduced to a canonical form generalizing (101).
51. If $X_{ij}$ is given by (95) but the number $n_i$ of observations per batch is not constant, obtain a canonical form corresponding to (96) by letting $Y_{i1} = \sqrt{n_i}\, \bar{X}_{i\cdot}$. Note that the set of sufficient statistics has more components than when $n_i$ is constant.
50. The tests (102) and (103) are UMP unbiased.
49. In the model (95), the correlation coefficient $\rho$ between two observations $X_{ij}$, $X_{ik}$ belonging to the same class, the so-called intraclass correlation coefficient, is given by $\rho = \sigma_A^2/(\sigma_A^2 + \sigma^2)$.
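The formula can be illustrated by simulating the one-way random-effects structure $X_{ij} = \mu + A_i + U_{ij}$ (illustrative variance components, not from the text) and comparing the empirical within-class correlation with $\sigma_A^2/(\sigma_A^2 + \sigma^2)$:

```python
import numpy as np

rng = np.random.default_rng(21)
a, n = 4000, 2                 # a classes, n = 2 observations per class
sigma_A2, sigma2 = 1.0, 1.0    # variance components (illustrative values)

A = rng.normal(0.0, np.sqrt(sigma_A2), size=(a, 1))   # class effects A_i
U = rng.normal(0.0, np.sqrt(sigma2), size=(a, n))     # errors U_ij
X = 5.0 + A + U                                       # X_ij = mu + A_i + U_ij

# Empirical correlation between the two observations within each class,
# versus the intraclass correlation coefficient.
r_hat = float(np.corrcoef(X[:, 0], X[:, 1])[0, 1])
rho = sigma_A2 / (sigma_A2 + sigma2)   # = 0.5 here
```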
48. (i) The test (97) of $H : \Delta \leq \Delta_0$ is UMP unbiased. (ii) Determine the UMP unbiased test of $H : \Delta = \Delta_0$ and the associated uniformly most accurate unbiased confidence sets for $\Delta$.
47. (i) In Example 10, the simultaneous confidence intervals (89) reduce to (93). (ii) What change is needed in the confidence intervals of Example 10 if the $v$'s are not required to satisfy (92), i.e. if simultaneous confidence intervals are desired for all linear functions $\sum v_i \mu_i$ instead of all …
46. (ii) The most general confidence sets (87) which are equivariant under $G_1$, $G_2$, and $G_3$ are of the form (88). (i) In Example 11, the set of linear functions $\sum w_i \alpha_i = \sum w_i(\mu_i - \mu_\cdot)$ for all $w$ can also be represented as the set of functions $\sum w_i \mu_i$ for all $w$ satisfying $\sum w_i = 0$. (ii) The set of linear functions …
45. (i) The confidence intervals $L(u; y, S) = \sum u_i y_i - c(S)$ are equivariant under $G_3$ if and only if $L(u; by, bS) = bL(u; y, S)$ for all $b > 0$.
44. Let $X_i$ ($i = 1, \ldots, r$) be independent $N(\xi_i, 1)$. (i) The only simultaneous confidence intervals equivariant under $G_0$ are those given by (80). (ii) The inequalities (80) and (82) are equivalent. (iii) Compared with the Scheffé intervals (69), the intervals (82) for $\sum u_i \xi_i$ are shorter when … and longer …
43. (i) A function $L$ is equivariant under $G_2$ if and only if it satisfies (64). For the confidence sets (70), equivariance under $G_1$ and $G_3$ reduces to (71) and (72) respectively. (ii) For fixed $(y_1, \ldots, y_r)$, the statements $\sum u_i y_i \in A$ hold for all $(u_1, \ldots, u_r)$ with $\sum u_i^2 = 1$ if and only if $A$ contains the interval $(y)$ …
42. (i) A function $L$ satisfies the first equation of (62) for all $u$, $x$, and orthogonal transformations $Q$ if and only if it depends on $u$ and $x$ only through $u'x$, $x'x$, and $u'u$. (ii) …
41. Give an example of an analysis of covariance (46) in which (56) does not hold but the level of the $F$-test of $H : \alpha_1 = \cdots = \alpha_b$ is robust against nonnormality.
40. Show how to weaken (56) if a robustness condition is required only for testing a particular subspace $\Pi_\omega$ of $\Pi_\Omega$. [Suppose that $\Pi_\omega$ is given by $\beta_1 = \cdots = \beta_r = 0$, and use (54).]
39. Show that $\sum_{i=1}^{n} \Pi_{ii} = s$. [Since the $\Pi_{ii}$ are independent of $A$, take $A$ to be orthogonal.]
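Here $\Pi$ is the projection matrix onto the $s$-dimensional regression space, so the claim is that its trace equals $s$. A numeric sketch (illustrative dimensions and a random full-rank design matrix $A$):

```python
import numpy as np

rng = np.random.default_rng(13)
n, s = 20, 5

A = rng.standard_normal((n, s))              # full-rank n x s design matrix
P = A @ np.linalg.solve(A.T @ A, A.T)        # projection onto the column space of A

trace_P = float(np.trace(P))                 # sum of the diagonal elements Pi_ii
```

The trace of an orthogonal projection equals the dimension of its range, so `trace_P` is 5 up to rounding.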
38. If $\xi_i = \alpha + \beta t_i + \gamma u_i$, express the condition (56) in terms of the $t$'s and $u$'s.
37. (i) Under the assumptions of Problem 30, express the condition (56) in terms of the $t$'s. (ii) Determine whether the condition of part (i) is equivalent to (51).
36. Let $c_n = u_0 + u_1 n + \cdots + u_k n^k$, $u_i \geq 0$ for all $i$. Then $\{c_n\}$ satisfies (56). [Apply Problem 35 with $c'_n = n^k$.]
35. Let $\{c_n\}$ and $\{c'_n\}$ be two sequences with … $\to \infty$. Then $\{c_n\}$ satisfies (56) if and only if $\{c'_n\}$ does.
34. Suppose (56) holds for some particular sequence $\Pi_\Omega^{(n)}$ with fixed $s$. Then it holds for any sequence $\Pi_\omega^{(n)} \subset \Pi_\Omega^{(n)}$ of dimension $s' < s$. [If $\Pi_\Omega$ is spanned by the $s$ columns of $A$, let $\Pi_\omega$ be spanned by the first $s'$ columns of $A$.]
33. In the two-way layout of the preceding problem give examples of submodels $\Pi_\Omega^{(1)}$ and $\Pi_\Omega^{(2)}$ of dimensions $s_1$ and $s_2$, both less than $ab$, such that in one case the condition (56) continues to require $n_{ij} \to \infty$ for all $i$ and $j$ but becomes a weaker requirement in the other case.
32. Let $X_{ijk}$ ($k = 1, \ldots, n_{ij}$; $i = 1, \ldots, a$; $j = 1, \ldots, b$) be independently normally distributed with mean $E(X_{ijk}) = \xi_{ij}$ and variance $\sigma^2$. Then the test of any linear hypothesis concerning the $\xi_{ij}$ has a robust level provided $n_{ij} \to \infty$ for all $i$ and $j$.
31. Verify the claims made in Example 8.
30. Let $X_1, \ldots, X_n$ be independently normally distributed with common variance $\sigma^2$ and means $\xi_i = \alpha + \beta t_i + \gamma t_i^2$, where the $t_i$ are known. If the coefficient vectors $(t_1^k, \ldots, t_n^k)$, $k = 0, 1, 2$, are linearly independent, the parameter space $\Pi_\Omega$ has dimension $s = 3$, and the least-squares estimates $\hat{\alpha}$, …
29. Let $X_1, \ldots, X_m$; $Y_1, \ldots, Y_n$ be independently normally distributed with common variance $\sigma^2$ and means $E(X_i) = \alpha + \beta(u_i - \bar{u})$, $E(Y_j) = \gamma + \delta(v_j - \bar{v})$, where the $u$'s and $v$'s are known numbers. Determine the UMP invariant tests of the linear hypotheses $H : \beta = \delta$ and $H : \alpha = \gamma,\ \beta = \delta$.
28. In a regression situation, suppose that the observed values $X_i$ and $Y_i$ of the independent and dependent variable differ from certain true values $X'_i$ and $Y'_i$ by errors $U_i$, $V_i$ which are independently normally distributed with zero means and variances $\sigma_U^2$ and $\sigma_V^2$. The true values are assumed to satisfy …
27. In the three-factor situation of the preceding problem, suppose that $a = b = m$. The hypothesis $H$ can then be tested on the basis of $m^2$ observations as follows. At each pair of levels $(i, j)$ of the first two factors one observation is taken, to which we refer as being in the $i$th row and the $j$th …
26. Let $X_{ijk}$ ($i = 1, \ldots, a$; $j = 1, \ldots, b$; $k = 1, \ldots, m$) be independently normally distributed with common variance $\sigma^2$ and mean $E(X_{ijk}) = \mu + \alpha_i + \beta_j + \gamma_k$ ($\sum \alpha_i = \sum \beta_j = \sum \gamma_k = 0$). Determine the linear hypothesis test for testing $H : \alpha_1 = \cdots = \alpha_a = 0$.
25. Let $\chi'^2_{f,\lambda}$ denote a random variable distributed as noncentral $\chi^2$ with $f$ degrees of freedom and noncentrality parameter $\lambda^2$. Then $\chi'^2_{f,\lambda'}$ is stochastically larger than $\chi'^2_{f,\lambda}$ if $\lambda < \lambda'$. [It is enough to show that if $Y$ is distributed as $N(0,1)$, then $(Y + \lambda')^2$ is stochastically larger than $(Y + \lambda)^2$. The …
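The reduction in the bracketed hint can be illustrated by simulation (a sketch with arbitrary $\lambda < \lambda'$, not from the text): the empirical survival probabilities of $(Y + \lambda')^2$ should dominate those of $(Y + \lambda)^2$ at every cutoff $c$.

```python
import numpy as np

rng = np.random.default_rng(17)
lam, lam_p = 1.0, 2.0          # lambda < lambda' (illustrative values)
Y = rng.standard_normal(200000)

small = (Y + lam) ** 2         # noncentral chi^2 with 1 df, noncentrality lam^2
large = (Y + lam_p) ** 2       # noncentrality lam_p^2

# Stochastic ordering: P((Y + lam')^2 > c) >= P((Y + lam)^2 > c) for every c.
cs = [0.5, 1.0, 2.0, 4.0, 8.0]
surv_small = [float(np.mean(small > c)) for c in cs]
surv_large = [float(np.mean(large > c)) for c in cs]
```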
24. The size of each of the following tests is robust against nonnormality: (i) the test (35) as $b \to \infty$; (ii) the test (37) as $mb \to \infty$; (iii) the test (39) as $m \to \infty$. Note. Nonrobustness against inequality of variances is discussed in Brown and Forsythe (1974a).
23. In the two-way layout of Section 6 with $a = b = 2$, denote the first three terms in the partition of $\sum\sum\sum (X_{ijk} - \bar{X}_{\cdots})^2$ by $S_A^2$, $S_B^2$, and $S_{AB}^2$, corresponding to the $A$, $B$, and $AB$ effects (i.e. the $\alpha$'s, $\beta$'s, and $\gamma$'s), and denote by $H_A$, $H_B$, and $H_{AB}$ the hypotheses of these effects being zero. Define …