Testing Statistical Hypotheses 2nd Edition E. L. Lehmann - Solutions
22. The linear-hypothesis test of the hypothesis of no interaction in a two-way layout with m observations per cell is given by (39).
21. The Tukey T-method leads to the simultaneous confidence intervals (114): $\left|(\bar X_{j\cdot}-\bar X_{i\cdot})-(\mu_j-\mu_i)\right|\le cS/\sqrt{n(n-1)}$ for all $i,j$. [The probability of (114) is independent of the $\mu$'s and hence equal to $1-\alpha$.]
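A numerical illustration of the Tukey T-method may be useful here. The sketch below is a minimal Python example for a balanced one-way layout with made-up data; it assumes SciPy's studentized_range distribution (available from SciPy 1.7 onward) and uses the usual Tukey convention of scaling the studentized-range critical value by the pooled standard deviation, which may differ slightly from the exact normalization in (114).

# Tukey-type simultaneous confidence intervals for all pairwise differences
# mu_j - mu_i in a balanced one-way layout (illustrative data, not the book's notation).
import numpy as np
from scipy.stats import studentized_range

rng = np.random.default_rng(0)
s, n, alpha = 4, 10, 0.05                    # s groups, n observations per group
data = rng.normal(loc=[0.0, 0.5, 1.0, 1.5], scale=1.0, size=(n, s))

means = data.mean(axis=0)
df = s * (n - 1)                             # within-group degrees of freedom
mse = ((data - means) ** 2).sum() / df       # pooled estimate of sigma^2
q = studentized_range.ppf(1 - alpha, s, df)  # studentized-range critical value
half_width = q * np.sqrt(mse / n)            # common half-width for every pair

for i in range(s):
    for j in range(i + 1, s):
        d = means[j] - means[i]
        print(f"mu_{j+1} - mu_{i+1}: ({d - half_width:.3f}, {d + half_width:.3f})")

All of these intervals hold simultaneously with probability approximately $1-\alpha$, which is the point of the T-method.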
20. Show that the Tukey levels (vi) satisfy (29) when s is even but not when s is odd.
19. Prove Lemma 2 when s is odd.
18. In Lemma 1, show that $\alpha_{s-1}=\alpha_s$ is necessary for admissibility.
17. (i) For the validity of Theorem 1 it is only required that the probability of rejecting homogeneity of any set containing $\{\mu_{i_1},\dots,\mu_{i_v}\}$ as a proper subset tends to 1 as the distances (26) between the different groups all tend to $\infty$, with the analogous condition holding for …
16. Show that $\sum_{i=1}^{r+1}\Big(Y_i-\dfrac{Y_1+\dots+Y_{r+1}}{r+1}\Big)^2-\sum_{i=1}^{r}\Big(Y_i-\dfrac{Y_1+\dots+Y_r}{r}\Big)^2\ge 0.$
15. (i) If $X_1,\dots,X_n$ is a sample from a Poisson distribution with mean $E(X_i)=\lambda$, then $\sqrt n\,(\sqrt{\bar X}-\sqrt\lambda)$ tends in law to $N(0,\tfrac14)$ as $n\to\infty$. (ii) If $X$ has the binomial distribution $b(p,n)$, then $\sqrt n\,[\arcsin\sqrt{X/n}-\arcsin\sqrt p\,]$ tends in law to $N(0,\tfrac14)$ as $n\to\infty$. (iii) If $(X_1,Y_1),\dots,(X_n,Y_n)$ is a …
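The limiting variance of 1/4 in parts (i) and (ii) is easy to check by simulation; the following sketch uses arbitrary illustrative values of n, lambda and p.

# Monte Carlo check of the 1/4 limiting variance for the square-root (Poisson)
# and arcsine (binomial) variance-stabilizing transformations.
import numpy as np

rng = np.random.default_rng(1)
reps, n, lam, p = 100_000, 400, 3.0, 0.3

xbar = rng.poisson(lam, size=(reps, n)).mean(axis=1)
t_pois = np.sqrt(n) * (np.sqrt(xbar) - np.sqrt(lam))

x = rng.binomial(n, p, size=reps)
t_binom = np.sqrt(n) * (np.arcsin(np.sqrt(x / n)) - np.arcsin(np.sqrt(p)))

print("Poisson:  empirical variance =", round(t_pois.var(), 4), "(theory 0.25)")
print("Binomial: empirical variance =", round(t_binom.var(), 4), "(theory 0.25)")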
14. Let $Z_1,\dots,Z_s$ be independently distributed as $N(\zeta_i,a_i^2)$, $i=1,\dots,s$, where the $a_i$ are known constants. (i) With respect to a suitable group of linear transformations there exists a UMP invariant test of $H:\zeta_1=\dots=\zeta_s$ given by the rejection region (21). (ii) The power of this test is …
13. If the variables $X_{ij}$ ($j=1,\dots,n_i$; $i=1,\dots,s$) are independently distributed as $N(\mu_i,\sigma^2)$, then $E\big[\sum n_i(\bar X_{i\cdot}-\bar X_{\cdot\cdot})^2\big]=(s-1)\sigma^2+\sum n_i(\mu_i-\bar\mu_{\cdot})^2$, where $\bar\mu_{\cdot}=\sum n_i\mu_i/n$, and $E\big[\sum\sum(X_{ij}-\bar X_{i\cdot})^2\big]=(n-s)\sigma^2$.
12. Under the assumptions of the preceding problem suppose that $E(X_i)=\xi_i=\sum_{j=1}^{s}a_{ij}\beta_j$, $E(Y_i)=\eta_i=\sum_{j=1}^{s}b_{ij}\beta_j$, with the $n\times s$ matrices $A=(a_{ij})$ and $B=(b_{ij})$ of rank $s$. Then the experiment based on the $Y_i$ is more informative than that based on the $X_i$ if and only if $B'B-A'A$ is nonnegative definite.
11. Consider two experiments with observations $(X_1,\dots,X_n)$ and $(Y_1,\dots,Y_n)$ respectively, where the $X_i$ and $Y_i$ are independent normal with variance $\sigma^2=1$ and means $E(X_i)=c_i\theta_i$, $E(Y_i)=\theta_i$. Then the experiment based on the $Y_i$ is more informative than that based on the $X_i$ if and only if $|c_i|\le 1$ for all $i$.
10. Let $X_1,\dots,X_n$ be independently normally distributed with known variance $\sigma_0^2$ and means $E(X_i)=\xi_i$, and consider any linear hypothesis with $s\le n$ (instead of $s<n$, which is required when the variance is unknown). This remains invariant under a subgroup of that employed when the variance was …
9. Let $X_{ij}$ ($j=1,\dots,m_i$) and $Y_{ik}$ ($k=1,\dots,n_i$) be independently normally distributed with common variance $\sigma^2$ and means $E(X_{ij})=\xi_i$ and $E(Y_{ik})=\xi_i+\Delta$. Then the UMP invariant test of $H:\Delta=0$ is given by (110) with $\theta=\Delta$, $\theta_0=0$, and $\hat\xi_i=\big[\sum_{j=1}^{m_i}X_{ij}+\sum_{k=1}^{n_i}(Y_{ik}-\Delta)\big]\big/N_i$, …
8. Under the assumptions of Section 1 suppose that the means $\xi_i$ are given by $\xi_i=\sum_{j=1}^{s}a_{ij}\beta_j$, where the constants $a_{ij}$ are known and the matrix $A=(a_{ij})$ has full rank, and where the $\beta_j$ are unknown parameters. Let $\theta=\sum_{j=1}^{s}e_j\beta_j$ be a given linear combination of the $\beta_j$. (i) If $\hat\beta_j$ denotes the …
7. Given any $\psi_2>0$, apply Theorem 9 and Lemma 3 of Chapter 6 to obtain the F-test (7) as a Bayes test against a set $\Omega'$ of alternatives contained in the set $0<\psi\le\psi_2$.
6. Use Theorem 8 of Chapter 6 to show that the F-test (7) is $\alpha$-admissible against $\Omega':\psi\ge\psi_1$ for any $\psi_1>0$.
5. Best average power. (i) Consider the general linear hypothesis $H$ in the canonical form given by (2) and (3) of Section 1, and for any $\eta_{r+1},\dots,\eta_s$, $\sigma$, and $\rho$ let $S=S(\eta_{r+1},\dots,\eta_s,\sigma;\rho)$ denote the sphere $\{(\eta_1,\dots,\eta_r):\sum_{i=1}^{r}\eta_i^2/\sigma^2=\rho^2\}$. If $\beta_\phi(\eta_1,\dots,\eta_s,\sigma)$ denotes the …
4. (i) The noncentral $\chi^2$ and $F$ distributions have strictly monotone likelihood ratio. (ii) Under the assumptions of Section 1, the hypothesis $H':\psi^2\le\psi_0^2$ ($\psi_0>0$ given) remains invariant under the transformations $G_i$ ($i=1,2,3$) that were used to reduce $H:\psi=0$, and there exists a UMP …
3. Noncentral F- and beta-distribution. Let $Y_1,\dots,Y_r;\,Y_{s+1},\dots,Y_n$ be independently normally distributed with common variance $\sigma^2$ and means $E(Y_i)=\eta_i$ ($i=1,\dots,r$), $E(Y_i)=0$ ($i=s+1,\dots,n$). (i) The probability density of $W=\sum_{i=1}^{r}Y_i^2\big/\sum_{i=s+1}^{n}Y_i^2$ is given by (6). The …
2. Noncentral $\chi^2$-distribution. (i) If $X$ is distributed as $N(\psi,1)$, the probability density of $V=X^2$ is $p_V(v)=\sum_{k=0}^{\infty}P_k(\psi)f_{2k+1}(v)$, where $P_k(\psi)=(\psi^2/2)^k e^{-\psi^2/2}/k!$ and where $f_{2k+1}$ is the probability density of a $\chi^2$-variable with $2k+1$ degrees of freedom. (ii) Let $Y_1,\dots$ …
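The Poisson-mixture representation in (i) can be verified numerically; the sketch below (illustrative value of psi only) truncates the series and compares it with SciPy's noncentral chi-squared density with 1 degree of freedom and noncentrality psi^2.

# Check that sum_k P_k(psi) f_{2k+1}(v) equals the density of a noncentral
# chi^2 variable with 1 degree of freedom and noncentrality parameter psi^2.
import numpy as np
from scipy.stats import chi2, ncx2, poisson

psi = 1.7
v = np.linspace(0.05, 10.0, 50)

k = np.arange(200)                            # truncation of the Poisson series
weights = poisson.pmf(k, psi**2 / 2)          # P_k(psi) = (psi^2/2)^k e^{-psi^2/2} / k!
mixture = (weights[:, None] * chi2.pdf(v[None, :], df=2 * k[:, None] + 1)).sum(axis=0)

print("max abs difference:", np.abs(mixture - ncx2.pdf(v, df=1, nc=psi**2)).max())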
1. Expected sums of squares. The expected values of the numerator and denominator of the statistic $W^*$ defined by (7) are $E\Big[\sum_{i=1}^{r}\frac{Y_i^2}{r}\Big]=\sigma^2+\frac1r\sum_{i=1}^{r}\eta_i^2$ and $E\Big[\sum_{i=s+1}^{n}\frac{Y_i^2}{n-s}\Big]=\sigma^2$.
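These two identities can be checked directly in the canonical form; the sketch below uses arbitrary illustrative choices of r, s, n, the eta's and sigma.

# Monte Carlo check of E[sum_{i<=r} Y_i^2 / r] = sigma^2 + sum eta_i^2 / r and
# E[sum_{i>s} Y_i^2 / (n-s)] = sigma^2 in the canonical linear-hypothesis form.
import numpy as np

rng = np.random.default_rng(2)
r, s, n, sigma = 3, 5, 12, 2.0
eta = np.zeros(n)
eta[:r] = [1.0, -0.5, 2.0]                    # Y_{s+1},...,Y_n keep mean zero

reps = 200_000
Y = rng.normal(eta, sigma, size=(reps, n))
numerator = (Y[:, :r] ** 2).sum(axis=1) / r
denominator = (Y[:, s:] ** 2).sum(axis=1) / (n - s)

print("numerator:  ", numerator.mean(), " theory:", sigma**2 + (eta[:r] ** 2).sum() / r)
print("denominator:", denominator.mean(), " theory:", sigma**2)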
81. Under the assumptions of Problem 79, suppose that a family of confidence sets $S(x)$ is equivariant under $G^*$. Then there exists a set $B$ in the range space of the pivotal $V$ such that (70) holds. In this sense, all equivariant confidence sets can be obtained from pivotals. [Let $A$ be the subset of …
80. Under the assumptions of the preceding problem, the confidence set $S(x)$ is equivariant under $G^*$.
79. (i) If $G$ is transitive over $\mathcal X\times\omega$ and $V(X,\theta)$ is maximal invariant under $G$, then $V(X,\theta)$ is pivotal. (ii) By (i), any quantity $W(X,\theta)$ which is invariant under $G$ is pivotal; give an example showing that the converse need not be true.
78. Let $V(X,\theta)$ be any pivotal quantity [i.e., have a fixed probability distribution independent of $(\theta,\vartheta)$], and let $B$ be any set in the range space of $V$ with probability $P(V\in B)=1-\alpha$. Then the sets $S(x)$ defined by (70), $\theta\in S(x)$ if and only if $V(x,\theta)\in B$, are confidence sets for $\theta$ with confidence coefficient $1-\alpha$.
77. (i) Let $X_1,\dots,X_m;\,Y_1,\dots,Y_n$ be i.i.d. according to a continuous distribution $F$, let the ranks of the $Y$'s be $S_1<\dots<S_n$, and let $T=h(S_1)+\dots+h(S_n)$. Then if either $m=n$ or $h(s)+h(N+1-s)$ is independent of $s$, the distribution of $T$ is symmetric about $n\sum_{i=1}^{N}h(i)/N$. (ii) …
76. The Kolmogorov test (56) for testing $H:F=F_0$ ($F_0$ continuous) is consistent against any alternative $F_1\ne F_0$; that is, its power against any fixed $F_1$ tends to 1 as $n\to\infty$. [The critical value $\Delta=\Delta_n$ of (56) corresponding to a given $\alpha$ satisfies $\sqrt n\,\Delta_n\to K$ for some $K>0$ as $n\to\infty$. Let $a$ be …
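Consistency can also be seen empirically: the sketch below (my own illustrative setup, with F_0 the standard normal and the data actually drawn from a standard Laplace distribution) estimates the power of the Kolmogorov test at a few sample sizes.

# Estimated power of the Kolmogorov test of H: F = N(0,1) when the observations
# come from a fixed alternative (standard Laplace); the power should approach 1.
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(3)
alpha, reps = 0.05, 1000

for n in (20, 50, 100, 200, 500):
    rejections = sum(kstest(rng.laplace(size=n), 'norm').pvalue < alpha
                     for _ in range(reps))
    print(f"n = {n:4d}   estimated power = {rejections / reps:.3f}")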
75. The totality of permutations of $K$ distinct numbers $a_1,\dots,a_K$, for varying $a_1,\dots,a_K$, can be represented as a subset $C_K$ of Euclidean $K$-space $R^K$, and the group $G$ of Example 8 as the union of $C_2,C_3,\dots$. Let $\nu$ be the measure over $G$ which assigns to a subset $B$ of $G$ the value …
74. Let $X_1,\dots,X_n$ be a sample from $N(\xi,\sigma^2)$, and consider the UMP invariant level-$\alpha$ test of $H:\xi/\sigma\le\theta_0$ (Section 6.4). Let $\alpha_n(F)$ be the actual significance level of this test when $X_1,\dots,X_n$ is a sample from a distribution $F$ with $E(X_i)=\xi$, $\operatorname{Var}(X_i)=\sigma^2<\infty$. Then the relation $\alpha_n(F)\to\alpha$ will not …
73. The following UMP unbiased tests of Chapter 5 are also UMP invariant under change in scale: (i) The test of $g\le g_0$ in a gamma distribution (Problem 73 of Chapter 5). (ii) The test of $b_1\le b_2$ in Problem 75(i) of Chapter 5.
72. The UMP invariant test of Problem 69 is also UMP similar. [Consider the problem of testing $a=0$ vs. $a>0$ in the two-parameter exponential family with density $C(a,\tau)\exp\Big(-\dfrac{a}{2\tau^2}\sum x_i^2-\dfrac{1-a}{\tau}\sum|x_i|\Big)$, $0\le a$ …
71. Show that the test of Problem 5(i) reduces to (i) $[x_{(n)}-x_{(1)}]/S<c$ for normal vs. uniform; (ii) $[\bar x-x_{(1)}]/S<c$ for normal vs. exponential; (iii) $[\bar x-x_{(1)}]/[x_{(n)}-x_{(1)}]<c$ for uniform vs. exponential. (Uthoff, 1970.) Note: When testing for normality, one is typically not interested in …
70. Uniform vs. triangular. (i) For $f_0(x)=1$ ($0<x<1$), $f_1(x)=2x$ ($0<x<1$), the test of Problem 68 reduces to rejecting when $T=x_{(n)}/\tilde x<C$, where $\tilde x$ denotes the geometric mean of the observations. (ii) Under $f_0$ the statistic $2n\log T$ is distributed as $\chi^2_{2(n-1)}$. (Quesenberry and Starbuck, 1976.)
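As a sanity check on (ii), the sketch below simulates T under the uniform density f_0, with T read as the ratio of the sample maximum to the geometric mean, and compares quantiles of 2n log T with those of the chi-squared distribution on 2(n - 1) degrees of freedom; both the form of T and the degrees of freedom follow the reading given above, so treat the comparison as illustrative.

# Simulate T = x_(n) / (geometric mean) under f_0 = U(0,1) and compare
# 2 n log T with a chi^2 distribution on 2(n-1) degrees of freedom
# (statistic and degrees of freedom as assumed in the lead-in above).
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(4)
n, reps = 8, 200_000
x = rng.uniform(size=(reps, n))
T = x.max(axis=1) / np.exp(np.log(x).mean(axis=1))
stat = 2 * n * np.log(T)

qs = [0.10, 0.50, 0.90, 0.95]
print("empirical quantiles:     ", np.quantile(stat, qs))
print("chi^2_{2(n-1)} quantiles:", chi2.ppf(qs, 2 * (n - 1)))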
69. Normal vs. double exponential. For $f_0(x)=e^{-x^2/2}/\sqrt{2\pi}$, $f_1(x)=e^{-|x|}/2$, the test of the preceding problem reduces to rejecting when $\sqrt{\sum x_i^2}\big/\sum|x_i|>C$.
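Because the cutoff C is only defined implicitly, one practical route is to calibrate it by simulation under the normal null; the sketch below does that for an illustrative n and also estimates the power against the double-exponential alternative (the rejection direction follows the likelihood-ratio calculation, as assumed above).

# Monte Carlo calibration of the scale-invariant statistic
# sqrt(sum x_i^2) / sum |x_i| under the normal null, rejecting for large values.
import numpy as np

rng = np.random.default_rng(5)
n, alpha, reps = 20, 0.05, 200_000

x = rng.normal(size=(reps, n))
stat_null = np.sqrt((x ** 2).sum(axis=1)) / np.abs(x).sum(axis=1)
crit = np.quantile(stat_null, 1 - alpha)
print("approximate critical value C:", crit)

y = rng.laplace(size=(reps, n))               # double-exponential alternative
stat_alt = np.sqrt((y ** 2).sum(axis=1)) / np.abs(y).sum(axis=1)
print("estimated power:", (stat_alt > crit).mean())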
68. Let $X_1,\dots,X_n$ be a sample from a distribution with density $\frac{1}{\tau^n}f\big(\frac{x_1}{\tau}\big)\cdots f\big(\frac{x_n}{\tau}\big)$, where $f(x)$ is either zero for $x<0$ or symmetric about zero. The most powerful scale-invariant test for testing $H:f=f_0$ against $K:f=f_1$ rejects when $\dfrac{\int_0^\infty u^{n-1}f_1(ux_1)\cdots f_1(ux_n)\,du}{\int_0^\infty u^{n-1}f_0(ux_1)\cdots f_0(ux_n)\,du}>C.$
67. Consider the problem of obtaining a (two-sided) confidence band for an unknown continuous cumulative distribution function $F$. (i) Show that this problem is invariant both under strictly increasing and strictly decreasing continuous transformations $X_i'=f(X_i)$, $i=1,\dots,n$, and determine a …
66. If the confidence sets $S(x)$ are equivariant under the group $G$, then the probability $P_\theta\{\theta\in S(X)\}$ of their covering the true value is invariant under the induced group $\bar G$.
65. Let $X_{ij}$ ($j=1,\dots,n_i$; $i=1,\dots,s$) be samples from the exponential distribution $E(\xi_i,\sigma)$. Determine the smallest equivariant confidence sets for $(\xi_1,\dots,\xi_s)$ with respect to the group $X_{ij}'=bX_{ij}+a_i$.
64. Let $X_1,\dots,X_n$ be a sample from the exponential distribution $E(\xi,\sigma)$. With respect to the transformations $X_i'=bX_i+a$ determine the smallest equivariant confidence sets (i) for $\sigma$, both when size is defined by Lebesgue measure and by the equivariant measure (39); (ii) for $\xi$.
63. Solve the problem corresponding to Example 20 when (i) $X_1,\dots,X_n$ is a sample from the exponential density $E(\xi,\sigma)$, and the parameter being estimated is $\sigma$; (ii) $X_1,\dots,X_n$ is a sample from the uniform density $U(\xi,\xi+\tau)$, and the parameter being estimated is $\tau$.
62. Generalize the confidence sets of Example 18 to the case that the $X_i$ are $N(\xi_i,d_i\sigma^2)$, where the $d_i$ are known constants.
61. Let $X_1,\dots,X_m;\,Y_1,\dots,Y_n$ be independently normally distributed as $N(\xi,\sigma^2)$ and $N(\eta,\sigma^2)$ respectively. Determine the equivariant confidence sets for $\eta-\xi$ that have smallest Lebesgue measure when (i) $\sigma$ is known; (ii) $\sigma$ is unknown.
60. In Example 23, show that (i) both sets (55) are intervals; (ii) the sets given by $v\,p(v)>C$ coincide with the intervals (42) of Chapter 5.
59. The confidence sets (47) are uniformly most accurate equivariant under the group G defined at the end of Example 22.
58. Let $X_1,\dots,X_r$ be independent $N(0,1)$, and let $S^2$ be independent of the $X$'s and distributed as $\chi^2_\nu$. Then the distribution of $(X_1/(S/\sqrt\nu),\dots,X_r/(S/\sqrt\nu))$ is a central multivariate t-distribution, and its density is $p(v_1,\dots,v_r)=\dfrac{\Gamma\big(\frac12(\nu+r)\big)}{(\pi\nu)^{r/2}\Gamma\big(\frac12\nu\big)}\Big(1+\frac1\nu\sum v_i^2\Big)^{-\frac12(\nu+r)}.$
57. Show that in Example 20, (i) the confidence sets $\sigma^2/S^2\in A^{**}$ with $A^{**}$ given by (40) coincide with the uniformly most accurate unbiased confidence sets for $\sigma^2$; (ii) if $(a,b)$ is best with respect to (39) for $\sigma$, then $(a^r,b^r)$ is best for $\sigma^r$ ($r>0$).
56. In Example 20, the density $p(v)$ of $V=\sigma^2/S^2$ is unimodal.
55. In Examples 20 and 21 there do not exist equivariant sets that uniformly minimize the probability of covering false values.
54. (i) Let $(X_1,Y_1),\dots,(X_n,Y_n)$ be a sample from a bivariate normal distribution, and let $\underline\rho=C^{-1}\Big(\frac{\sum(X_i-\bar X)(Y_i-\bar Y)}{\sqrt{\sum(X_i-\bar X)^2\sum(Y_i-\bar Y)^2}}\Big)$, where $C(\rho)$ is determined such that $P_\rho\Big\{\frac{\sum(X_i-\bar X)(Y_i-\bar Y)}{\sqrt{\sum(X_i-\bar X)^2\sum(Y_i-\bar Y)^2}}\ge C(\rho)\Big\}=1-\alpha$. Then $\underline\rho$ is a lower confidence limit for the …
53. (i) Let $X_1,\dots,X_n$ be independently distributed as $N(\xi,\sigma^2)$, and let $\theta=\xi/\sigma$. The lower confidence bounds $\underline\theta$ for $\theta$, which at confidence level $1-\alpha$ are uniformly most accurate invariant under the transformations $X_i'=aX_i$, are $\underline\theta=C^{-1}\Big(\sqrt n\,\bar X\Big/\sqrt{\sum(X_i-\bar X)^2/(n-1)}\Big)$, where the function $C(\theta)$ is …
52. Counterexample. The following example shows that the equivariance of $S(x)$ assumed in the paragraph following Lemma 5 does not follow from the other assumptions of this lemma. In Example 8, let $n=1$, let $G^{(1)}$ be the group $G$ of Example 8, and let $G^{(2)}$ be the corresponding group when the roles of …
51. One-sided equivariant confidence limits. (i) Let $\theta$ be real-valued, and suppose that for each $\theta_0$, the problem of testing $\theta\le\theta_0$ against $\theta>\theta_0$ (in the presence of nuisance parameters $\vartheta$) remains invariant under a group …
50. Let $X_1,\dots,X_m;\,Y_1,\dots,Y_n$ be samples from $N(\xi,\sigma^2)$ and $N(\eta,\tau^2)$ respectively. Then the confidence intervals (43) of Chapter 5 for $\tau^2/\sigma^2$, which can be written as $\dfrac{\sum(Y_j-\bar Y)^2}{k\sum(X_i-\bar X)^2}\le\dfrac{\tau^2}{\sigma^2}\le\dfrac{k\sum(Y_j-\bar Y)^2}{\sum(X_i-\bar X)^2}$, are uniformly most accurate equivariant with respect to …
49. In Example 16, a family of sets $S(x,y)$ is a class of equivariant confidence sets if and only if there exists a set $\mathcal R$ of real numbers such that $S(x,y)=\bigcup_{r\in\mathcal R}\{(\xi,\eta):(x-\xi)^2+(y-\eta)^2=r^2\}$.
48. The hypothesis of independence. Let $(X_1,Y_1),\dots,(X_N,Y_N)$ be a sample from a bivariate distribution, and $(X_{(1)},Z_1),\dots,(X_{(N)},Z_N)$ be the same sample arranged according to increasing values of the $X$'s, so that the $Z$'s are a permutation of the $Y$'s. Let $R_i$ be the rank of $X_i$ among the $X$'s, …
47. In the preceding problem let $U_{ij}=1$ if $(j-i)(Z_j-Z_i)>0$, and $=0$ otherwise. (i) The test statistic $\sum iT_i$ can be expressed in terms of the $U$'s through the relation $\sum_{i=1}^{N}iT_i=\sum_{i<j}(j-i)U_{ij}+\frac{N(N+1)(N+2)}{6}$, which exhibits $\sum_{i<j}(j-i)U_{ij}$ as another rejection region for the preceding problem. [(i): Let $V_{ij}=1$ or $0$ as $Z_i\le Z_j$ or $Z_i>Z_j$ …
46. The hypothesis of randomness. Let $Z_1,\dots,Z_N$ be independently distributed with distributions $F_1,\dots,F_N$, and let $T_i$ denote the rank of $Z_i$ among the $Z$'s. For testing the hypothesis of randomness $F_1=\dots=F_N$ against the alternatives $K$ of an upward trend, namely that $Z_i$ is stochastically increasing with $i$, …
45. Unbiased tests of symmetry. Let $Z_1,\dots,Z_N$ be a sample, and let $\phi$ be any rank test of the hypothesis of symmetry with respect to the origin such that $z_i\le z_i'$ for all $i$ implies $\phi(z_1,\dots,z_N)\le\phi(z_1',\dots,z_N')$. Then $\phi$ is unbiased against the one-sided alternatives that the $Z$'s …
44. An alternative expression for (66) is obtained if the distribution of $Z$ is characterized by $(p,F,G)$. If then $G=h(F)$ and $h$ is differentiable, the distribution of $n$ and the $S_j$ is given by (67) $p^m(1-p)^n\,E\big[h'(U_{(s_1)})\cdots h'(U_{(s_n)})\big]$, where $U_{(1)}<\dots<U_{(N)}$ is an ordered sample from $U(0,1)$.
43. Let $Z_1,\dots,Z_N$ be a sample from a distribution with density $f(z-\theta)$, where $f(z)$ is positive for all $z$ and $f$ is symmetric about 0, and let $m$, $n$, and the $S_j$ be defined as in the preceding problem. (i) The distribution of $n$ and the $S_j$ is given by (66) $P\{\text{the number of positive }Z\text{'s is }n\text{ and }S_1=s_1,\dots,S_n=s_n\}=$ …
42. (i) Let $m$ and $n$ be the numbers of negative and positive observations among $Z_1,\dots,Z_N$, and let $S_1<\dots<S_n$ denote the ranks of the positive $Z$'s among $|Z_1|,\dots,|Z_N|$. Consider the $N+\tfrac12N(N-1)$ distinct sums $Z_i+Z_j$ with $i=j$ as well as $i<j$. The Wilcoxon signed-rank statistic $\sum S_j$ is equal to the number of these sums that are positive. …
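The identity in (i) is easy to confirm numerically; the sketch below computes the signed-rank statistic directly from the ranks of |Z| and compares it with the count of positive pair sums Z_i + Z_j, i <= j (arbitrary simulated data, continuous so there are no ties).

# Check that the Wilcoxon signed-rank statistic (sum of the ranks of |Z| taken
# over the positive Z's) equals the number of sums Z_i + Z_j, i <= j, that are > 0.
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(6)
z = rng.normal(loc=0.3, size=25)              # continuous sample, so no ties or zeros

ranks = rankdata(np.abs(z))
signed_rank_stat = ranks[z > 0].sum()

i, j = np.triu_indices(len(z))                # all pairs with i <= j, including i == j
walsh_count = np.sum(z[i] + z[j] > 0)

print(signed_rank_stat, walsh_count)          # the two values coincide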
41. Continuation. (i) There exists at every significance level $\alpha$ a test of $H:G=F$ which has power $>\alpha$ against all continuous alternatives $(F,G)$ with $F\ne G$. (ii) There does not exist a nonrandomized unbiased rank test of $H$ against all $G\ne F$ at level $\alpha=1\big/\binom{m+n}{n}$. [(i): Let $X_i,X_i';\,Y_i,Y_i'$ ($i=1,\dots,n$) …
40. (i) Let $X,X'$ and $Y,Y'$ be independent samples of size 2 from continuous distributions $F$ and $G$ respectively. Then $p=P\{\max(X,X')<\min(Y,Y')\}+P\{\max(Y,Y')<\min(X,X')\}=\tfrac13+2\Delta$, where $\Delta=\int(F-G)^2\,d[(F+G)/2]$. (ii) $\Delta=0$ if and only if $F=G$. [(i): $p=\int(1-F)^2\,dG^2+\int(1-G)^2\,dF^2$, which after some …
39. (i) If $X_1,\dots,X_m$ and $Y_1,\dots,Y_n$ are samples from $F(x)$ and $G(y)=F(y-\Delta)$ respectively ($F$ continuous), and $D_{(1)}<$ …
38. Let $X_1,\dots,X_m;\,Y_1,\dots,Y_n$ be samples from a common continuous distribution $F$. Then the Wilcoxon statistic $U$ defined in Problem 27 is distributed symmetrically about $\tfrac12mn$ even when $m\ne n$.
37. Let $\mathcal F_0$ be a family of probability measures over $(\mathcal X,\mathcal A)$, and let $\mathscr C$ be a class of transformations of the space $\mathcal X$. Define a class $\mathcal F_1$ of distributions by $F_1\in\mathcal F_1$ if there exist $F_0\in\mathcal F_0$ and $f\in\mathscr C$ such that the distribution of $f(X)$ is $F_1$ when that of $X$ is $F_0$. If $\phi_0$ is any test satisfying (a) $E_{F_0}\phi_0(X)=\alpha$ for all $F_0\in\mathcal F_0$, and (b) $\phi_0(x)$ …
36. An alternative proof of the optimum property of the Wilcoxon test for detecting a shift in the logistic distribution is obtained from the preceding problem by equating $F(x-\theta)$ with $(1-\theta)F(x)+\theta F^2(x)$, neglecting powers of $\theta$ higher than the first. This leads to the differential equation $F(x)-\theta F'(x)=(1-\theta)F(x)+\theta F^2(x)$ …
35. For sufficiently small $\theta>0$, the Wilcoxon test at level $\alpha=k\big/\binom{m+n}{n}$, $k$ a positive integer, maximizes the power (among rank tests) against the alternatives $(F,G)$ with $G=(1-\theta)F+\theta F^2$.
34. (i) If $X_1,\dots,X_m$ and $Y_1,\dots,Y_n$ are samples with continuous cumulative distribution functions $F$ and $G=h(F)$ respectively, and if $h$ is differentiable, the distribution of the ranks $S_1<\dots<S_n$ of the $Y$'s …
33. Distribution of order statistics. (i) If $Z_1,\dots,Z_N$ is a sample from a cumulative distribution function $F$ with density $f$, the joint density of $Y_i=Z_{(s_i)}$, $i=1,\dots,n$, is (62) $\dfrac{N!}{(s_1-1)!\,(s_2-s_1-1)!\cdots(N-s_n)!}\,f(y_1)\cdots f(y_n)\,[F(y_1)]^{s_1-1}[F(y_2)-F(y_1)]^{s_2-s_1-1}\cdots[1-F(y_n)]^{N-s_n}$ for $y_1<\dots<y_n$. (ii) For the particular case that the $Z$'s are …
32. Under the assumptions of the preceding problem, if $F_i=h_i(F)$, the distribution of the ranks $T_1,\dots,T_N$ of $Z_1,\dots,Z_N$ depends only on the $h_i$, not on $F$. If the $h_i$ are differentiable, the distribution of the $T_i$ is given by (61) $P\{T_1=t_1,\dots,T_N=t_N\}=\dfrac{E\big[h_1'(U_{(t_1)})\cdots h_N'(U_{(t_N)})\big]}{N!}$, where $U_{(1)}<\dots<U_{(N)}$ …
31. Let $Z_i$ have a continuous cumulative distribution function $F_i$ ($i=1,\dots,N$), and let $G$ be the group of all transformations $Z_i'=f(Z_i)$ such that $f$ is continuous and strictly increasing. (i) The transformation induced by $f$ in the space of distributions is $F_i'=F_i(f^{-1})$. (ii) Two $N$-tuples of distributions …
30. (i) For any continuous cumulative distribution function $F$, define $F^{-1}(0)=-\infty$, $F^{-1}(y)=\inf\{x:F(x)=y\}$ for $0<y<1$, and $F^{-1}(1)=\infty$ if $F(x)$ …
29. (i) Let $Z_1,\dots,Z_N$ be independently distributed with densities $f_1,\dots,f_N$, and let the rank of $Z_i$ be denoted by $T_i$. If $f$ is any probability density which is positive whenever at least one of the $f_i$ is positive, then (60) $P\{T_1=t_1,\dots,T_N=t_N\}=\dfrac{1}{N!}\,E\Big[\dfrac{f_1(V_{(t_1)})\cdots f_N(V_{(t_N)})}{f(V_{(t_1)})\cdots f(V_{(t_N)})}\Big]$, where $V_{(1)}<\dots<V_{(N)}$ is an ordered sample from a distribution with density $f$.
28. Expectation and variance of the Wilcoxon statistic. If the $X$'s and $Y$'s are samples from continuous distributions $F$ and $G$ respectively, the expectation and variance of the Wilcoxon statistic $U$ defined in the preceding problem are given by (57) $E\big(\frac{U}{mn}\big)=P\{X<Y\}=\int F\,dG$ and (58) $mn\operatorname{Var}\big(\frac{U}{mn}\big)=$ …
27. Wilcoxon two-sample test. Let $U_{ij}=1$ or $0$ as $X_i<Y_j$ or $X_i>Y_j$, and let $U=\sum\sum U_{ij}$ be the number of pairs $(X_i,Y_j)$ with $X_i<Y_j$. (i) Then $U=\sum S_j-\tfrac12n(n+1)$, where $S_1<\dots<S_n$ are the ranks of the $Y$'s, so that the test with rejection region $U>C$ is equivalent to the Wilcoxon test. (ii) Any …
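Both the rank-sum identity in (i) and the expectation formula of Problem 28 above are easy to check numerically; the sketch below uses continuous simulated samples (so there are no ties) and a small Monte Carlo for E(U/mn) = P{X < Y}.

# (a) Verify U = (sum of the Y-ranks) - n(n+1)/2 for one pair of samples.
# (b) Monte Carlo check that E(U/mn) = P{X < Y}.
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(7)
m, n = 12, 9
x = rng.normal(size=m)
y = rng.normal(loc=0.5, size=n)               # arbitrary shift alternative

u = np.sum(x[:, None] < y[None, :])           # number of pairs with X_i < Y_j
ranks = rankdata(np.concatenate([x, y]))      # ranks in the combined sample
print(u, ranks[m:].sum() - n * (n + 1) / 2)   # the two expressions agree

reps = 50_000
xs = rng.normal(size=(reps, m))
ys = rng.normal(loc=0.5, size=(reps, n))
u_over_mn = (xs[:, :, None] < ys[:, None, :]).mean(axis=(1, 2))
# For X ~ N(0,1) and Y ~ N(0.5,1), P{X < Y} = Phi(0.5 / sqrt(2)).
print(u_over_mn.mean(), norm.cdf(0.5 / np.sqrt(2)))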
26. For the model of the preceding problem, generalize Example 13 (continued) to show that the two-sided t-test is a Bayes solution for an appropriate prior distribution.
25. Let $X_1,\dots,X_m;\,Y_1,\dots,Y_n$ be independent $N(\xi,\sigma^2)$ and $N(\eta,\sigma^2)$ respectively. The one-sided t-test of $H:\delta/\sigma\le0$ is admissible against the alternatives (i) $0<\delta/\sigma\le\delta_2$; (ii) $\delta/\sigma\ge\delta_2$, for any $\delta_2>0$.
24. Verify (i) the admissibility of the rejection region (22); (ii) the expression for I(Z) given in the proof of Lemma 3.
23. (i) In Example 13 (continued) show that there exist $C_0$, $C_1$ such that $\lambda_0(\eta)$ and $\lambda_1(\eta)$ are probability densities (with respect to Lebesgue measure). (ii) Verify the densities $h_0$ and $h_1$.
22. (i) The acceptance region $T_1/T_2\le C$ of Example 13 is a convex set in the $(T_1,T_2)$ plane. (ii) In Example 13, the conditions of Theorem 8 are not satisfied for the sets $A:T_1/T_2\le C$ and $\Omega':\theta>k$.
21. (i) The following example shows that $\alpha$-admissibility does not always imply $d$-admissibility. Let $X$ be distributed as $U(0,\theta)$, and consider the tests $\varphi_1$ and $\varphi_2$ which reject when respectively $X<1$ and $X<\tfrac12$, for testing $H:\theta=2$ against $K:\theta=1$. Then for $\alpha=\tfrac12$, $\varphi_1$ and $\varphi_2$ are both …
20. The definition of d-admissibility of a test coincides with the admissibility definition given in Chapter 1, Section 8 when applied to a two-decision procedure with loss 0 or 1 as the decision taken is correct or false.
19. Let $G$ be a group of transformations of $\mathcal X$, let $\mathcal A$ be a $\sigma$-field of subsets of $\mathcal X$, and let $\mu$ be a measure over $(\mathcal X,\mathcal A)$. Then a set $A\in\mathcal A$ is said to be almost invariant if its indicator function is almost invariant. (i) The totality of almost invariant sets forms a $\sigma$-field $\mathcal A_0$, and a …
18. Inadmissible likelihood-ratio test. In many applications in which a UMP invariant test exists, it coincides with the likelihood-ratio test. That this is …
17. Invariance of likelihood ratio. Let the family of distributions $\mathcal P=\{P_\theta,\theta\in\Omega\}$ be dominated by $\mu$, let $p_\theta=dP_\theta/d\mu$, let $\mu g^{-1}$ be the measure defined by $\mu g^{-1}(A)=\mu[g^{-1}(A)]$, and suppose that $\mu$ is absolutely continuous with respect to $\mu g^{-1}$ for all $g\in G$. (i) Then $p_\theta(x)=p_{\bar g\theta}(gx)\,\dfrac{d\mu}{d\mu g^{-1}}(gx)$ …
16. (i) A generalization of equation (1) is $\int_A f(x)\,dP_\theta(x)=\int_{gA}f(g^{-1}x)\,dP_{\bar g\theta}(x)$. (ii) If $P_{\theta_1}$ is absolutely continuous with respect to $P_{\theta_0}$, then $P_{\bar g\theta_1}$ is absolutely continuous with respect to $P_{\bar g\theta_0}$ and $\dfrac{dP_{\theta_1}}{dP_{\theta_0}}(x)=\dfrac{dP_{\bar g\theta_1}}{dP_{\bar g\theta_0}}(gx)$ (a.e. $P_{\theta_0}$). (iii) The distribution of $dP_{\theta_1}/dP_{\theta_0}(X)$ …
15. Envelope power function. Let $S(\alpha)$ be the class of all level-$\alpha$ tests of a hypothesis $H$, and let $\beta_\alpha^*(\theta)$ be the envelope power function, defined by $\beta_\alpha^*(\theta)=\sup_{\phi\in S(\alpha)}\beta_\phi(\theta)$, where $\beta_\phi$ denotes the power function of $\phi$. If the problem of testing $H$ is invariant under a group $G$, then $\beta_\alpha^*(\theta)$ is invariant under the induced group $\bar G$.
14. Consider a testing problem which is invariant under a group $G$ of transformations of the sample space, and let $\mathscr C$ be a class of tests which is closed under $G$, so that $\phi\in\mathscr C$ implies $\phi g\in\mathscr C$, where $\phi g$ is the test defined by $\phi g(x)=\phi(gx)$. If there exists an a.e. unique UMP member $\phi_0$ of $\mathscr C$, then $\phi_0$ is almost invariant.
13. Show that (i) $G_1$ of Example 11 is a group; (ii) the test which rejects when $X_{21}^2/X_{11}^2>C$ is UMP invariant under $G_1$; (iii) the smallest group containing $G_1$ and $G_2$ is the group $G$ of Example 11.
12. Almost invariance of a test $\phi$ with respect to the group $G$ of either Problem 6(i) or Example 6 implies that $\phi$ is equivalent to an invariant test.
11. For testing the hypothesis that the correlation coefficient $\rho$ of a bivariate normal distribution is $\le\rho_0$, determine the power against the alternative $\rho=\rho_1$ when the level of significance $\alpha$ is .05, $\rho_0=.3$, $\rho_1=.5$, and the sample size $n$ is 50, 100, 200.
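A quick approximate answer can be obtained from Fisher's z-transformation rather than the exact distribution of the sample correlation coefficient; the sketch below is only an approximation to what the problem asks for, but it gives the order of magnitude of the power at n = 50, 100, 200.

# Approximate power of the level-.05 test of rho <= .3 against rho = .5,
# treating z(r) = arctanh(r) as N(arctanh(rho), 1/(n-3)).
import numpy as np
from scipy.stats import norm

alpha, rho0, rho1 = 0.05, 0.3, 0.5
z0, z1 = np.arctanh(rho0), np.arctanh(rho1)

for n in (50, 100, 200):
    se = 1.0 / np.sqrt(n - 3)
    power = 1 - norm.cdf(norm.ppf(1 - alpha) - (z1 - z0) / se)
    print(f"n = {n:3d}   approximate power = {power:.3f}")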
10. Testing a correlation coefficient. Let $(X_1,Y_1),\dots,(X_n,Y_n)$ be a sample from a bivariate normal distribution. (i) For testing $\rho\le\rho_0$ against $\rho>\rho_0$ there exists a UMP invariant test with respect to the group of all transformations $X_i'=aX_i+b$, $Y_i'=cY_i+d$ for which $a,c>0$. This test …
9. Two-sided t-test. (i) Let $X_1,\dots,X_n$ be a sample from $N(\xi,\sigma^2)$. For testing $\xi=0$ against $\xi\ne0$, there exists a UMP invariant test with respect to the group $X_i'=cX_i$, $c\ne0$, given by the two-sided t-test (17) of Chapter 5. (ii) Let $X_1,\dots,X_m$ and $Y_1,\dots,Y_n$ be samples from $N(\xi,\sigma^2)$ and …
8. (i) When testing $H:p\le p_0$ against $K:p>p_0$ by means of the test corresponding to (11), determine the sample size required to obtain power $\beta$ against $p=p_1$, $\alpha=.05$, $\beta=.9$ for the cases $p_0=.1$, $p_1=.15,.20,.25$; $p_0=.05$, $p_1=.10,.15,.20,.25$; $p_0=.01$, $p_1=.02,.05,.10,.15,.20$.
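For a rough numerical answer, the usual normal approximation to the binomial gives a closed-form sample size; the sketch below applies it to each of the listed cases (the exact test corresponding to (11) will require somewhat different n, so these values are only approximate).

# Approximate sample size for a one-sided level-alpha binomial test of p <= p0
# to reach power beta at p = p1, using the normal approximation.
import math
from scipy.stats import norm

def approx_n(p0, p1, alpha=0.05, beta=0.9):
    za, zb = norm.ppf(1 - alpha), norm.ppf(beta)
    num = za * math.sqrt(p0 * (1 - p0)) + zb * math.sqrt(p1 * (1 - p1))
    return math.ceil((num / (p1 - p0)) ** 2)

cases = {0.10: [0.15, 0.20, 0.25],
         0.05: [0.10, 0.15, 0.20, 0.25],
         0.01: [0.02, 0.05, 0.10, 0.15, 0.20]}
for p0, p1_list in cases.items():
    for p1 in p1_list:
        print(f"p0 = {p0:.2f}, p1 = {p1:.2f}:  n ~ {approx_n(p0, p1)}")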
7. If $X_1,\dots,X_n$ and $Y_1,\dots,Y_n$ are samples from $N(\xi,\sigma^2)$ and $N(\eta,\tau^2)$ respectively, the problem of testing $\tau^2=\sigma^2$ against the two-sided alternatives $\tau^2\ne\sigma^2$ remains invariant under the group $G$ generated by the transformations $X_i'=aX_i+b$, $Y_i'=aY_i+c$, $a\ne0$, and $X_i'=Y_i$, $Y_i'=X_i$.
6. Let $X_1,\dots,X_m;\,Y_1,\dots,Y_n$ be samples from exponential distributions with densities $\sigma^{-1}e^{-(x-\xi)/\sigma}$ for $x\ge\xi$ and $\tau^{-1}e^{-(y-\eta)/\tau}$ for $y\ge\eta$; … there exists a UMP invariant test with respect to the group $G:X_i'=aX_i+b$, $Y_j'=aY_j+c$, $a>0$, $-\infty<b,c<\infty$.
5. (i) Let $X=(X_1,\dots,X_n)$ have probability density $(1/\theta^n)\,f[(x_1-\xi)/\theta,\dots,(x_n-\xi)/\theta]$, where $-\infty<\xi<\infty$, $0<\theta$ …
4. Let $X,Y$ have the joint probability density $f(x,y)$. Then the integral $h(z)=\int_{-\infty}^{\infty}f(y-z,y)\,dy$ is finite for almost all $z$, and is the probability density of $Z=Y-X$. [Since $P\{Z\le b\}=\int_{-\infty}^{b}h(z)\,dz$, it is finite and hence $h$ is finite almost everywhere.]