Testing Statistical Hypotheses, 3rd Edition, Erich L. Lehmann and Joseph P. Romano - Solutions
In the model (7.39), the correlation coefficient $\rho$ between two observations $X_{ij}$, $X_{ik}$ belonging to the same class, the so-called intraclass correlation coefficient, is given by $\rho = \sigma_A^2/(\sigma_A^2 + \sigma^2)$.
Section 7.8
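A quick Monte Carlo check of the intraclass correlation formula: the sketch below simulates the one-way random-effects model $X_{ij} = A_i + U_{ij}$ with $A_i \sim N(0, \sigma_A^2)$ and $U_{ij} \sim N(0, \sigma^2)$, and compares the empirical correlation of two observations from the same class with $\sigma_A^2/(\sigma_A^2 + \sigma^2)$. The variance values, seed, and simulation size are arbitrary illustrative choices, not from the text.

```python
import random
import math

random.seed(0)

sigma_A, sigma = 2.0, 1.0
rho_theory = sigma_A**2 / (sigma_A**2 + sigma**2)  # 4/(4+1) = 0.8

# Simulate many classes with two observations each, then estimate the
# correlation between the two observations within the same class.
n_classes = 200_000
xs, ys = [], []
for _ in range(n_classes):
    a = random.gauss(0.0, sigma_A)           # shared class effect A_i
    xs.append(a + random.gauss(0.0, sigma))  # X_{i1}
    ys.append(a + random.gauss(0.0, sigma))  # X_{i2}

mx = sum(xs) / n_classes
my = sum(ys) / n_classes
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n_classes
vx = sum((x - mx) ** 2 for x in xs) / n_classes
vy = sum((y - my) ** 2 for y in ys) / n_classes
rho_hat = cov / math.sqrt(vx * vy)

print(round(rho_theory, 3), round(rho_hat, 3))
```

With 200,000 simulated classes the empirical correlation should agree with the theoretical value to about two decimal places.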
(i) The test (7.41) of $H : \Delta \le \Delta_0$ is UMP unbiased. (ii) Determine the UMP unbiased test of $H : \Delta = \Delta_0$ and the associated uniformly most accurate unbiased confidence sets for $\Delta$.
Let $X_1,\ldots,X_n$ be independently normally distributed with common variance $\sigma^2$ and means $\xi_i = \alpha + \beta t_i + \gamma t_i^2$, where the $t_i$ are known. If the coefficient vectors $(t_1^k,\ldots,t_n^k)$, $k = 0, 1, 2$, are linearly independent, the parameter space $\Pi_\Omega$ has dimension $s = 3$, and the least-squares
Let $X_1,\ldots,X_m$; $Y_1,\ldots,Y_n$ be independently normally distributed with common variance $\sigma^2$ and means $E(X_i) = \alpha + \beta(u_i - \bar u)$, $E(Y_j) = \gamma + \delta(v_j - \bar v)$, where the $u$'s and $v$'s are known numbers. Determine the UMP invariant tests of the linear hypotheses $H : \beta = \delta$ and $H : \alpha = \gamma,\ \beta = \delta$.
In a regression situation, suppose that the observed values $X_j'$ and $Y_j'$ of the independent and dependent variable differ from certain true values $X_j$ and $Y_j$ by errors $U_j$, $V_j$ which are independently normally distributed with zero means and variances $\sigma_U^2$ and $\sigma_V^2$. The true values are assumed to
In the three-factor situation of the preceding problem, suppose that $a = b = m$. The hypothesis $H$ can then be tested on the basis of $m^2$ observations as follows. At each pair of levels $(i, j)$ of the first two factors one observation is taken, to which we refer as being in the $i$th row and the $j$th
Let $X_{ijk}$ ($i = 1,\ldots,a$; $j = 1,\ldots,b$; $k = 1,\ldots,m$) be independently normally distributed with common variance $\sigma^2$ and mean $E(X_{ijk}) = \mu + \alpha_i + \beta_j + \gamma_k$, where $\sum \alpha_i = \sum \beta_j = \sum \gamma_k = 0$. Determine the linear hypothesis test for testing $H : \alpha_1 = \ldots = \alpha_a = 0$. [... with Lemma 3.4.2.]
Let $X_\lambda$ denote a random variable distributed as noncentral $\chi^2$ with $f$ degrees of freedom and noncentrality parameter $\lambda^2$. Then $X_{\lambda'}$ is stochastically larger than $X_\lambda$ if $\lambda < \lambda'$. [For $z > 0$, the inequality $P\{|Y + \lambda'| \le z\} \le P\{|Y + \lambda| \le z\}$ is an immediate consequence of the shape of the normal density function. An
In the two-way layout of Section 7.5 with $a = b = 2$, denote the first three terms in the partition of $(X_{ijk} - X_{ij\cdot})^2$ by $S_A^2$, $S_B^2$, and $S_{AB}^2$, corresponding to the $A$, $B$, and $AB$ effects (i.e. the $\alpha$'s, $\beta$'s, and $\gamma$'s), and denote by $H_A$, $H_B$, and $H_{AB}$ the hypotheses of these effects being
The linear-hypothesis test of the hypothesis of no interaction in a two-way layout with m observations per cell is given by (7.28).
Let $Z_1,\ldots,Z_s$ be independently distributed as $N(\zeta_i, a_i^2)$, $i = 1,\ldots,s$, where the $a_i$ are known constants. (i) With respect to a suitable group of linear transformations there exists a UMP invariant test of $H : \zeta_1 = \cdots = \zeta_s$ given by the rejection region $\sum \frac{1}{a_i^2}\Bigl(Z_i - \frac{\sum Z_j/a_j^2}{\sum 1/a_j^2}\Bigr)^2 =$
If the variables $X_{ij}$ ($j = 1,\ldots,n_i$; $i = 1,\ldots,s$) are independently distributed as $N(\mu_i, \sigma^2)$, then $E\bigl[\sum n_i (X_{i\cdot} - X_{\cdot\cdot})^2\bigr] = (s - 1)\sigma^2 + \sum n_i(\mu_i - \mu_\cdot)^2$ and $E\bigl[\sum\sum (X_{ij} - X_{i\cdot})^2\bigr] = (n - s)\sigma^2$.
Let $X_1,\ldots,X_n$ be independently normally distributed with known variance $\sigma_0^2$ and means $E(X_i) = \xi_i$, and consider any linear hypothesis with $s \le n$ (instead of $s < n$) ... $> C\sigma_0^2$ (7.65), with $C$ determined by $\int_C^\infty \chi_r^2(y)\,dy = \alpha$. (7.66)
Section 7.3
Let $X_{ij}$ ($j = 1,\ldots,m_i$) and $Y_{ik}$ ($k = 1,\ldots,n_i$) be independently normally distributed with common variance $\sigma^2$ and means $E(X_{ij}) = \xi_i$ and $E(Y_{ik}) = \xi_i + \Delta$. Then the UMP invariant test of $H : \Delta = 0$ is given by (7.63) with $\theta = \Delta$, $\theta_0 = 0$, $\hat\theta = \sum_i \frac{m_i n_i}{N_i}(Y_{i\cdot} - X_{i\cdot}) \big/ \sum_i \frac{m_i n_i}{N_i}$, and $\hat\xi_i =$
Under the assumptions of Section 7.1 suppose that the means $\xi_i$ are given by $\xi_i = \sum_{j=1}^s a_{ij}\beta_j$, where the constants $a_{ij}$ are known and the matrix $A = (a_{ij})$ has full rank, and where the $\beta_j$ are unknown parameters. Let $\theta = \sum_{j=1}^s e_j\beta_j$ be a given linear combination of the $\beta_j$. (i) If $\hat\beta_j$ denotes
Given any $\psi_2 > 0$, apply Theorem 6.7.2 and Lemma 6.7.1 to obtain the F-test (7.7) as a Bayes test against a set $\Omega'$ of alternatives contained in the set $0 < \psi \le \psi_2$.
Section 7.2
Use Theorem 6.7.1 to show that the F-test (7.7) is $\alpha$-admissible against $\Omega' : \psi \ge \psi_1$ for any $\psi_1 > 0$.
Best average power. (i) Consider the general linear hypothesis $H$ in the canonical form given by (7.2) and (7.3) of Section 7.1, and for any $\eta_{r+1},\ldots,\eta_s$, $\sigma$, and $\rho$ let $S = S(\eta_{r+1},\ldots,\eta_s, \sigma; \rho)$ denote the sphere $\{(\eta_1,\ldots,\eta_r) : \sum_{i=1}^r \eta_i^2/\sigma^2 = \rho^2\}$. If $\beta_\phi(\eta_1,\ldots,\eta_r, \sigma)$ denotes the power
(i) The noncentral $\chi^2$ and $F$ distributions have strictly monotone likelihood ratio. (ii) Under the assumptions of Section 7.1, the hypothesis $H' : \psi^2 \le \psi_0^2$ ($\psi_0 > 0$ given) remains invariant under the transformations $G_i$ ($i = 1, 2, 3$) that were used to reduce $H : \psi = 0$, and there exists a UMP
Noncentral $F$- and beta-distribution. Let $Y_1,\ldots,Y_r$; $Y_{s+1},\ldots,Y_n$ be independently normally distributed with common variance $\sigma^2$ and means $E(Y_i) = \eta_i$ ($i = 1,\ldots,r$); $E(Y_i) = 0$ ($i = s + 1,\ldots,n$). (i) The probability density of $W = \sum_{i=1}^r Y_i^2 \big/ \sum_{i=s+1}^n Y_i^2$ is given by (7.6). The distribution of the
Noncentral $\chi^2$-distribution. (i) If $X$ is distributed as $N(\psi, 1)$, the probability density of $V = X^2$ is $p_\psi^V(v) = \sum_{k=0}^\infty P_k(\psi) f_{2k+1}(v)$, where $P_k(\psi) = (\psi^2/2)^k e^{-\psi^2/2}/k!$ and where $f_{2k+1}$ is the probability density of a $\chi^2$-variable with $2k + 1$ degrees of freedom. (ii) Let $Y_1,\ldots,Y_r$ be
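The Poisson-mixture series in part (i) is straightforward to evaluate numerically. The sketch below sums the series for an arbitrary illustrative $\psi$ and checks, by crude midpoint integration, that the resulting density has total mass near 1 and mean near $1 + \psi^2$ (the standard mean of a noncentral $\chi^2$ with 1 degree of freedom and noncentrality $\psi^2$); the grid and truncation choices are ad hoc.

```python
import math

def chi2_pdf(v, m):
    """Central chi-squared density f_m(v) with m degrees of freedom."""
    return v ** (m / 2 - 1) * math.exp(-v / 2) / (2 ** (m / 2) * math.gamma(m / 2))

def ncx2_1df_pdf(v, psi, kmax=80):
    """Series p(v) = sum_k P_k(psi) f_{2k+1}(v) with Poisson weights P_k."""
    term = math.exp(-psi ** 2 / 2)   # P_0(psi)
    total = 0.0
    for k in range(kmax):
        total += term * chi2_pdf(v, 2 * k + 1)
        term *= (psi ** 2 / 2) / (k + 1)  # P_{k+1} from P_k
    return total

psi = 1.5
h = 0.005
grid = [h * (i + 0.5) for i in range(12_000)]          # midpoints on (0, 60)
pdf_vals = [ncx2_1df_pdf(v, psi) for v in grid]
mass = sum(pdf_vals) * h                                # should be near 1
mean = sum(v * p for v, p in zip(grid, pdf_vals)) * h   # should be near 1 + psi^2
print(round(mass, 3), round(mean, 3))
```

The integrable $v^{-1/2}$ singularity at 0 makes the midpoint rule slightly undercount the mass, so only agreement to a couple of decimal places should be expected.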
Expected sums of squares. The expected values of the numerator and denominator of the statistic $W^*$ defined by (7.7) are $E\Bigl[\frac{\sum_{i=1}^r Y_i^2}{r}\Bigr] = \sigma^2 + \frac{1}{r}\sum_{i=1}^r \eta_i^2$ and $E\Bigl[\frac{\sum_{i=s+1}^n Y_i^2}{n - s}\Bigr] = \sigma^2$.
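These two expectations are easy to check by simulation. In the sketch below the dimensions, the means $\eta_i$, $\sigma$, the seed, and the replication count are all arbitrary illustrative values, not from the text.

```python
import random

random.seed(1)

r, s, n = 3, 5, 12          # hypothetical dimensions with r <= s < n
sigma = 2.0
eta = [1.0, -2.0, 0.5]      # hypothetical means eta_1, ..., eta_r
reps = 100_000

num_avg = 0.0  # Monte Carlo average of sum_{i<=r} Y_i^2 / r
den_avg = 0.0  # Monte Carlo average of sum_{i>s} Y_i^2 / (n - s)
for _ in range(reps):
    num_avg += sum(random.gauss(e, sigma) ** 2 for e in eta) / r / reps
    den_avg += sum(random.gauss(0.0, sigma) ** 2 for _ in range(n - s)) / (n - s) / reps

num_theory = sigma ** 2 + sum(e * e for e in eta) / r  # sigma^2 + (1/r) sum eta_i^2
den_theory = sigma ** 2
print(round(num_avg, 2), round(num_theory, 2), round(den_avg, 2), round(den_theory, 2))
```

With these values the theoretical numerator mean is $4 + 5.25/3 = 5.75$ and the denominator mean is 4.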
Consider the problem of obtaining a (two-sided) confidence band for an unknown continuous cumulative distribution function $F$. (i) Show that this problem is invariant both under strictly increasing and strictly decreasing continuous transformations $X_i' = f(X_i)$, $i = 1,\ldots,n$, and determine a maximal
If the confidence sets $S(x)$ are equivariant under the group $G$, then the probability $P_\theta\{\theta \in S(X)\}$ of their covering the true value is invariant under the induced group $\bar G$.
Let $X_{ij}$ ($j = 1,\ldots,n_i$; $i = 1,\ldots,s$) be samples from the exponential distribution $E(\xi_i, \sigma)$. Determine the smallest equivariant confidence sets for $(\xi_1,\ldots,\xi_r)$ with respect to the group $X_{ij}' = bX_{ij} + a_i$.
Let $X_1,\ldots,X_n$ be a sample from the exponential distribution $E(\xi, \sigma)$. With respect to the transformations $X_i' = bX_i + a$ determine the smallest equivariant confidence sets (i) for $\sigma$, both when size is defined by Lebesgue measure and by the equivariant measure (6.41); (ii) for $\xi$.
Solve the problem corresponding to Example 6.12.1 when (i) $X_1,\ldots,X_n$ is a sample from the exponential density $E(\xi, \sigma)$, and the parameter being estimated is $\sigma$; (ii) $X_1,\ldots,X_n$ is a sample from the uniform density $U(\xi, \xi + \tau)$, and the parameter being estimated is $\tau$.
Generalize the confidence sets of Example 6.11.3 to the case that the $X_i$ are $N(\xi_i, d_i\sigma^2)$, where the $d$'s are known constants.
Let $X_1,\ldots,X_m$; $Y_1,\ldots,Y_n$ be independently normally distributed as $N(\xi, \sigma^2)$ and $N(\eta, \sigma^2)$ respectively. Determine the equivariant confidence sets for $\eta - \xi$ that have smallest Lebesgue measure when (i) $\sigma$ is known; (ii) $\sigma$ is unknown.
In Example 6.12.4, show that (i) both sets (6.57) are intervals; (ii) the sets given by $v\,p(v) > C$ coincide with the intervals (5.41).
The confidence sets (6.49) are uniformly most accurate equivariant under the group G defined at the end of Example 6.12.3.
Let $X_1,\ldots,X_r$ be i.i.d. $N(0, 1)$, and let $S^2$ be independent of the $X$'s and distributed as $\chi_\nu^2$. Then the distribution of $(X_1/(S/\sqrt\nu),\ldots,X_r/(S/\sqrt\nu))$ is a central multivariate $t$-distribution, and its density is $p(v_1,\ldots,v_r) = \frac{\Gamma(\frac12(\nu + r))}{(\pi\nu)^{r/2}\Gamma(\nu/2)}\bigl(1 + \frac{1}{\nu}\sum v_i^2\bigr)^{-\frac12(\nu + r)}$.
Show that in Example 6.12.1, (i) the confidence sets $\sigma^2/S^2 \in A^{**}$ with $A^{**}$ given by (6.42) coincide with the uniformly most accurate unbiased confidence sets for $\sigma^2$; (ii) if $(a, b)$ is best with respect to (6.41) for $\sigma$, then $(a^r, b^r)$ is best for $\sigma^r$ ($r > 0$).
In Example 6.12.1, the density $p(v)$ of $V = 1/S^2$ is unimodal.
In Examples 6.12.1 and 6.12.2 there do not exist equivariant sets that uniformly minimize the probability of covering false values.
provides a simple check of the equivariance of confidence sets. In Example 6.12.2, for instance, the confidence sets (6.43) are based on the pivotal vector $(X_1 - \xi_1,\ldots,X_r - \xi_r)$, and hence are equivariant.
Section 6.12
Under the assumptions of Problem 6.70, suppose that a family of confidence sets $S(x)$ is equivariant under $G^*$. Then there exists a set $B$ in the range space of the pivotal $V$ such that (6.72) holds. In this sense, all equivariant confidence sets can be obtained from pivotals. [Let $A$ be the subset of
Under the assumptions of the preceding problem, the confidence set $S(x)$ is equivariant under $G^*$.
(i) If $\tilde G$ is transitive over $\mathcal X \times w$ and $V(X, \theta)$ is maximal invariant under $\tilde G$, then $V(X, \theta)$ is pivotal. (ii) By (i), any quantity $W(X, \theta)$ which is invariant under $\tilde G$ is pivotal; give an example showing that the converse need not be true.
Let $V(X, \theta)$ be any pivotal quantity [i.e. have a fixed probability distribution independent of $(\theta, \vartheta)$], and let $B$ be any set in the range space of $V$ with probability $P(V \in B) = 1 - \alpha$. Then the sets $S(x)$ defined by $\theta \in S(x)$ if and only if $V(\theta, x) \in B$ (6.72) are confidence sets for $\theta$
(i) Let $(X_1, Y_1),\ldots,(X_n, Y_n)$ be a sample from a bivariate normal distribution, and let $\underline\rho = C^{-1}\Bigl(\frac{\sum (X_i - \bar X)(Y_i - \bar Y)}{\sqrt{\sum (X_i - \bar X)^2 \sum (Y_i - \bar Y)^2}}\Bigr)$, where $C(\rho)$ is determined such that $P_\theta\Bigl\{\frac{\sum (X_i - \bar X)(Y_i - \bar Y)}{\sqrt{\sum (X_i - \bar X)^2 \sum (Y_i - \bar Y)^2}} \le C(\rho)\Bigr\} = 1 - \alpha$. Then $\underline\rho$ is a lower
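The quantity inside $C^{-1}$ above is the ordinary sample correlation coefficient. A minimal helper for computing it (the numeric data at the end are made up for illustration):

```python
import math

def sample_corr(xs, ys):
    """Pearson sample correlation: sum (x - xbar)(y - ybar) / sqrt(Sxx * Syy)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Perfectly linear data gives r = 1; reversing the slope gives r = -1.
xs = [1.0, 2.0, 3.0, 4.0]
assert abs(sample_corr(xs, [2 * x + 1 for x in xs]) - 1.0) < 1e-12
assert abs(sample_corr(xs, [-x for x in xs]) + 1.0) < 1e-12
print(round(sample_corr(xs, [1.2, 1.9, 3.4, 3.6]), 3))
```

The confidence bound itself would additionally require the function $C(\rho)$, which depends on the sampling distribution of this statistic.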
(i) Let $X_1,\ldots,X_n$ be independently distributed as $N(\xi, \sigma^2)$, and let $\theta = \xi/\sigma$. The lower confidence bounds $\underline\theta$ for $\theta$, which at confidence level $1 - \alpha$ are uniformly most accurate invariant under the transformations $X_i' = aX_i$, are $\underline\theta = C^{-1}\Bigl(\frac{\sqrt n\,\bar X}{\sqrt{\sum (X_i - \bar X)^2/(n - 1)}}\Bigr)$, where the function
Counterexample. The following example shows that the equivariance of $S(x)$ assumed in the paragraph following Lemma 6.11.1 does not follow from the other assumptions of this lemma. In Example 6.5.1, let $n = 1$, let $G^{(1)}$ be the group $G$ of Example 6.5.1, and let $G^{(2)}$ be the corresponding group when the
(i) One-sided equivariant confidence limits. Let $\theta$ be real-valued, and suppose that, for each $\theta_0$, the problem of testing $\theta \le \theta_0$ against $\theta > \theta_0$ (in the presence of nuisance parameters $\vartheta$) remains invariant under a group $G_{\theta_0}$ and that $A(\theta_0)$ is a UMP invariant acceptance region for this
Let $X_1,\ldots,X_n$; $Y_1,\ldots,Y_n$ be samples from $N(\xi, \sigma^2)$ and $N(\eta, \tau^2)$ respectively. Then the confidence intervals (5.42) for $\tau^2/\sigma^2$, which can be written as $\frac{\sum (Y_j - \bar Y)^2}{k\sum (X_i - \bar X)^2} \le \frac{\tau^2}{\sigma^2} \le \frac{k\sum (Y_j - \bar Y)^2}{\sum (X_i - \bar X)^2}$, are uniformly most accurate equivariant with respect to the
In Example 6.11.1, a family of sets $S(x, y)$ is a class of equivariant confidence sets if and only if there exists a set $R$ of real numbers such that $S(x, y) = \bigcup_{r \in R}\{(\xi, \eta) : (x - \xi)^2 + (y - \eta)^2 = r^2\}$.
The hypothesis of independence. Let $(X_1, Y_1),\ldots,(X_N, Y_N)$ be a sample from a bivariate distribution, and $(X_{(1)}, Z_1),\ldots,(X_{(N)}, Z_N)$ be the same sample arranged according to increasing values of the $X$'s, so that the $Z$'s are a permutation of the $Y$'s. Let $R_i$ be the rank of $X_i$ among the
In the preceding problem let $U_{ij} = 1$ if $(j - i)(Z_j - Z_i) > 0$, and $= 0$ otherwise. (i) The test statistic $\sum_i iT_i$ can be expressed in terms of the $U$'s through the relation $\sum_{i=1}^N iT_i = \ldots$ Then $T_j = \sum_{i=1}^N V_{ij}$, and $V_{ij} = U_{ij}$ or $1 - U_{ij}$ as $i$
with $\mathcal C$ the class of transformations $z_1' = z_1$, $z_i' = f_i(z_i)$ for $i > 1$, where $z$
The hypothesis of randomness. Let $Z_1,\ldots,Z_N$ be independently distributed with distributions $F_1,\ldots,F_N$, and let $T_i$ denote the rank of $Z_i$ among the $Z$'s. For testing the hypothesis of randomness $F_1 = \cdots = F_N$ against the alternatives $K$ of an upward trend, namely that $Z_i$ is stochastically
Unbiased tests of symmetry. Let $Z_1,\ldots,Z_N$ be a sample, and let $\phi$ be any rank test of the hypothesis of symmetry with respect to the origin such that $z_i \le z_i'$ for all $i$ implies $\phi(z_1,\ldots,z_N) \le \phi(z_1',\ldots,z_N')$. Then $\phi$ is unbiased against the one-sided alternatives that the $Z$'s are
An alternative expression for (6.68) is obtained if the distribution of $Z$ is characterized by $(\rho, F, G)$. If then $G = h(F)$ and $h$ is differentiable, the distribution of $n$ and the $S_j$ is given by $\rho^m(1 - \rho)^n E\bigl[h'(U_{(s_1)}) \cdots h'(U_{(s_n)})\bigr]$, (6.69) where $U_{(1)} < \cdots < U_{(N)}$ is an ordered sample from
Let $Z_1,\ldots,Z_N$ be a sample from a distribution with density $f(z - \theta)$, where $f(z)$ is positive for all $z$ and $f$ is symmetric about 0, and let $m$, $n$, and the $S_j$ be defined as in the preceding problem. (i) The distribution of $n$ and the $S_j$ is given by $P\{$the number of positive $Z$'s is $n$ and $S_1 =$
(i) Let $m$ and $n$ be the numbers of negative and positive observations among $Z_1,\ldots,Z_N$, and let $S_1 < \cdots < S_n$ denote the ranks of the positive $Z$'s among $|Z_1|,\ldots,|Z_N|$. Consider the $N + \frac12 N(N - 1)$ distinct sums $Z_i + Z_j$ with $i = j$ as well as $i < j$. The Wilcoxon signed-rank statistic $\sum S_j$ is
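The classical identity this problem is evidently heading toward equates the signed-rank statistic $\sum S_j$ with the number of positive sums $Z_i + Z_j$, $i \le j$ (the positive Walsh averages, up to the factor 2). The sketch below checks the identity on a small made-up sample:

```python
from itertools import combinations_with_replacement

def signed_rank_statistic(z):
    """Sum of the ranks of the positive Z's among |Z_1|, ..., |Z_N|."""
    order = sorted(range(len(z)), key=lambda i: abs(z[i]))
    return sum(rank for rank, i in enumerate(order, start=1) if z[i] > 0)

def positive_walsh_sums(z):
    """Number of sums Z_i + Z_j > 0 over the N + N(N-1)/2 pairs with i <= j."""
    return sum(1 for i, j in combinations_with_replacement(range(len(z)), 2)
               if z[i] + z[j] > 0)

z = [0.7, -1.2, 2.5, -0.3, 1.1]   # made-up, tie-free data
print(signed_rank_statistic(z), positive_walsh_sums(z))
```

Both counts agree (here both equal 10), which is the content of the identity; ties would need separate handling.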
(i) Let $X_1,\ldots,X_m$; $Y_1,\ldots,Y_n$ be i.i.d. according to a continuous distribution $F$, let the ranks of the $Y$'s be $S_1 < \cdots < S_n$, and let $T = h(S_1) + \cdots + h(S_n)$. Then if either $m = n$ or $h(s) + h(N + 1 - s)$ is independent of $s$, the distribution of $T$ is symmetric about $n\sum_{i=1}^N h(i)/N$. (ii) Show
Continuation. (i) There exists at every significance level $\alpha$ a test of $H : G = F$ which has power $> \alpha$ against all continuous alternatives $(F, G)$ with $F \ne G$. (ii) There does not exist a nonrandomized unbiased rank test of $H$ against all $G \ne F$ at level $\alpha = 1\big/\binom{m+n}{n}$. [(i): let $X_i$, $X_i'$; $Y_i$, $Y_i'$ ($i =$
(i) Let $X$, $X'$ and $Y$, $Y'$ be independent samples of size 2 from continuous distributions $F$ and $G$ respectively. Then $p = P\{\max(X, X') < \min(Y, Y')\} + P\{\max(Y, Y') < \min(X, X')\} = \frac13 + 2\Delta$, where $\Delta = \int (F - G)^2\, d[(F + G)/2]$. (ii) $\Delta = 0$ if and only if $F = G$. [(i): $p = \int (1 - F)^2\, dG^2 + \int (1 - G)^2\, dF^2$
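When $F = G$ we have $\Delta = 0$, so the displayed identity gives $p = 1/3$. The Monte Carlo sketch below checks this special case with uniform samples; the seed and replication count are arbitrary illustrative choices.

```python
import random

random.seed(2)

def p_estimate(draw_f, draw_g, reps=200_000):
    """Monte Carlo estimate of P{max(X,X') < min(Y,Y')} + P{max(Y,Y') < min(X,X')}."""
    hits = 0
    for _ in range(reps):
        x, x2 = draw_f(), draw_f()
        y, y2 = draw_g(), draw_g()
        if max(x, x2) < min(y, y2) or max(y, y2) < min(x, x2):
            hits += 1
    return hits / reps

# With F = G (both uniform on (0,1)), Delta = 0, hence p = 1/3.
p_equal = p_estimate(random.random, random.random)
print(round(p_equal, 3))
```

The two events are disjoint (the four values cannot be split both ways at once), which is why a single indicator with `or` suffices.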
calculated for the observations $X_1,\ldots,X_m$; $Y_1 - \Delta,\ldots,Y_n - \Delta$. [An alternative measure of the amount by which $G$ exceeds $F$ (without assuming a location model) is $p = P\{X$
and the probability on the right side is calculated for $\Delta = 0$. (ii) Determine the above confidence interval for $\Delta$ when $m = n = 6$, the confidence coefficient is $\frac{20}{21}$, and the observations are $x$: .113, .212, .249, .522, .709, .788, and $y$: .221, .433, .724, .913, .917, 1.58. (iii) For the data of
(i) If $X_1,\ldots,X_m$ and $Y_1,\ldots,Y_n$ are samples from $F(x)$ and $G(y) = F(y - \Delta)$ respectively ($F$ continuous), and $D_{(1)} < \cdots < D_{(mn)}$ denote the ordered differences $Y_j - X_i$, then $P\{D_{(k)} < \Delta < D_{(mn+1-k)}\} = P_0\{k \le U \le mn - k\}$, where $U$ is the statistic defined in
is distributed symmetrically about $\frac12 mn$ even when $m \ne n$.
Let $X_1,\ldots,X_m$; $Y_1,\ldots,Y_n$ be samples from a common continuous distribution $F$. Then the Wilcoxon statistic $U$ defined in
Let $\mathcal F_0$ be a family of probability measures over $(\mathcal X, \mathcal A)$, and let $\mathcal C$ be a class of transformations of the space $\mathcal X$. Define a class $\mathcal F_1$ of distributions by $F_1 \in \mathcal F_1$ if there exists $F_0 \in \mathcal F_0$ and $f \in \mathcal C$ such that the distribution of $f(X)$ is $F_1$ when that of $X$ is $F_0$. If $\phi$ is any test satisfying (a) $E_{F_0}$
An alternative proof of the optimum property of the Wilcoxon test for detecting a shift in the logistic distribution is obtained from the preceding problem by equating $F(x - \theta)$ with $(1 - \theta)F(x) + \theta F^2(x)$, neglecting powers of $\theta$ higher than the first. This leads to the differential equation $F$
For sufficiently small $\theta > 0$, the Wilcoxon test at level $\alpha = k\big/\binom{N}{n}$, $k$ a positive integer, maximizes the power (among rank tests) against the alternatives $(F, G)$ with $G = (1 - \theta)F + \theta F^2$.
(i) If $X_1,\ldots,X_m$ and $Y_1,\ldots,Y_n$ are samples with continuous cumulative distribution functions $F$ and $G = h(F)$ respectively, and if $h$ is differentiable, the distribution of the ranks $S_1 < \cdots < S_n$ of the $Y$'s is given by $P\{S_1 = s_1,\ldots,S_n = s_n\} = E\bigl[h'(U_{(s_1)}) \cdots h'(U_{(s_n)})\bigr] \big/ \binom{m+n}{m}$ (6.66), where $U_{(1)} <$
Distribution of order statistics. (i) If $Z_1,\ldots,Z_N$ is a sample from a cumulative distribution function $F$ with density $f$, the joint density of $Y_i = Z_{(s_i)}$, $i = 1,\ldots,n$, is $\frac{N!}{(s_1 - 1)!(s_2 - s_1 - 1)! \cdots (N - s_n)!}\, f(y_1) \cdots f(y_n)\, [F(y_1)]^{s_1 - 1}[F(y_2) - F(y_1)]^{s_2 - s_1 - 1} \cdots [1 - F(y_n)]^{N - s_n}$. (6.64)
Under the assumptions of the preceding problem, if $F_i = h_i(F)$, the distribution of the ranks $T_1,\ldots,T_N$ of $Z_1,\ldots,Z_N$ depends only on the $h_i$, not on $F$. If the $h_i$ are differentiable, the distribution of the $T_i$ is given by $P\{T_1 = t_1,\ldots,T_N = t_N\} = E\bigl[h_1'(U_{(t_1)}) \cdots h_N'(U_{(t_N)})\bigr] \big/ N!$ (6.63), where $U_{(1)} <$
Let $Z_i$ have a continuous cumulative distribution function $F_i$ ($i = 1,\ldots,N$), and let $G$ be the group of all transformations $Z_i' = f(Z_i)$ such that $f$ is continuous and strictly increasing. (i) The transformation induced by $f$ in the space of distributions is $F_i' = F_i(f^{-1})$. (ii) Two $N$-tuples of
(i) For any continuous cumulative distribution function $F$, define $F^{-1}(0) = -\infty$, $F^{-1}(y) = \inf\{x : F(x) = y\}$ for $0$
(i) Let $Z_1,\ldots,Z_N$ be independently distributed with densities $f_1,\ldots,f_N$, and let the rank of $Z_i$ be denoted by $T_i$. If $f$ is any probability density which is positive whenever at least one of the $f_i$ is positive, then $P\{T_1 = t_1,\ldots,T_N = t_N\} = \frac{1}{N!} E\Bigl[\frac{f_1(V_{(t_1)})}{f(V_{(t_1)})} \cdots \frac{f_N(V_{(t_N)})}{f(V_{(t_N)})}\Bigr]$.
Expectation and variance of Wilcoxon statistic. If the $X$'s and $Y$'s are samples from continuous distributions $F$ and $G$ respectively, the expectation and variance of the Wilcoxon statistic $U$ defined in the preceding problem are given by $E\bigl[\frac{U}{mn}\bigr] = P\{X$
Wilcoxon two-sample test. Let $U_{ij} = 1$ or 0 as $X_i < Y_j$ or $X_i > Y_j$, and let $U = \sum\sum U_{ij}$ be the number of pairs $X_i, Y_j$ with $X_i < Y_j$. (i) Then $U = \sum S_i - \frac12 n(n + 1)$, where $S_1 < \cdots < S_n$ are the ranks of the $Y$'s, so that the test with rejection region $U > C$ is equivalent to the Wilcoxon test. (ii) Any
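Part (i) can be checked directly: the sketch below computes $U$ both by counting pairs with $X_i < Y_j$ and via the rank formula $\sum S_i - \frac12 n(n + 1)$, on a small made-up tie-free sample.

```python
def u_by_pairs(xs, ys):
    """U = number of pairs (X_i, Y_j) with X_i < Y_j."""
    return sum(1 for x in xs for y in ys if x < y)

def u_by_ranks(xs, ys):
    """U = sum(S_i) - n(n+1)/2, where S_1 < ... < S_n are the Y-ranks
    in the combined ordered sample (assumes no ties)."""
    n = len(ys)
    combined = sorted(xs + ys)
    ranks = sorted(combined.index(y) + 1 for y in ys)
    return sum(ranks) - n * (n + 1) // 2

xs = [1.3, 2.7, 0.4, 3.8]   # made-up data
ys = [2.1, 4.5, 0.9]
print(u_by_pairs(xs, ys), u_by_ranks(xs, ys))
```

Both routes give the same count (here 7), which is exactly the equivalence between the $U$ form and the rank-sum form of the Wilcoxon test.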
Suppose $X = (X_1,\ldots,X_k)^T$ is multivariate normal with unknown mean vector $(\theta_1,\ldots,\theta_k)^T$ and known nonsingular covariance matrix $\Sigma$. Consider testing the null hypothesis $\theta_i = 0$ for all $i$ against $\theta_i \ne 0$ for some $i$. Let $C$ be any closed convex subset of $k$-dimensional Euclidean space, and let $\phi$ be the
For the model of the preceding problem, generalize Example 6.7.13 (continued) to show that the two-sided t-test is a Bayes solution for an appropriate prior distribution.
Let $X_1,\ldots,X_m$; $Y_1,\ldots,Y_n$ be independent $N(\xi, \sigma^2)$ and $N(\eta, \sigma^2)$ respectively. The one-sided $t$-test of $H : \delta = \xi/\sigma \le 0$ is admissible against the alternatives (i) $0 < \delta \le \delta_1$ for any $\delta_1 > 0$; (ii) $\delta \ge \delta_2$ for any $\delta_2 > 0$.
Verify (i) the admissibility of the rejection region (6.24); (ii) the expression for $I(z)$ given in the proof of Lemma 6.7.1.
(i) In Example 6.7.13 (continued) show that there exist $C_0$, $C_1$ such that $\lambda_0(\eta)$ and $\lambda_1(\eta)$ are probability densities (with respect to Lebesgue measure). (ii) Verify the densities $h_0$ and $h_1$.
(i) The acceptance region $T_1/\sqrt{T_2} \le C$ of Example 6.7.13 is a convex set in the $(T_1, T_2)$ plane. (ii) In Example 6.7.13, the conditions of Theorem 6.7.1 are not satisfied for the sets $A : T_1/\sqrt{T_2} \le C$ and $\Omega' : \xi > k$.
(i) The following example shows that $\alpha$-admissibility does not always imply $d$-admissibility. Let $X$ be distributed as $U(0, \theta)$, and consider the tests $\varphi_1$ and $\varphi_2$ which reject when respectively $X < 1$ and $X < \frac32$ for testing $H : \theta = 2$ against $K : \theta = 1$. Then for $\alpha = \frac34$, $\varphi_1$ and $\varphi_2$ are
The definition of d-admissibility of a test coincides with the admissibility definition given in Section 1.8 when applied to a two-decision procedure with loss 0 or 1 as the decision taken is correct or false.
The following UMP unbiased tests of Chapter 5 are also UMP invariant under change in scale: (i) the test of $g \le g_0$ in a gamma distribution (Problem 5.30); (ii) the test of $b_1 \le b_2$ in Problem 5.18(i).
Section 6.7
is also UMP similar. [Consider the problem of testing $\alpha = 0$ vs. $\alpha > 0$ in the two-parameter exponential family with density $C(\alpha, \tau)\exp\bigl(-\frac{\alpha}{2\tau^2}\sum x_i^2 - \frac{1 - \alpha}{\tau}\sum |x_i|\bigr)$, $0 \le \alpha < 1$.] Note. For the analogous result for the tests of Problems 6.14 and 6.15, see Quesenberry and Starbuck (1976).
The UMP invariant test of
Let $G$ be a group of transformations of $\mathcal X$, let $\mathcal A$ be a $\sigma$-field of subsets of $\mathcal X$, and let $\mu$ be a measure over $(\mathcal X, \mathcal A)$. Then a set $A \in \mathcal A$ is said to be almost invariant if its indicator function is almost invariant. (i) The totality of almost invariant sets forms a $\sigma$-field $\mathcal A_0$, and a critical function
Inadmissible likelihood-ratio test. In many applications in which a UMP invariant test exists, it coincides with the likelihood-ratio test. That this is, however, not always the case is seen from the following example. Let $P_1,\ldots,P_n$ be $n$ equidistant points on the circle $x^2 + y^2 = 4$, and $Q_1,\ldots,Q_n$
Invariance of likelihood ratio. Let the family of distributions $\mathcal P = \{P_\theta, \theta \in \Omega\}$ be dominated by $\mu$, let $p_\theta = dP_\theta/d\mu$, let $\mu g^{-1}$ be the measure defined by $\mu g^{-1}(A) = \mu[g^{-1}(A)]$, and suppose that $\mu$ is absolutely continuous with respect to $\mu g^{-1}$ for all $g \in G$. (i) Then $p_\theta(x) = p_{\bar g\theta}$
(i) A generalization of equation (6.1) is $\int_A f(x)\,dP_\theta(x) = \int_{gA} f(g^{-1}x)\,dP_{\bar g\theta}(x)$. (ii) If $P_{\theta_1}$ is absolutely continuous with respect to $P_{\theta_0}$, then $P_{\bar g\theta_1}$ is absolutely continuous with respect to $P_{\bar g\theta_0}$ and $\frac{dP_{\theta_1}}{dP_{\theta_0}}(x) = \frac{dP_{\bar g\theta_1}}{dP_{\bar g\theta_0}}(gx)$ (a.e. $P_{\theta_0}$). (iii) The distribution of
Envelope power function. Let $S(\alpha)$ be the class of all level-$\alpha$ tests of a hypothesis $H$, and let $\beta_\alpha^*(\theta)$ be the envelope power function, defined by $\beta_\alpha^*(\theta) = \sup_{\phi \in S(\alpha)} \beta_\phi(\theta)$, where $\beta_\phi$ denotes the power function of $\phi$. If the problem of testing $H$ is invariant under a group $G$, then
Consider a testing problem which is invariant under a group $G$ of transformations of the sample space, and let $\mathcal C$ be a class of tests which is closed under $G$, so that $\phi \in \mathcal C$ implies $\phi g \in \mathcal C$, where $\phi g$ is the test defined by $\phi g(x) = \phi(gx)$. If there exists an a.e. unique UMP member $\phi_0$ of $\mathcal C$, then
Show that (i) $G_1$ of Example 6.6.11 is a group; (ii) the test which rejects when $X_{21}^2/X_{11}^2 > C$ is UMP invariant under $G_1$; (iii) the smallest group containing $G_1$ and $G_2$ is the group $G$ of Example 6.6.11.
The totality of permutations of $K$ distinct numbers $a_1,\ldots,a_K$, for varying $a_1,\ldots,a_K$, can be represented as a subset $C_K$ of Euclidean $K$-space $R^K$, and the group $G$ of Example 6.5.1 as the union of $C_2, C_3, \ldots$. Let $\nu$ be the measure over $G$ which assigns to a subset $B$ of $G$ the value $\sum_{K=2}^\infty \mu_K(B \cap$
Almost invariance of a test φ with respect to the group G of either Problem 6.10(i) or Example 6.3.4 implies that φ is equivalent to an invariant test.
For testing the hypothesis that the correlation coefficient $\rho$ of a bivariate normal distribution is $\le \rho_0$, determine the power against the alternative $\rho = \rho_1$, when the level of significance $\alpha$ is .05, $\rho_0 = .3$, $\rho_1 = .5$, and the sample size $n$ is 50, 100, 200.
Section 6.5
Testing a correlation coefficient. Let $(X_1, Y_1),\ldots,(X_n, Y_n)$ be a sample from a bivariate normal distribution. (i) For testing $\rho \le \rho_0$ against $\rho > \rho_0$ there exists a UMP invariant test with respect to the group of all transformations $X_i' = aX_i + b$, $Y_i' = cY_i + d$ for which $a, c > 0$. This test
Two-sided $t$-test. (i) Let $X_1,\ldots,X_n$ be a sample from $N(\xi, \sigma^2)$. For testing $\xi = 0$ against $\xi \ne 0$, there exists a UMP invariant test with respect to the group $X_i' = cX_i$, $c \ne 0$, given by the two-sided $t$-test (5.17). (ii) Let $X_1,\ldots,X_m$ and $Y_1,\ldots,Y_n$ be samples from $N(\xi, \sigma^2)$ and $N(\eta,$
(i) When testing $H : p \le p_0$ against $K : p > p_0$ by means of the test corresponding to (6.13), determine the sample size required to obtain power $\beta$ against $p = p_1$, $\alpha = .05$, $\beta = .9$ for the cases $p_0 = .1$, $p_1 = .15, .20, .25$; $p_0 = .05$, $p_1 = .10, .15, .20, .25$; $p_0 = .01$, $p_1 = .02, .05, .10, .15,$
Let X1,...,Xn be independent and normally distributed. Suppose Xi has mean µi and variance σ2 (which is the same for all i). Consider testing the null hypothesis that µi = 0 for all i. Using invariance considerations, find a UMP invariant test with respect to a suitable group of transformations
Show that the test of Problem 6.9(i) reduces to (i) $[x_{(n)} - x_{(1)}]/S < c$ for normal vs. uniform; (ii) $[\bar x - x_{(1)}]/S < c$ for normal vs. exponential; (iii) $[\bar x - x_{(1)}]/[x_{(n)} - x_{(1)}] < c$ for uniform vs. exponential. (Uthoff, 1970.) Note. When testing for normality, one is typically not interested