Testing Statistical Hypotheses Volume I 4th Edition E.L. Lehmann, Joseph P. Romano - Solutions
Solve the problem corresponding to Example 6.12.1 when (i) X1, ..., Xn is a sample from the exponential density E(ξ, σ), and the parameter being estimated is σ; (ii) X1, ..., Xn is a sample from the uniform density U(ξ, ξ + τ), and the parameter being estimated is τ.
Generalize the confidence sets of Example 6.11.3 to the case that the X_i are N(ξ_i, d_i σ²), where the d's are known constants.
Let X1, ..., Xm; Y1, ..., Yn be independently normally distributed as N(ξ, σ²) and N(η, σ²), respectively. Determine the equivariant confidence sets for η − ξ that have smallest Lebesgue measure when (i) σ is known; (ii) σ is unknown.
In Example 6.12.4, show that (i) both sets (6.60) are intervals; (ii) the sets given by v p(v) > C coincide with the intervals (5.41).
The confidence sets (6.52) are uniformly most accurate equivariant under the group G defined at the end of Example 6.12.3.
Let X1, ..., Xr be i.i.d. N(0, 1), and let S² be independent of the X's and distributed as χ²_ν. Then the distribution of (√ν X1/S, ..., √ν Xr/S) is a central multivariate t-distribution, and its density is
$$p(v_1, \dots, v_r) = \frac{\Gamma\!\left(\tfrac12(\nu + r)\right)}{(\pi\nu)^{r/2}\,\Gamma(\nu/2)}\left(1 + \frac{1}{\nu}\sum v_i^2\right)^{-\frac12(\nu + r)}.$$
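Not part of the problem set: a small simulation sketch (my own illustration, assuming numpy and scipy are available) that makes the claim concrete by checking that each coordinate of the vector is marginally t with ν degrees of freedom.

```python
# Simulation sketch (assumed setup, not from the text): draw X ~ N(0, I_r) and
# S^2 ~ chi^2_nu independently, form V_i = sqrt(nu) * X_i / S, and check that
# each coordinate of V is marginally t_nu via a Kolmogorov-Smirnov test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
r, nu, reps = 3, 5, 20000

X = rng.standard_normal((reps, r))
S = np.sqrt(rng.chisquare(nu, size=(reps, 1)))
V = np.sqrt(nu) * X / S                      # rows are draws of (V_1, ..., V_r)

for i in range(r):
    stat, pval = stats.kstest(V[:, i], stats.t(df=nu).cdf)
    print(f"coordinate {i}: KS p-value = {pval:.3f}")   # should not be systematically small
```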
Show that in Example 6.12.1, (i) the confidence sets σ²/S² ∈ A** with A** given by (6.45) coincide with the uniformly most accurate unbiased confidence sets for σ²; (ii) if (a, b) is best with respect to (6.44) for σ, then (a^r, b^r) is best for σ^r (r > 0).
In Example 6.12.1, the density p(v) of V = 1/S² is unimodal.
In Examples 6.12.1 and 6.12.2 there do not exist equivariant sets that uniformly minimize the probability of covering false values.
… provides a simple check of the equivariance of confidence sets. In Example 6.12.2, for instance, the confidence sets (6.46) are based on the pivotal vector (X1 − ξ1, ..., Xr − ξr), and hence are equivariant. [Section 6.12]
Under the assumptions of Problem 6.72, suppose that a family of confidence sets S(x) is equivariant under G*. Then there exists a set B in the range space of the pivotal V such that (6.75) holds. In this sense, all equivariant confidence sets can be obtained from pivotals. [Let A be the subset of …
Under the assumptions of the preceding problem, the confidence set S(x) is equivariant under G∗.
(i) If G̃ is transitive over X × Ω and V(X, θ) is maximal invariant under G̃, then V(X, θ) is pivotal. (ii) By (i), any quantity W(X, θ) which is invariant under G̃ is pivotal; give an example showing that the converse need not be true.
Let V(X, θ) be any pivotal quantity [i.e., have a fixed probability distribution independent of (θ, ϑ)], and let B be any set in the range space of V with probability P(V ∈ B) = 1 − α. Then the sets S(x) defined by
θ ∈ S(x) if and only if V(x, θ) ∈ B   (6.75)
are confidence sets for θ with confidence coefficient 1 − α.
(i) Let (X1, Y1), ..., (Xn, Yn) be a sample from a bivariate normal distribution, and let
$$\underline{\rho} = C^{-1}\!\left(\frac{\sum(X_i - \bar X)(Y_i - \bar Y)}{\sqrt{\sum(X_i - \bar X)^2\sum(Y_i - \bar Y)^2}}\right),$$
where C(ρ) is determined such that
$$P_\theta\!\left\{\frac{\sum(X_i - \bar X)(Y_i - \bar Y)}{\sqrt{\sum(X_i - \bar X)^2\sum(Y_i - \bar Y)^2}} \le C(\rho)\right\} = 1 - \alpha.$$
(i) Let X1, ..., Xn be independently distributed as N(ξ, σ²), and let θ = ξ/σ. The lower confidence bounds θ̲ for θ, which at confidence level 1 − α are uniformly most accurate invariant under the transformations X′_i = aX_i, are
$$\underline{\theta} = C^{-1}\!\left(\frac{\sqrt{n}\,\bar X}{\sqrt{\sum(X_i - \bar X)^2/(n - 1)}}\right),$$
where …
Counterexample. The following example shows that the equivariance of S(x) assumed in the paragraph following Lemma 6.11.1 does not follow from the other assumptions of this lemma. In Example 6.5.1, let n = 1, let G(1) be the group G of Example 6.5.1, and let G(2) be the corresponding group when the
(i) One-sided equivariant confidence limits. Let θ be real-valued, and suppose that, for each θ0, the problem of testing θ ≤ θ0 against θ > θ0 (in the presence of nuisance parameters ϑ) remains invariant under a group Gθ0 and that A(θ0) is a UMP invariant acceptance region for this
Let X1, ..., Xn; Y1, ..., Yn be samples from N(ξ, σ²) and N(η, τ²), respectively. Then the confidence intervals (5.42) for τ²/σ², which can be written as
$$\frac{\sum(Y_j - \bar Y)^2}{k\sum(X_i - \bar X)^2} \le \frac{\tau^2}{\sigma^2} \le \frac{k\sum(Y_j - \bar Y)^2}{\sum(X_i - \bar X)^2},$$
are uniformly most accurate equivariant with respect to the …
In Example 6.11.1, a family of sets S(x, y) is a class of equivariant confidence sets if and only if there exists a set R of real numbers such that
$$S(x, y) = \bigcup_{r \in R}\{(\xi, \eta) : (x - \xi)^2 + (y - \eta)^2 = r^2\}.$$
The hypothesis of independence. Let (X1, Y1), ..., (XN, YN) be a sample from a bivariate distribution, and (X(1), Z1), ..., (X(N), ZN) be the same sample arranged according to increasing values of the X's, so that the Z's are a permutation of the Y's. Let Ri be the rank of Xi among …
In the preceding problem let U_{ij} = 1 if (j − i)(Z_j − Z_i) > 0, and = 0 otherwise. (i) The test statistic Σ i T_i can be expressed in terms of the U's through the relation
$$\sum_{i=1}^{N} i\,T_i = \sum_{i<j}(j - i)U_{ij} + \frac{N(N + 1)(N + 2)}{6}.$$
(ii) The smallest number of steps [in the sense of Problem 6.42(ii)] by …
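As a side check (my own illustration, not from the text), the identity in part (i) is easy to confirm numerically for random data:

```python
# Numerical check of the identity in (i):
#   sum_i i*T_i = sum_{i<j} (j-i)*U_ij + N(N+1)(N+2)/6,
# where T_i is the rank of Z_i and U_ij = 1 if (j-i)(Z_j - Z_i) > 0.
import numpy as np

rng = np.random.default_rng(1)
N = 8
Z = rng.standard_normal(N)
T = Z.argsort().argsort() + 1                      # ranks T_1, ..., T_N

lhs = sum((i + 1) * T[i] for i in range(N))
rhs = sum((j - i) * ((Z[j] - Z[i]) > 0)
          for i in range(N) for j in range(i + 1, N))
rhs += N * (N + 1) * (N + 2) // 6
print(lhs, rhs)                                    # the two values agree
```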
with C the class of transformations z′_1 = z_1, z′_i = f_i(z_i) for i > 1, where z < f_2(z) < ··· < f_N(z) and each f_i is nondecreasing. If F0 is the class of N-tuples (F1, ..., FN) with F1 = ··· = FN, then F1 coincides with the class K of alternatives.]
The hypothesis of randomness. Let Z1, ..., ZN be independently distributed with distributions F1, ..., FN, and let Ti denote the rank of Zi among the Z's. For testing the hypothesis of randomness F1 = ··· = FN against the alternatives K of an upward trend, namely, that Zi is stochastically …
Unbiased tests of symmetry. Let Z1, ..., ZN be a sample, and let φ be any rank test of the hypothesis of symmetry with respect to the origin such that z_i ≤ z′_i for all i implies φ(z_1, ..., z_N) ≤ φ(z′_1, ..., z′_N). Then φ is unbiased against the one-sided alternatives that the Z's are …
An alternative expression for (6.71) is obtained if the distribution of Z is characterized by (ρ, F, G). If then G = h(F) and h is differentiable, the distribution of n and the S_j is given by
$$\rho^m(1 - \rho)^n\,E\!\left[h'(U_{(s_1)})\cdots h'(U_{(s_n)})\right], \qquad (6.72)$$
where U_(1) < ··· < U_(N) is an ordered sample from …
Let Z1, ..., ZN be a sample from a distribution with density f(z − θ), where f(z) is positive for all z and f is symmetric about 0, and let m, n, and the S_j be defined as in the preceding problem. (i) The distribution of n and the S_j is given by P{the number of positive Z's is n and S1 = …
(i) Let m and n be the numbers of negative and positive observations among Z1, ..., ZN, and let S1 < ··· < Sn denote the ranks of the positive Z's among |Z1|, ..., |ZN|. Consider the N + ½N(N − 1) distinct sums Z_i + Z_j with i = j as well as i ≠ j. The Wilcoxon signed-rank statistic ΣS_j, …
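A quick numerical illustration (mine, not the book's) of the standard identity this problem sets up: the signed-rank sum ΣS_j equals the number of positive sums Z_i + Z_j with i ≤ j.

```python
# Numerical check (my own illustration): the sum of the ranks of the positive
# Z's among |Z_1|, ..., |Z_N| equals the number of sums Z_i + Z_j > 0 taken
# over all pairs with i <= j (i = j included), i.e. over the N + N(N-1)/2 sums.
import numpy as np

rng = np.random.default_rng(2)
N = 10
Z = rng.standard_normal(N) + 0.3                   # continuous, so no ties or zeros

ranks = np.abs(Z).argsort().argsort() + 1          # ranks of |Z_i|
signed_rank_sum = ranks[Z > 0].sum()

walsh_count = sum((Z[i] + Z[j]) > 0
                  for i in range(N) for j in range(i, N))
print(signed_rank_sum, walsh_count)                # the two values agree
```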
(i) Let X1, ..., Xm; Y1, ..., Yn be i.i.d. according to a continuous distribution F, let the ranks of the Y's be S1 < ··· < Sn, and let T = h(S1) + ··· + h(Sn). Then if either m = n or h(s) + h(N + 1 − s) is independent of s, the distribution of T is symmetric about $\frac{n}{N}\sum_{i=1}^{N} h(i)$. (ii) Show …
Continuation. (i) There exists at every significance level α a test of H : G = F which has power > α against all continuous alternatives (F, G) with F ≠ G. (ii) There does not exist a nonrandomized unbiased rank test of H against all G ≠ F at level $\alpha = 1\big/\binom{m+n}{n}$. [(i): Let X_i, X′_i; Y_i, Y′_i (i = …
(i) Let X, X′ and Y, Y′ be independent samples of size 2 from continuous distributions F and G, respectively. Then
$$p = P\{\max(X, X') < \min(Y, Y')\} + P\{\max(Y, Y') < \min(X, X')\} = \tfrac13 + 2\Delta,$$
where $\Delta = \int(F - G)^2\,d[(F + G)/2]$. (ii) Δ = 0 if and only if F = G. [(i): $p = \int(1 - F)^2\,dG^2 + \int(1 - G)^2\,dF^2$ …
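A Monte Carlo sketch (my own check, not from the text; the choice of normal F and G is an assumption) that lets the reconstructed constant in the identity be verified numerically: estimate p by simulation and compare with 1/3 + 2Δ, computing Δ by numerical integration.

```python
# Monte Carlo check: p = P{max(X,X') < min(Y,Y')} + P{max(Y,Y') < min(X,X')}
# versus 1/3 + 2*Delta, Delta = int (F - G)^2 d[(F + G)/2],
# for F = N(0,1) and G = N(0.5,1); with F = G the estimate should be near 1/3.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
reps, shift = 200000, 0.5
F, G = stats.norm(0.0, 1.0), stats.norm(shift, 1.0)

X = F.rvs((reps, 2), random_state=rng)
Y = G.rvs((reps, 2), random_state=rng)
p_hat = np.mean(X.max(axis=1) < Y.min(axis=1)) + np.mean(Y.max(axis=1) < X.min(axis=1))

z = np.linspace(-10, 10 + shift, 20001)
dz = z[1] - z[0]
delta = np.sum((F.cdf(z) - G.cdf(z)) ** 2 * 0.5 * (F.pdf(z) + G.pdf(z))) * dz
print(p_hat, 1 / 3 + 2 * delta)                    # the two numbers should be close
```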
calculated for the observations X1, ..., Xm; Y1 − Δ, ..., Yn − Δ. [An alternative measure of the amount by which G exceeds F (without assuming a location model) is p = P{X < Y}. The literature on confidence intervals for p is reviewed in Mee (1990).]
and the probability on the right side is calculated for Δ = 0. (ii) Determine the above confidence interval for Δ when m = n = 6, the confidence coefficient is 20/21, and the observations are x: 0.113, 0.212, 0.249, 0.522, 0.709, 0.788, and y: 0.221, 0.433, 0.724, 0.913, 0.917, 1.58. (iii) For the …
(i) If X1, ..., Xm and Y1, ..., Yn are samples from F(x) and G(y) = F(y − Δ), respectively (F continuous), and D_(1) < ··· < D_(mn) denote the ordered differences Y_j − X_i, then P{D_(k) …
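A small numerical sketch (my own illustration, using the data quoted in part (ii) above): form the mn ordered differences of Y_j − X_i and read off the interval [D_(k), D_(mn+1−k)]; its exact coverage is obtained here, under the usual duality with the Mann-Whitney test, from the brute-force null distribution of the Mann-Whitney statistic U.

```python
# Ordered-differences confidence interval for the shift Delta.
from itertools import combinations
import numpy as np

x = np.array([0.113, 0.212, 0.249, 0.522, 0.709, 0.788])
y = np.array([0.221, 0.433, 0.724, 0.913, 0.917, 1.58])
m, n = len(x), len(y)

# Exact null distribution of U = #{(i, j) : X_i < Y_j}: enumerate which of the
# m + n rank positions the Y-sample occupies (all C(m+n, n) = 924 equally likely).
counts = {}
for pos in combinations(range(m + n), n):
    u = sum(p - r for r, p in enumerate(pos))
    counts[u] = counts.get(u, 0) + 1
total = sum(counts.values())

d = np.sort((y[:, None] - x[None, :]).ravel())     # the mn differences Y_j - X_i
k = 6                                              # any 1 <= k <= mn/2 works
coverage = 1 - 2 * sum(counts.get(u, 0) for u in range(k)) / total
print(f"[D_({k}), D_({m*n+1-k})] = [{d[k-1]:.3f}, {d[m*n-k]:.3f}], exact coverage = {coverage:.4f}")
```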
Let X1,..., Xm; Y1,..., Yn be samples from a common continuous distribution F. Then the Wilcoxon statistic U defined in
Let F0 be a family of probability measures over (X , A), and let C be a class of transformations of the space X . Define a class F1 of distributions by F1 ∈ F1 if there exists F0 ∈ F0 and f ∈ C such that the distribution of f (X) is F1 when that of X is F0. If φ is any test satisfying (a)
An alternative proof of the optimum property of the Wilcoxon test for detecting a shift in the logistic distribution is obtained from the preceding problem by equating F(x − θ) with (1 − θ)F(x) + θF²(x), neglecting powers of θ higher than the first. This leads to the differential equation F …
For sufficiently small θ > 0, the Wilcoxon test at level $\alpha = k\big/\binom{N}{n}$, k a positive integer, maximizes the power (among rank tests) against the alternatives (F, G) with G = (1 − θ)F + θF².
(i) If X1,..., Xm and Y1,..., Yn are samples with continuous cumulative distribution functions F and G = h(F) respectively, and if h is differentiable, the distribution of the ranks S1
Distribution of order statistics. (i) If Z1, ..., ZN is a sample from a cumulative distribution function F with density f, the joint density of Y_i = Z_{(s_i)}, i = 1, ..., n, is
$$\frac{N!\,f(y_1)\cdots f(y_n)}{(s_1 - 1)!\,(s_2 - s_1 - 1)!\cdots(N - s_n)!}\,[F(y_1)]^{s_1 - 1}[F(y_2) - F(y_1)]^{s_2 - s_1 - 1}\cdots[1 - \;\dots \qquad (6.67)$$
Under the assumptions of the preceding problem, if F_i = h_i(F), the distribution of the ranks T1, ..., TN of Z1, ..., ZN depends only on the h_i, not on F. If the h_i are differentiable, the distribution of the T_i is given by
$$P\{T_1 = t_1, \dots, T_N = t_N\} = \frac{E\!\left[h_1'(U_{(t_1)})\cdots h_N'(U_{(t_N)})\right]}{N!}, \qquad (6.66)$$
where U_(1) < ··· …
Let Z_i have a continuous cumulative distribution function F_i (i = 1, ..., N), and let G be the group of all transformations Z′_i = f(Z_i) such that f is continuous and strictly increasing. (i) The transformation induced by f in the space of distributions is F′_i = F_i(f⁻¹). (ii) Two N-tuples of …
(i) For any continuous cumulative distribution function F, define F⁻¹(0) = −∞, F⁻¹(y) = inf{x : F(x) = y} for 0 < y < 1, F⁻¹(1) = ∞ if F(x) < 1 for all finite x, and otherwise inf{x : F(x) = 1}. Then F[F⁻¹(y)] = y for all 0 ≤ y ≤ 1, but F⁻¹[F(y)] may be < y. (ii) Let Z have a …
(i) Let Z1, ..., ZN be independently distributed with densities f1, ..., fN, and let the rank of Z_i be denoted by T_i. If f is any probability density which is positive whenever at least one of the f_i is positive, then
$$P\{T_1 = t_1, \dots, T_N = t_N\} = \frac{1}{N!}\,E\!\left[\frac{f_1(V_{(t_1)})}{f(V_{(t_1)})}\cdots\frac{f_N(V_{(t_N)})}{f(V_{(t_N)})}\right],$$ …
Expectation and variance of Wilcoxon statistic. If the X's and Y's are samples from continuous distributions F and G, respectively, the expectation and variance of the Wilcoxon statistic U defined in the preceding problem are given by
$$E\!\left(\frac{U}{mn}\right) = P\{X < Y\} = \int F\,dG \qquad (6.62)$$
and $mn\,\operatorname{Var}\!\left(\frac{U}{mn}\right) = \int F\,dG + \dots$
Wilcoxon two-sample test. Let U_{ij} = 1 or 0 as X_i < Y_j or X_i > Y_j, and let U = ΣU_{ij} be the number of pairs X_i, Y_j with X_i < Y_j. (i) Then U = ΣS_i − ½n(n + 1), where S1 < ··· < Sn are the ranks of the Y's, so that the test with rejection region U > C is equivalent to the Wilcoxon test. (ii) …
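The identity in part (i) is easy to check numerically; the sketch below is my own illustration (assuming continuous data, so there are no ties).

```python
# Quick numerical check: U = #{(i, j) : X_i < Y_j} equals sum(S) - n(n+1)/2,
# where S_1 < ... < S_n are the ranks of the Y's in the combined sample.
import numpy as np

rng = np.random.default_rng(4)
m, n = 7, 5
x, y = rng.standard_normal(m), rng.standard_normal(n) + 0.5

u = np.sum(x[:, None] < y[None, :])                # number of pairs with X_i < Y_j
combined = np.concatenate([x, y])
ranks = combined.argsort().argsort() + 1           # ranks in the combined sample
s = ranks[m:]                                      # ranks of the Y's
print(u, s.sum() - n * (n + 1) // 2)               # the two values agree
```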
Suppose X = (X1, ..., Xk) is multivariate normal with unknown mean vector (θ1, ..., θk) and known nonsingular covariance matrix Σ. Consider testing the null hypothesis θ_i = 0 for all i against θ_i ≠ 0 for some i. Let C be any closed convex subset of k-dimensional Euclidean space, and let φ be …
For the model of the preceding problem, generalize Example 6.7.3 (continued) to show that the two-sided t-test is a Bayes solution for an appropriate prior distribution.
Let X1, ..., Xm; Y1, ..., Yn be independent N(ξ, σ²) and N(η, σ²), respectively. The one-sided t-test of H : δ = ξ/σ ≤ 0 is admissible against the alternatives (i) 0 < δ < δ1 for any δ1 > 0; (ii) δ > δ2 for any δ2 > 0.
Verify (i) the admissibility of the rejection region (6.27); (ii) the expression for I(z) given in the proof of Lemma 6.7.1.
(i) In Example 6.7.4 show that there exist C0, C1 such that λ0(η) and λ1(η) are probability densities (with respect to Lebesgue measure). (ii) Verify the densities h0 and h1.
(i) The acceptance region T1/√T2 ≤ C of Example 6.7.3 is a convex set in the (T1, T2) plane. (ii) In Example 6.7.3, the conditions of Theorem 6.7.1 are not satisfied for the sets A : T1/√T2 ≤ C and : ξ > k.
(i) The following example shows that α-admissibility does not always imply d-admissibility. Let X be distributed as U(0, θ), and consider the tests ϕ1 and ϕ2 which reject when, respectively, X < 1 and X < 3/2 for testing H : θ = 2 against K : θ = 1. Then for α = 3/4, ϕ1 and ϕ2 are both …
The definition of d-admissibility of a test coincides with the admissibility definition given in Section 1.8 when applied to a two-decision procedure with loss 0 or 1 as the decision taken is correct or false.
The following UMP unbiased tests of Chapter 5 are also UMP invariant under change in scale: (i) The test of g ≤ g0 in a gamma distribution (Problem 5.30). (ii) The test of b1 ≤ b2 in Problem 5.18(i).
is also UMP similar. [Consider the problem of testing α = 0 versus α > 0 in the two-parameter exponential family with density $C(\alpha, \tau)\exp\!\left\{-\frac{\alpha}{2\tau^2}\sum x_i^2 - \frac{1-\alpha}{\tau}\sum|x_i|\right\}$, 0 ≤ α < 1.] Note. For the analogous result for the tests of Problems 6.15 and 6.16, see Quesenberry and Starbuck (1976).
The UMP invariant test of …
Let G be a group of transformations of X, let A be a σ-field of subsets of X, and let μ be a measure over (X, A). Then a set A ∈ A is said to be almost invariant if its indicator function is almost invariant. (i) The totality of almost invariant sets forms a σ-field A0, and a critical function …
Inadmissible likelihood ratio test. In many applications in which a UMP invariant test exists, it coincides with the likelihood ratio test. That this is, however, not always the case is seen from the following example. Let P1, ..., Pn be n equidistant points on the circle x² + y² = 4, and Q1, ...
Invariance of likelihood ratio. Let the family of distributions P = {P_θ, θ ∈ Ω} be dominated by μ, let p_θ = dP_θ/dμ, let μg⁻¹ be the measure defined by μg⁻¹(A) = μ[g⁻¹(A)], and suppose that μ is absolutely continuous with respect to μg⁻¹ for all g ∈ G. (i) Then p_θ(x) = p_{ḡθ} …
(i) A generalization of equation (6.2) is
$$\int_A f(x)\,dP_\theta(x) = \int_{gA} f(g^{-1}x)\,dP_{\bar g\theta}(x).$$
(ii) If P_{θ1} is absolutely continuous with respect to P_{θ0}, then P_{ḡθ1} is absolutely continuous with respect to P_{ḡθ0} and
$$\frac{dP_{\theta_1}}{dP_{\theta_0}}(x) = \frac{dP_{\bar g\theta_1}}{dP_{\bar g\theta_0}}(gx) \quad \text{a.e. } P_{\theta_0}.$$
(iii) The distribution of d …
Envelope power function. Let S(α) be the class of all level-α tests of a hypothesis H, and let β*_α(θ) be the envelope power function, defined by β*_α(θ) = sup_{φ∈S(α)} β_φ(θ), where β_φ denotes the power function of φ. If the problem of testing H is invariant under a group G, then …
Consider a testing problem which is invariant under a group G of transformations of the sample space, and let C be a class of tests which is closed under G, so that φ ∈ C implies φg ∈ C, where φg is the test defined by φg(x) = φ(gx). If there exists an a.e. unique UMP member φ0 of C, then
Show that (i) G1 of Example 6.6.2 is a group; (ii) the test which rejects when $X_{21}^2/X_{11}^2 > C$ is UMP invariant under G1; (iii) the smallest group containing G1 and G2 is the group G of Example 6.6.2.
The totality of permutations of K distinct numbers a1, ..., aK, for varying a1, ..., aK, can be represented as a subset C_K of Euclidean K-space R^K, and the group G of Example 6.5.1 as the union of C2, C3, ... . Let ν be the measure over G which assigns to a subset B of G the value $\sum_{K=2}^{\infty}\mu_K(B \cap$ …
Almost invariance of a test φ with respect to the group G of either Problem 6.11(i) or Example 6.3.5 implies that φ is equivalent to an invariant test.
For testing the hypothesis that the correlation coefficient ρ of a bivariate normal distribution is ≤ ρ0, determine the power against the alternative ρ = ρ1 when the level of significance α is 0.05, ρ0 = 0.3, ρ1 = 0.5, and the sample size n is 50, 100, 200. [Section 6.5]
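The sketch below is my own illustration and not the exact calculation the problem asks for: it approximates the power using Fisher's z-transform, whereas the exact answer would use the distribution of the sample correlation coefficient quoted elsewhere in this list.

```python
# Approximate power of the one-sided test of rho <= rho0 via Fisher's z:
# z(r) = atanh(r) is approximately N(atanh(rho), 1/(n-3)).
import numpy as np
from scipy import stats

alpha, rho0, rho1 = 0.05, 0.3, 0.5
z_alpha = stats.norm.ppf(1 - alpha)

for n in (50, 100, 200):
    drift = np.sqrt(n - 3) * (np.arctanh(rho1) - np.arctanh(rho0))
    power = 1 - stats.norm.cdf(z_alpha - drift)
    print(f"n = {n:3d}: approximate power = {power:.3f}")
```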
Let (Xi, Yi) be independent N(μi, σ2) for i = 1,..., n. The parameters μ1,..., μn and σ2 are all unknown. For testing σ = 1 against σ > 1, determine the UMPI level α test. Is the test also UMPU?
Testing a correlation coefficient. Let (X1, Y1), ..., (Xn, Yn) be a sample from a bivariate normal distribution. (i) For testing ρ ≤ ρ0 against ρ > ρ0 there exists a UMP invariant test with respect to the group of all transformations X′_i = aX_i + b, Y′_i = cY_i + d for which a, c > 0. This test …
Two-sided t-test. (i) Let X1, ..., Xn be a sample from N(ξ, σ²). For testing ξ = 0 against ξ ≠ 0, there exists a UMP invariant test with respect to the group X′_i = cX_i, c ≠ 0, given by the two-sided t-test (5.17). (ii) Let X1, ..., Xm, and Y1, ..., Yn be samples from N(ξ, σ²) and N(η, …
(i) When testing H : p ≤ p0 against K : p > p0 by means of the test corresponding to (6.15), determine the sample size required to obtain power β against p = p1, α = 0.05, β = 0.9 for the cases p0 = 0.1, p1 = 0.15, 0.20, 0.25; p0 = 0.05, p1 = 0.10, 0.15, 0.20, 0.25; p0 = 0.01, p1 = 0.02, 0.05, …
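One way to carry out this computation is sketched below (my own illustration, using the conservative nonrandomized version of the binomial test; the randomized test of (6.15) would need a slightly smaller n).

```python
# Smallest n for which the conservative level-alpha binomial test of
# H: p <= p0 has power >= beta at p1 (nonrandomized version).
from scipy import stats

def sample_size(p0, p1, alpha=0.05, beta=0.9, n_max=5000):
    for n in range(1, n_max + 1):
        c = 0
        while stats.binom.sf(c - 1, n, p0) > alpha:   # sf(c-1) = P(X >= c)
            c += 1
        if stats.binom.sf(c - 1, n, p1) >= beta:
            return n, c
    return None

for p0, p1 in [(0.1, 0.15), (0.1, 0.25), (0.05, 0.10), (0.01, 0.02)]:
    n, c = sample_size(p0, p1)
    print(f"p0={p0}, p1={p1}: n = {n}, reject when X >= {c}")
```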
Let X1,..., Xn be independent and normally distributed. Suppose Xi has mean μi and variance σ2 (which is the same for all i). Consider testing the null hypothesis that μi = 0 for all i. Using invariance considerations, find a UMP invariant test with respect to a suitable group of transformations
Show that the test of Problem 6.10(i) reduces to (i) [x(n) − x(1)]/S < c for normal versus uniform; (ii) [x̄ − x(1)]/S < c for normal versus exponential; (iii) [x̄ − x(1)]/[x(n) − x(1)] < c for uniform versus exponential. (Uthoff, 1970.) Note. When testing for normality, one is typically not …
Uniform versus triangular. (i) For f0(x) = 1 (0 < x < 1), f1(x) = 2x (0 < x < 1), the test of Problem 6.13 reduces to rejecting when T = x(n)/x̄ < C. (ii) Under f0, the statistic 2n log T is distributed as χ²_{2n}. (Quesenberry and Starbuck, 1976.)
Normal versus double exponential. For f0(x) = e^{−x²/2}/√(2π), f1(x) = e^{−|x|}/2, the test of the preceding problem reduces to rejecting when Σx_i²/(Σ|x_i|)² < C. (Hogg, 1972.) Note. The corresponding test when both location and scale are unknown is obtained in Uthoff (1973). Testing normality against …
Let X1, ..., Xn be a sample from a distribution with density
$$\frac{1}{\tau^n}\,f\!\left(\frac{x_1}{\tau}\right)\cdots f\!\left(\frac{x_n}{\tau}\right),$$
where f(x) is either zero for x < 0 or symmetric about zero. The most powerful scale-invariant test for testing H : f = f0 against K : f = f1 rejects when
$$\frac{\int_0^\infty v^{n-1} f_1(vx_1)\cdots f_1(vx_n)\,dv}{\int_0^\infty v^{n-1} f_0(vx_1)\cdots f_0(vx_n)\,dv} > C.$$
If X1, ..., Xn and Y1, ..., Yn are samples from N(ξ, σ²) and N(η, τ²), respectively, the problem of testing τ² = σ² against the two-sided alternatives τ² ≠ σ² remains invariant under the group G generated by the transformations X′_i = aX_i + b, Y′_i = aY_i + c (a ≠ 0), and X′_i = Y_i, Y′_i = X_i.
Let X1, ..., Xm; Y1, ..., Yn be samples from exponential distributions with densities σ⁻¹e^{−(x−ξ)/σ} for x ≥ ξ, and τ⁻¹e^{−(y−η)/τ} for y ≥ η. (i) For testing τ/σ ≤ Δ against τ/σ > Δ, there exists a UMP invariant test with respect to the group G : X′_i = aX_i + b, Y′_j = …
(i) Let X = (X1, ..., Xn) have probability density (1/θⁿ) f[(x1 − ξ)/θ, ..., (xn − ξ)/θ], where −∞ < ξ < ∞, 0 < θ are unknown, and where f is even. The problem of testing f = f0 against f = f1 remains invariant under the transformations x′_i = ax_i + b (i = 1, ..., n), a ≠ 0, −∞ < b < ∞ …
Let X, Y have the joint probability density f(x, y). Then the integral $h(z) = \int_{-\infty}^{\infty} f(y - z, y)\,dy$ is finite for almost all z, and is the probability density of Z = Y − X. [Since $P\{Z \le b\} = \int_{-\infty}^{b} h(z)\,dz$, it is finite and hence h is finite almost everywhere.]
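A numerical sketch (mine, for a concrete choice of f that is not part of the problem): with X, Y independent standard normal, the formula should reproduce the N(0, 2) density of Z = Y − X.

```python
# Check h(z) = int f(y - z, y) dy against the N(0, 2) density when
# f(x, y) = phi(x) phi(y) (independent standard normals).
import numpy as np
from scipy import stats

y = np.linspace(-12, 12, 4001)
dy = y[1] - y[0]

def h(z):
    return np.sum(stats.norm.pdf(y - z) * stats.norm.pdf(y)) * dy

for z in (-1.0, 0.0, 2.0):
    print(z, h(z), stats.norm.pdf(z, scale=np.sqrt(2)))   # the last two agree
```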
In Example 6.1.1, find a maximal invariant and the UMPI level α test.
Consider the situation of Example 6.3.1 with n = 1, and suppose that f is strictly increasing on (0, 1). (i) The likelihood ratio test rejects if X < α/2 or X > 1 − α/2. (ii) The MP invariant test agrees with the likelihood ratio test when f is convex. (iii) When f is concave, the MP invariant …
Prove Theorem 6.3.1 (i) by analogy with Example 6.3.1, and (ii) by the method of Example 6.3.2. [Hint: A maximal invariant under G is the set {g1x, ..., gNx}.]
(i) A sufficient condition for (6.9) to hold is that D is a normal subgroup of G. (ii) If G is the group of transformations x′ = ax + b, a ≠ 0, −∞ < b < ∞, then the subgroup of translations x′ = x + b is normal but the subgroup x′ = ax is not. [The defining property of a normal subgroup is that …
Suppose M is any m × p matrix. Show that M′M is positive semidefinite. Also, show that the rank of M′M equals the rank of M, so that in particular M′M is nonsingular if and only if m ≥ p and M is of rank p.
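A quick numerical illustration (not part of the problem) of both claims:

```python
# For a random m x p matrix M, M'M is symmetric positive semidefinite and
# rank(M'M) = rank(M); a deliberate rank deficiency is introduced below.
import numpy as np

rng = np.random.default_rng(5)
m, p = 6, 4
M = rng.standard_normal((m, p))
M[:, 3] = M[:, 0] + M[:, 1]                        # force rank(M) = 3

A = M.T @ M
print("eigenvalues of M'M:", np.round(np.linalg.eigvalsh(A), 6))   # all >= 0 up to rounding
print("rank(M) =", np.linalg.matrix_rank(M), " rank(M'M) =", np.linalg.matrix_rank(A))
```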
(i) Let X be the totality of points x = (x1, ..., xn) for which all coordinates are different from zero, and let G be the group of transformations x′_i = cx_i, c > 0. Then a maximal invariant under G is (sgn x_n, x1/x_n, ..., x_{n−1}/x_n), where sgn x is 1 or −1 as x is positive or negative. (ii) Let X be the …
Let G be a group of measurable transformations of (X, A) leaving P = {P_θ, θ ∈ Ω} invariant, and let T(x) be a measurable transformation to (T, B). Suppose that T(x1) = T(x2) implies T(gx1) = T(gx2) for all g ∈ G, so that G induces a group G* on T through g*T(x) = T(gx), and suppose …
If X, Y are positively regression dependent, they are positively quadrant dependent. [Positive regression dependence implies that P[Y ≤ y | X ≤ x] ≥ P[Y ≤ y | X ≤ x′] for all x < x′ and all y, (5.90) and (5.90) implies positive quadrant dependence.]
(i) The functions (5.78) are bivariate cumulative distribution functions. (ii) A pair of random variables with distribution (5.78) is positively regression dependent. [The distributions (5.78) were introduced by Morgenstern (1956).]
If X and Y have a bivariate normal distribution with correlation coefficient ρ > 0, they are positively regression dependent. [The conditional distribution of Y given x is normal with mean η + ρτσ⁻¹(x − ξ) and variance τ²(1 − ρ²). Through addition to such a variable of the positive …
If (X1, Y1), ..., (Xn, Yn) is a sample from a bivariate normal distribution, the probability density of the sample correlation coefficient R is
$$p_\rho(r) = \frac{2^{n-3}}{\pi(n-3)!}\,(1 - \rho^2)^{\frac12(n-1)}(1 - r^2)^{\frac12(n-4)}\sum_{k=0}^{\infty}\Gamma^2\!\left(\tfrac12(n + k - 1)\right)\frac{(2\rho r)^k}{k!} \qquad (5.85)$$
or alternatively p_ρ(r) = (n − …
(i) Let (X1, Y1), ..., (Xn, Yn) be a sample from the bivariate normal distribution (5.73), and let S₁² = Σ(X_i − X̄)², S₁₂ = Σ(X_i − X̄)(Y_i − Ȳ), S₂² = Σ(Y_i − Ȳ)². Then (S₁², S₁₂, S₂²) are distributed independently of (X̄, Ȳ), and their joint distribution is the same as that of …
(i) Let (X1, Y1), ..., (Xn, Yn) be a sample from the bivariate normal distribution (5.69), and let S₁² = Σ(X_i − X̄)², S₂² = Σ(Y_i − Ȳ)², S₁₂ = Σ(X_i − X̄)(Y_i − Ȳ). There exists a UMP unbiased test for testing the hypothesis τ/σ = Δ. Its acceptance region is |Δ²S₁² − S₂²| / (Δ²S₁² + …
(i) If the joint distribution of X and Y is the bivariate normal distribution (5.69), then the conditional distribution of Y given x is the normal distribution with variance τ²(1 − ρ²) and mean η + (ρτ/σ)(x − ξ). (ii) Let (X1, Y1), ..., (Xn, Yn) be a sample from a bivariate normal …
Generalize Problems 5.60(i) and 5.61 to the case of two groups of sizes m and n (c = 1).
Determine for each of the following classes of subsets of {1, ..., n} whether (together with the empty subset) it forms a group under the group operation of the preceding problem: all subsets {i1, ..., ir} with (i) r = 2; (ii) r even; (iii) r divisible by 3. (iv) Give two other examples of subgroups G0 …
The preceding problem establishes a 1 : 1 correspondence between the e − 1 permutations T of G0 which are not the identity and the e − 1 nonempty subsets {i1, ..., ir} of the set {1, ..., n}. If the permutations T and T′ correspond respectively to the subsets R = {i1, ..., ir} and R′ = {j1, ..., js}, then …
to the situation of part (i).[Hartigan (1969).]
(i) Given n pairs (x1, y1), ..., (xn, yn), let G be the group of 2ⁿ permutations of the 2n variables which interchange x_i and y_i in all, some, or none of the n pairs. Let G0 be any subgroup of G, and let e be the number of elements in G0. Any element g ∈ G0 (except the identity) is …
Let Z1, ..., Zn be i.i.d. according to a continuous distribution symmetric about θ, and let T_(1) < ··· < T_(M) be the ordered set of M = 2ⁿ − 1 subsample means (Z_{i1} + ··· + Z_{ir})/r, r ≥ 1. If T_(0) = −∞, T_(M+1) = ∞, then P_θ[T_(i) < θ < T_(i+1)] = 1/(M + 1) for all i = 0, 1, ..., M. [Hartigan …
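A Monte Carlo sketch (my own illustration, with n = 4 and a normal distribution as assumptions) of the equal-coverage property: θ should fall in each of the M + 1 gaps between consecutive ordered subsample means with probability 1/(M + 1).

```python
# With n = 4 there are M = 2^4 - 1 = 15 subsample means; count which of the
# M + 1 = 16 gaps (including the two unbounded ones) contains theta.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(6)
n, theta, reps = 4, 0.0, 20000
M = 2 ** n - 1

counts = np.zeros(M + 1, dtype=int)
for _ in range(reps):
    z = rng.standard_normal(n) + theta             # continuous, symmetric about theta
    means = [np.mean(z[list(s)]) for r in range(1, n + 1)
             for s in combinations(range(n), r)]
    counts[np.searchsorted(np.sort(means), theta)] += 1

print(np.round(counts / reps, 3))                  # each entry should be near 1/16 = 0.0625
```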