Testing Statistical Hypotheses 3rd Edition Erich L. Lehmann, Joseph P. Romano - Solutions
Show (14.49).
Use Minkowski’s Inequality (Section A.3) to show (14.47).
Let X1,...,Xn be i.i.d. random variables on [0,1] with unknown distribution P. The problem is to test P = P0, the uniform distribution on [0, 1]. Assume a parametric model with densities of the form (14.35) for some fixed positive integer k. Set T0(x) = 1 and assume the functions T1,...,Tk are
For testing P = P0 in the model of densities (14.35) with Tj the normalized Legendre polynomials, show that Neyman’s smooth test is consistent in power against any distribution P as long as the first k moments of P are not all identical to the first k moments of P0.
Show that (14.42) holds with B = ∞ if Varθ[Tj(X1)] is uniformly bounded in θ. Hint: Argue by contradiction. Suppose there exists hn with |hn| ≥ b such that E_{hn n^{-1/2}}(φ*n) converges to a limit less than the right side of (14.42). This is a contradiction if E_{hn n^{-1/2}}(φ*n) → 1 if
In Example 14.4.3, show that the multinomial distribution can be written in the form (14.35) for the given orthogonal choice of functions Tj.
Let X1,...,Xn be i.i.d. F, and consider testing the null hypothesis that F is the uniform (0,1) c.d.f. For θ = (θ1, θ2) ∈ R^2, consider a family of alternative densities of the form pθ(x) = C(θ) exp[θ1 T1(x) + θ2 T2(x)], 0 < x < 1.
Consider the limit distribution of the Chi-squared goodness-of-fit statistic for testing normality if using the maximum likelihood estimators to estimate the unknown parameters. Specifically, suppose X1,...,Xn are i.i.d. and the problem is to test whether the underlying distribution is N(θ, 1) for
In Example 14.3.1, verify (14.32) and determine the MLE β̂n for the linkage submodel being tested. Determine the limiting distribution of the Chi-squared statistic Qn(β̂n).
The Hardy-Weinberg law says the following. If gene frequencies are in equilibrium, the genotypes AA, Aa, and aa occur in a population with frequencies θ^2, 2θ(1−θ), and (1−θ)^2. In an i.i.d. sample of size n, with each outcome being an AA, Aa, or aa with the above probabilities, let X1, X2,
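A minimal Python sketch of the setup (not part of the problem set): genotype counts are drawn from the Hardy-Weinberg multinomial, θ is estimated by the standard gene-counting MLE θ̂ = (2·N_AA + N_Aa)/(2n), and a Pearson statistic with the estimated θ plugged in is computed; the sample size, seed, and true θ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true, n = 0.3, 500                      # illustrative values, not from the problem

# Hardy-Weinberg genotype probabilities for AA, Aa, aa
p = np.array([theta_true**2, 2*theta_true*(1 - theta_true), (1 - theta_true)**2])
n_AA, n_Aa, n_aa = rng.multinomial(n, p)

# gene-counting MLE of theta: each AA contributes two A alleles, each Aa one
theta_hat = (2*n_AA + n_Aa) / (2*n)

# Pearson chi-squared statistic with the estimated theta plugged in;
# with 3 cells and 1 estimated parameter its limit under H0 is chi-squared(1)
p_hat = np.array([theta_hat**2, 2*theta_hat*(1 - theta_hat), (1 - theta_hat)**2])
expected = n * p_hat
observed = np.array([n_AA, n_Aa, n_aa])
Q = np.sum((observed - expected)**2 / expected)
print(theta_hat, Q)
```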
Under the setup of Problem 12.61, determine a Chi-squared test statistic, as well as its limiting distribution under the null hypothesis. [For a discussion of the Chi-squared test for testing independence in a two-way table, see Diaconis and Efron (1985) and Loh (1989).]
Verify (14.33).
As in Section 14.3.2, consider the Chi-squared test for testing uniformity on (0, 1) based on k + 1 cells; call it φ*_{n,k}. Fix any B < ∞ and ε > 0. Let U_B be the set of u with u ≠ 0 and |u|^2 ≤ B. For alternative sequences of the form (14.25) with bn = n^{-1/2}, show that, if k is large enough
Recall M(k, h) defined by (14.27) and let Fk denote the c.d.f. of the central Chi-squared distribution with k degrees of freedom. Show that M(k, h) = α + γk h^2/2 + o(h^2) as h → 0, where γk = Fk(c_{k,1−α}) − F_{k+2}(c_{k,1−α}).
Prove Lemma 14.3.1(i).
Show that the result of Theorem 14.3.2(ii) holds for the likelihood ratio test.
Prove part (iii) of Theorem 14.3.1.
In the multinomial goodness of fit problem, calculate the Information matrix I(p) given by (14.22).
Prove the convergence (14.21).
(i) Verify (14.19). (ii) Verify (14.20).
Let X1, ..., Xn be a sample from the normal distribution with mean θ and variance 1, with cdf denoted by Fθ(·). Let Φ(z) denote the standard normal cdf, so that Fθ(t) = Φ(t − θ). For any two cdfs F and G, let ‖F − G‖ denote sup_t |F(t) − G(t)|. Let θ̂n be the estimator of θ
Suppose X1,...,Xn are i.i.d. with c.d.f. F on the real line. The problem is to test the null hypothesis H0 that the Xi are uniform on (0, θ] for some θ. Let θ̂n = max(X1,...,Xn), and let F̂n be the empirical distribution function. Let dK(F, G) be the Kolmogorov-Smirnov distance between F and
For testing the null hypothesis that X1,...,Xn are i.i.d. from a normal distribution with unknown mean µ and unknown variance σ^2, show that the null distribution of (14.13) does not depend on (µ, σ) (but it does depend on n). Describe a simulation method to approximate this null distribution.
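The form of (14.13) is not reproduced in this listing; assuming it is a Kolmogorov-Smirnov-type statistic with the MLEs of (µ, σ) plugged in (a Lilliefors-type statistic), the following sketch illustrates the simulation idea: because the statistic is location-scale invariant, its null distribution can be approximated by repeatedly drawing standard normal samples. The sample size and number of replications are illustrative.

```python
import numpy as np
from scipy.stats import norm

def ks_stat_estimated(x):
    """sup_t |F_hat_n(t) - Phi((t - mu_hat)/sigma_hat)|, evaluated at the order statistics."""
    n = len(x)
    mu_hat, sigma_hat = x.mean(), x.std(ddof=0)      # MLEs under normality
    z = np.sort((x - mu_hat) / sigma_hat)
    cdf = norm.cdf(z)
    i = np.arange(1, n + 1)
    # empirical cdf jumps: compare with i/n and (i-1)/n at each order statistic
    return np.max(np.maximum(i / n - cdf, cdf - (i - 1) / n))

def simulate_null(n, reps=10_000, seed=0):
    """Monte Carlo approximation of the null distribution; since the statistic is
    location-scale invariant, sampling from N(0,1) suffices."""
    rng = np.random.default_rng(seed)
    return np.array([ks_stat_estimated(rng.standard_normal(n)) for _ in range(reps)])

null = simulate_null(n=50)
print(np.quantile(null, 0.95))   # approximate 95% critical value for n = 50
```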
Generalize Theorem 14.2.2 to any EDF test statistic of the form n^{1/2} d(F̂n, F0), if d is a metric weaker than the Kolmogorov-Smirnov metric dK in the sense d(F, G) ≤ C dK(F, G) for some constant C. In particular, show the result applies to the Cramér-von Mises test.
Let F be the family of distributions having density F′ = f on (0, 1) and let F0′ = f0 be the uniform density. Consider testing the null hypothesis that F = F0 based on the Kolmogorov-Smirnov test. Show that, if dK(f, f0) is the sup distance between densities and 0
Let F0 be the uniform (0,1) c.d.f. and consider testing F = F0 by the Kolmogorov-Smirnov test. (i) Construct a sequence of alternatives Fn to F0 satisfying n^{1/2} dK(Fn, F0) → δ with 0 < δ < ∞. Take γn > 0 with n^{1/2} γn → δ > 0 and let Fn(t) be defined by Fn(t) = 0 if t ... s_{n,1−α}} ≤ P{n^{1/2} sup_t |Ĝn(t) − t| >
(i) Suppose {Pθ} is q.m.d. at θ0, where Pθ is a probability distribution on R with corresponding c.d.f. Fθ. Show that there exists B = Bθ0(h)
Suppose Fn satisfies n^{1/2} dK(Fn, F0) → 0. For testing F = F0 at level α, show that the limiting power of the Kolmogorov-Smirnov test against Fn is no better than α. In the case that both Fn and F0 are continuous, show that the limiting power is equal to α.
For testing F = F0, where F0 is the uniform (0,1) c.d.f., consider alternatives Fn to F0 of the form Fn(t) = (1 − λn)F0(t) + λn G(t), where G ≠ F0 is some fixed distribution. Show that, if λn = λ n^{-1/2}, then the limiting power of the Kolmogorov-Smirnov test is bounded away from α if λ is
(i) Let X1,...,Xn be i.i.d. real-valued random variables with c.d.f. F. Consider testing F = F0 against F ≠ F0 based on the Kolmogorov-Smirnov test. Fix F with n^{1/2} dK(F, F0) > s_{n,1−α}. Show that PF{Tn > s_{n,1−α}} ≥ 1 − 1/(4 |n^{1/2} dK(F, F0) − s_{n,1−α}|^2). Hint: Use (14.6) and Chebyshev's inequality.
Consider testing the difference of two population means µ(PX) − µ(PY) ≤ 0 in a nonparametric setting. Generalize Theorem 13.6.1 to obtain locally AUMP tests.
Let P be the set of all joint distributions in R^2 on some compact set. Let θ(P) denote the correlation functional. For testing θ(P) ≤ 0, construct an asymptotically optimal test in a nonparametric setting.
Provide the details for the optimality claimed in Example 13.6.3 for testing the variance in a nonparametric setting.
In Theorem 13.6.1, compute the limiting power against P_{u, hn^{-1/2}}, where h is chosen so that n^{1/2} θ(P_{u, hn^{-1/2}}) → δ. [The solution does not depend on u but only on the value of δ, which was noted by Pfanzagl and Wefelmeyer (1985).]
In Example 13.6.2, argue that the given test attains the optimal limiting power uniformly in h, for 0 ≤ h ≤ c and any c > 0.
Compare the bounds (13.101) and (13.104). For what u is each attainable? Why is (13.101) generally not attainable for all u, even though there exists a test for the submodel {P_{u,t}} for which the bound is attainable?
Verify (13.101).
Show that the family of densities (13.95) satisfies (11.77) for small enough θ.
Under the conditions of Theorem 13.5.5 used to prove an asymptotic maximin result for Rao's test, derive analogous optimality results for both the Wald and likelihood ratio tests.
Generalize Example 13.5.7 to the case of testing θ = θ0 versus θ ≠ θ0 in the presence of nuisance parameters.
As in Example 13.5.7, consider testing θ = θ0 versus θ ≠ θ0. Suppose φn is asymptotically level α and asymptotically unbiased in the sense lim inf_n E_{θ0 + hn^{-1/2}}(φn) ≥ α for any h ≠ 0. Argue that, among such tests φn, the two-sided Rao test φ_{n,2} is LAUMP.
Let C = C(α, δ, σ) be defined by (13.86). Show that C > δ − σ z_{1−α}. Use this to show that, in Example 13.5.6, the limiting power of φ*n always exceeds that of φ^{IUT}_n.
Show that the size of the TOST test considered in Example 13.5.5 is α.
Consider the one-sample N(µ, 1) problem for testing |µ| ≥ ∆ versus |µ| < ∆. Show that the level α test based on combining the two one-sided UMP level α tests has size strictly less than α.
Assume the conditions of Theorem 13.5.1. Consider the problem of testing g(θ) = 0 against g(θ) ≠ 0. Restrict attention to tests φn that are asymptotically unbiased in the sense lim inf_n inf_{θ: g(θ) ≠ 0} E_θ(φn) ≥ α, as well as (13.69). Prove a result analogous to Theorem 13.5.1. Hint: See
Verify (13.76) as well as the form of the matrix C(θ0).
Assume (13.75) and the setup described there. Show that the test that rejects when g(θ̂n) > z_{1−α} σ̂n is pointwise level α and has a power function such that there is equality in (13.74).
Derive the inequality (13.74) under general conditions which assume the model is asymptotically normal.
In Example 13.5.3, for testing ρ ≤ 0 versus ρ > 0, find the optimal limiting power of the LAUMP test against alternatives hn^{-1/2}. Compare with the case where the means and variances are known. Generalize to the case of testing ρ ≤ ρ0 against ρ > ρ0.
For the location scale model in Problem 13.45, argue that, for testing µ ≤ 0 versus µ > 0, the Wald test is LAUMP if β ≥ 1. If σ̂n is replaced by any consistent estimator of σ, does the LAUMP property continue to hold? If 1/2
For the location scale model of Example 13.5.2 with f(x) = C(β) exp[−|x|^β], argue that the family is q.m.d. if β > 1/2.
In the location scale model of Example 13.5.2, verify the expressions for the Information matrix. Deduce that the matrix is diagonal if f is an even function.
Let dN(h, C) denote the density of the normal distribution with mean vector h ∈ R^k and positive definite covariance matrix C. Prove that exp(⟨h, x⟩ − (1/2)⟨h, Ch⟩) dN(0, C)(x) is the density of N(Ch, C) evaluated at x. Hint: Use characteristic functions.
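A numerical spot-check of the claimed identity at a few points (illustrative h, C, and x; this is not a proof and is not part of the original problem set):

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(5)
C = np.array([[2.0, 0.5], [0.5, 1.0]])      # an arbitrary positive definite matrix
h = np.array([0.3, -1.2])

for _ in range(3):
    x = rng.normal(size=2)
    # left side: exp(<h, x> - <h, Ch>/2) times the N(0, C) density at x
    lhs = np.exp(h @ x - 0.5 * h @ C @ h) * multivariate_normal(np.zeros(2), C).pdf(x)
    # right side: the N(Ch, C) density at x
    rhs = multivariate_normal(C @ h, C).pdf(x)
    print(np.isclose(lhs, rhs))             # True at every sampled point
```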
Assume {Q_{n,h}, h ∈ R^k} is asymptotically normal according to Definition 13.4.1, with Zn and C satisfying (13.62). Show that, under Q_{n,h}, Zn converges in distribution to N(Ch, C).
Suppose {Q_{n,h}, h ∈ R^k} is asymptotically normal. Show that Q_{n,h1} and Q_{n,h2} are mutually contiguous for any h1 and h2.
Suppose {Q_{n,h}, h ∈ R^k} is asymptotically normal according to Definition 13.4.1, with Zn and C satisfying (13.62). Show the matrix C is uniquely determined. Moreover, if Z̃n is any other sequence also satisfying (13.62), then Zn − Z̃n → 0 in Q_{n,h}-probability for any h.
Define appropriate extensions of the definitions of LAUMP and AUMP to two-sided testing of a real parameter. Let X1,...,Xn be i.i.d. N(θ, 1). Show that neither LAUMP nor AUMP tests exist for testing θ = 0 against θ ≠ 0.
Suppose X1,...,Xn are i.i.d. N(θ, 1 + θ^2). Consider testing θ = θ0 versus θ > θ0 and let φn be the test that rejects when n^{1/2}[X̄n − θ0] > z_{1−α}(1 + θ0^2)^{1/2}. (i) Compute the limiting power of this test against θ0 + hn^{-1/2}. (ii) Is this test AUMP?
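A hedged Monte Carlo sketch (not from the text) for approximating the power in part (i) against the local alternative θ0 + h n^{-1/2}; the values of θ0, h, n, and α are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def power_mc(theta0, h, n, alpha=0.05, reps=20_000, seed=1):
    """Monte Carlo power of the test rejecting when
    n^{1/2}(Xbar - theta0) > z_{1-alpha} (1 + theta0^2)^{1/2},
    under X_i ~ N(theta, 1 + theta^2) with theta = theta0 + h / sqrt(n)."""
    rng = np.random.default_rng(seed)
    theta = theta0 + h / np.sqrt(n)
    z = norm.ppf(1 - alpha)
    xbar = rng.normal(theta, np.sqrt(1 + theta**2), size=(reps, n)).mean(axis=1)
    reject = np.sqrt(n) * (xbar - theta0) > z * np.sqrt(1 + theta0**2)
    return reject.mean()

# e.g. theta0 = 0, h = 2: compare with the analytic limit derived in part (i)
print(power_mc(theta0=0.0, h=2.0, n=400))
```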
Suppose X1,...,Xn are i.i.d. Poisson(λ). Consider testing the null hypothesis H0: λ = λ0 versus the alternative HA: λ > λ0. (i) Consider the test φ^1_n with rejection region n^{1/2}[X̄n − λ0] > z_{1−α} λ0^{1/2}, where Φ(zα) = α and Φ is the cdf of a standard normal random variable. Find the
Assume the conditions of Example 13.3.1. Further assume f is strongly unimodal, i.e., − log(f) is convex. Show the test φ̃n given by (13.43) is AUMP level α. Hint: Use Problem 13.35.
Assume the conditions of Theorem 13.3.3. Assume φn is LAUMP level α. Suppose the power function of φn is nondecreasing in θ, for θ ≥ θ0. Show φn is also AUMP level α.
Let X1,...,Xn be i.i.d. according to a q.m.d. location model f(x − θ). Let θ̂n be any location equivariant estimator satisfying (13.58) (such as an efficient likelihood estimator). For testing θ ≤ 0 against θ > 0, show that the test that rejects when n^{1/2} θ̂n > I^{-1/2}(0) z_{1−α} is AUMP.
For the Cauchy location model of Example 13.3.3, consider the estimator θ̂n defined by (13.59). Show that the test that rejects when n^{1/2} θ̂n > 2^{1/2} z_{1−α} is AUMP. Is the estimator location equivariant? Is the estimator θ̂n = θ̂n(X1,...,Xn) monotone in the sense it is nondecreasing as any
In the double exponential location model of Example 13.3.2, show that an MLE is the sample median θ̂n. The test that rejects the null hypothesis if n^{1/2} θ̂n > z_{1−α} is AUMP and is asymptotically equivalent to Rao's score test in the sense of Problem 13.24.
Suppose Zn is any sequence of random variables such that Var_{θn}(Zn) ≤ 1 while E_{θn}(Zn) → ∞. Here, θn merely indicates the distribution of Zn at time n. Show that, under θn, Zn → ∞ in probability.
For testing θ0 versus θn, let φ*n be a test satisfying lim sup_n E_{θ0}(φ*n) = α* < α and E_{θn}(φ*n) → β*. (i) Show there exists a test sequence ψn satisfying lim sup_n E_{θ0}(ψn) = α and a number β such that lim E_{θn}(ψn) = β ≥ β*, and this last inequality is strict
Prove the equivalence of Definition 13.3.2 and the definition in the statement immediately following Definition 13.3.2. What is an equivalent characterization for LAUMP tests?
Prove Theorem 13.3.1.
Prove Lemma 13.3.1 (iii). Hint: Problems 13.12-13.13.
Let X1,...,Xn be i.i.d. N(θ, 1). For testing θ = 0 against θ > 0, let φn be the UMP level α test. Let φ̃n be the test which rejects if X̄n ≥ bn/n^{1/2} or X̄n ≤ −an/n^{1/2}, where bn = z_{1−α} + n^{-1/4} and an is then determined to meet the level constraint. Are the tests asymptotically
for testing θ0 against θ0 + hn^{-1/2}.
Under the q.m.d. assumptions of this section, show that φ_{n,h} given by (13.34) and φ̃n given by (13.43) are asymptotically equivalent in the sense of
For testing θ = θ0 versus θ > θ0, define two test sequences φn and ψn to be asymptotically equivalent under the null hypothesis if φn − ψn → 0 in probability under θ0. Does this imply that, if θ0 is the true value, the probability the tests reach the same conclusion tends to 1? Show that,
Suppose X1,...,Xn are i.i.d. N(0, σ^2). Let T_{n,1} = Ȳn = n^{-1} Σ_{i=1}^n Yi, where Yi = Xi^2. Also, let T_{n,2} = (2n)^{-1} Σ_{i=1}^n (Yi − Ȳn)^2. For testing σ = 1 versus σ > 1, does the Pitman asymptotic relative efficiency of T_{n,1} with respect to T_{n,2} exist? If so, find it.
Suppose X1,...,Xn are i.i.d. Poisson with unknown mean θ. The problem is to test θ = θ0 versus θ > θ0. Consider the test that rejects for large X̄n and the test that rejects for large S^2_n = (n − 1)^{-1} Σ_{i=1}^n (Xi − X̄n)^2. Compute the Pitman ARE.
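A hedged Monte Carlo sketch (not from the text): each test is calibrated by simulation under θ0 and its power estimated at a common local alternative, which gives a numerical feel for the Pitman ARE the problem asks for; θ0, h, n, and α are illustrative choices.

```python
import numpy as np

def mc_power(statistic, theta0, h, n, alpha=0.05, reps=20_000, seed=2):
    """Calibrate the critical value of `statistic` by simulation under theta0,
    then estimate its power at the local alternative theta0 + h / sqrt(n)."""
    rng = np.random.default_rng(seed)
    null = np.array([statistic(rng.poisson(theta0, n)) for _ in range(reps)])
    crit = np.quantile(null, 1 - alpha)
    theta1 = theta0 + h / np.sqrt(n)
    alt = np.array([statistic(rng.poisson(theta1, n)) for _ in range(reps)])
    return (alt > crit).mean()

xbar = lambda x: x.mean()          # test rejecting for large sample mean
s2 = lambda x: x.var(ddof=1)       # test rejecting for large sample variance

# Powers at the same local alternative hint at the Pitman ARE:
# the less powerful statistic needs proportionally more observations.
print(mc_power(xbar, theta0=1.0, h=2.0, n=200),
      mc_power(s2,   theta0=1.0, h=2.0, n=200))
```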
Prove the inequality (13.30). Hint: The quantity (13.29) is invariant with respect to scale. By taking σ^2 = 1, the problem reduces to choosing f to minimize ∫ f^2 subject to f being a mean 0 density with variance 1. Using the method of undetermined multipliers, it is sufficient to minimize [f
For a double exponential location family, calculate the Pitman AREs among pairwise comparisons of the t-test, the Wilcoxon test, and the Sign test.
Suppose Ω0 = {θ0}. In order to determine c = c(n, α) in (13.32), define c(n, α) to be c(n, α) = inf{d : P_{θ0}{Tn > d} ≤ α}. Argue that this choice of c(n, α) satisfies (13.32). What if Tn > d is replaced by Tn ≥ d?
Under the assumptions of Example 13.2.1, show that the squared efficacy of the Wald test is I(θ0).
Under the assumptions of Theorem 13.2.1, suppose θk → θ0 and β > α > 0. Show, for any N < ∞, there does not exist a test φk with k ≤ N such that lim inf_k E_{θk}(φk) ≥ β.
Let f(x) be the triangular density on [−1, 1] defined by f(x) = (1 − |x|) I{x ∈ [−1, 1]}. Let Pθ be the distribution with density f(x − θ). Find the asymptotic behavior of H(P_{θ0}, P_{θ0+h}) as h → 0, where H is the Hellinger distance. Compare your result with q.m.d. families.
Let Pn and Qn be two sequences of probability measures defined on (Ωn, Fn). Assume they are contiguous. Assume further that both of them are product measures, i.e. Pn = Π_{i=1}^n P_{n,i} and Qn = Π_{i=1}^n Q_{n,i}. Let ‖Q − P‖_1 denote the total variation distance between P and Q. Show that sup_n Σ_{i=1}^n Q_{n,i}
Give an example where ‖Qn − Pn‖_1 → δ > 0 but Pn and Qn are mutually contiguous.
Use Problem 13.11 to prove Theorem 12.4.1 when hn → h.
to show that Theorem 12.2.3(i) remains valid if h is replaced by hn as long as hn falls in a bounded subset of R^k. Then, show that, for any c > 0, the supremum over h such that |h| ≤ c of the left side of (12.13) tends to 0 in probability under θ0. Also, show part (ii) of Theorem 12.2.3
Use
For a q.m.d. family, show nH^2(P_{θ0 + hn^{-1/2}}, P_{θ0 + hn n^{-1/2}}) → 0 whenever hn → h. Then, show P^n_{θ0 + hn n^{-1/2}} is contiguous to P^n_{θ0} whenever hn → h.
Suppose ‖Pn − Qn‖_1 → 0. Show that Pn and Qn are mutually contiguous. Furthermore, show that, for any sequence of test functions φn, ∫ φn dPn − ∫ φn dQn → 0.
Let X1,...,Xn be i.i.d. according to a model {Pθ, θ ∈ Ω}, where θ is real-valued. Consider testing θ = θ0 versus θ = θn at level α (α fixed, 0 < α < 1) ... > 0.
If I(θ0) is a positive definite Information matrix, show h = 0 if and only if ⟨h, I(θ0)h⟩ = 0.
Let Pθ be N(θ, 1). Fix h and let θn = hn^{-1/2}. Compute S(P^n_0, P^n_{θn}) and its limiting value. Compare your result with the upper bound obtained from Theorem 13.1.3.
Consider testing P^n_{θ0} versus P^n_{θn} and assume nH^2(P_{θ0}, P_{θn}) → 0. Let φn be any test sequence such that lim sup E_{θ0}(φn) ≤ α. Show that lim sup E_{θn}(φn) ≤ α.
Prove Lemma 13.1.1.
Let Pθ be uniform on [0, θ]. Let θn = θ0 + h/n. Calculate the limit of nH^2(P_{θ0}, P_{θ0+h/n}). If h > 0, let φn be the UMP level α test which rejects when the maximum order statistic is too large. Evaluate the limit of the power of φn against the alternative θn.
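A minimal simulation sketch (not from the text) of the power computation: taking the critical value θ0(1 − α)^{1/n} makes the max-based test exact level α under Uniform[0, θ0], and the alternative is θn = θ0 + h/n; the numerical values of θ0, h, and α are illustrative.

```python
import numpy as np

def power_max_test(theta0, h, n, alpha=0.05, reps=100_000, seed=3):
    """Power of the test rejecting when max(X_1,...,X_n) > c, where
    c = theta0 * (1 - alpha)^(1/n) gives exact level alpha under Uniform[0, theta0].
    The alternative is theta_n = theta0 + h/n; the max of n Uniform[0, theta_n]
    draws is simulated directly as theta_n * U^(1/n) with U ~ Uniform(0, 1)."""
    rng = np.random.default_rng(seed)
    c = theta0 * (1 - alpha) ** (1.0 / n)
    theta_n = theta0 + h / n
    maxima = theta_n * rng.uniform(size=reps) ** (1.0 / n)
    return (maxima > c).mean()

# the power settles to a limit strictly between alpha and 1 as n grows,
# in line with the nH^2 calculation the problem asks for
for n in (20, 200, 2000):
    print(n, power_max_test(theta0=1.0, h=1.0, n=n))
```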
(i) Suppose X is a random variable taking values in a sample space S with probability law P. Let ω0 and ω1 be disjoint families of probability laws. Assume that, for every Q ∈ ω1 and any ε > 0, there exists a subset A of S (which may depend on ε) such that Q(A) ≥ 1 − ε and such that, if X
Show that ‖P1 − P0‖_1 can also be computed as 2 sup_B |P1(B) − P0(B)|, where the supremum is over all measurable sets B. In addition, it may be computed as sup_{φ: |φ| ≤ 1} |∫ φ(x) dP1(x) − ∫ φ(x) dP0(x)|, where the supremum is over all measurable functions φ such that sup_x |φ(x)| ≤ 1.
(i) Let Pi have density pi with respect to a dominating measure µ. Show that ‖P1 − P0‖_1 defined by ∫ |p1 − p0| dµ is independent of the choice of µ and is a metric. (ii) Show the Hellinger distance defined in (13.12) is also independent of µ and is a metric.
Let (Xj,1, Xj,2), j = 1,...,n be independent pairs of independent exponentially distributed random variables with E(Xj,1) = θλj and E(Xj,2) = λj . Here, θ and the λj are all unknown. The problem is to test θ = 1 against θ > 1. Compare the Rao, Wald, and likelihood ratio tests for this
Suppose X1,...,Xn are i.i.d. N(θ, 1). Consider Hodges' superefficient estimator of θ (unpublished, but cited in Le Cam (1953)), defined as follows. Let θ̂n be 0 if |X̄n| ≤ n^{-1/4}; otherwise, let θ̂n = X̄n. For any fixed θ, determine the limiting distribution of n^{1/2}(θ̂n − θ).
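A minimal simulation sketch (not from the text) of the limiting behavior the problem asks about; the sample size and values of θ are illustrative assumptions.

```python
import numpy as np

def hodges_scaled(theta, n, reps=100_000, seed=4):
    """Samples of n^{1/2}(theta_hat_n - theta) for Hodges' estimator:
    theta_hat_n = 0 if |Xbar_n| <= n^{-1/4}, else theta_hat_n = Xbar_n.
    Xbar_n is drawn directly from its exact N(theta, 1/n) distribution."""
    rng = np.random.default_rng(seed)
    xbar = rng.normal(theta, 1.0 / np.sqrt(n), size=reps)
    theta_hat = np.where(np.abs(xbar) <= n ** (-0.25), 0.0, xbar)
    return np.sqrt(n) * (theta_hat - theta)

# At theta = 0 the estimator collapses to 0 with probability tending to one,
# so the scaled error piles up at 0; at theta != 0 it behaves like Xbar_n
# and the scaled error has variance close to 1.
for theta in (0.0, 0.5):
    z = hodges_scaled(theta, n=10_000)
    print(theta, z.mean(), z.var())
```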
Consider the third of the three sampling schemes for a 2×2×K table discussed in Section 4.8, and the two hypotheses H1: ∆1 = ··· = ∆K = 1 and H2: ∆1 = ··· = ∆K. (i) Obtain the likelihood-ratio test statistic for testing H1. (ii) Obtain equations that determine the maximum likelihood
Showing problems 1300-1400 of 5757.