Testing Statistical Hypotheses, 3rd Edition, by Erich L. Lehmann and Joseph P. Romano - Solutions
Let $(Y_i, Z_i)$ be i.i.d. bivariate random vectors in the plane, with both $Y_i$ and $Z_i$ assumed to have finite nonzero variances. Let $\mu_Y = E(Y_1)$ and $\mu_Z = E(Z_1)$, let $\rho$ denote the correlation between $Y_1$ and $Z_1$, and let $\hat\rho_n$ denote the sample correlation, as defined in (11.29). (i) Under the assumption …
Generalize the previous problem to the two-sample t-test.
(i) Let $X_1, \dots, X_n$ be a sample from $N(\xi, \sigma^2)$. For testing $\xi = 0$ against $\xi > 0$, show that the power of the one-sided one-sample $t$-test against a sequence of alternatives $N(\xi_n, \sigma^2)$ for which $n^{1/2}\xi_n/\sigma \to \delta$ tends to $1 - \Phi(z_{1-\alpha} - \delta)$. (ii) The result of (i) remains valid if $X_1, \dots, X_n$ are a …
Show that $X_n \to X$ in probability is equivalent to the statement that, for any subsequence $X_{n_j}$, there exists a further subsequence $X_{n_{j_k}}$ such that $X_{n_{j_k}} \to X$ with probability one.
Section 11.3
(i) If $X_1, \dots, X_n$ are i.i.d. with c.d.f. $F$ and empirical distribution $\hat F_n$, use Theorem 11.2.18 to show that $n^{1/2} \sup_t |\hat F_n(t) - F(t)|$ is a tight sequence. (ii) Let $F_n$ be any sequence of distributions, and let $\hat F_n$ be the empirical distribution based on a sample of size $n$ from $F_n$. Show that $n^{1/2}$ …
Show how Theorem 11.2.18 implies Theorem 11.2.17. Hint: Use the Borel-Cantelli Lemma; see Billingsley (1995, Theorem 4.3).
Consider the uniform confidence band $R_{n,1-\alpha}$ for $F$ given by (11.36). Let $\mathcal F$ be the set of all distributions on $\mathbb R$. Show that $\inf_{F \in \mathcal F} P_F\{F \in R_{n,1-\alpha}\} \ge 1 - \alpha$.
Let $U_1, \dots, U_n$ be i.i.d. with c.d.f. $G(u) = u$ and let $\hat G_n$ denote the empirical c.d.f. of $U_1, \dots, U_n$. Define $B_n(u) = n^{1/2}[\hat G_n(u) - u]$. (Note that $B_n(\cdot)$ is a random function, called the uniform empirical process.) (i) Show that the distribution of the Kolmogorov-Smirnov test statistic $n^{1/2} d_K(\hat G_n,$ …
For a c.d.f. $F$, define the quantile transformation $Q$ by $Q(u) = \inf\{t : F(t) \ge u\}$. (i) Show the event $\{F(t) \ge u\}$ is the same as $\{Q(u) \le t\}$. (ii) If $U$ is uniformly distributed on $(0, 1)$, show the distribution of $Q(U)$ is $F$.
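For reference, a sketch of one standard argument (not necessarily the route intended in the text): if $F(t) \ge u$, then $t$ belongs to the set whose infimum defines $Q(u)$, so $Q(u) \le t$; conversely, if $Q(u) \le t$, the right-continuity of $F$ gives $F(t) \ge u$. Hence, for $U$ uniform on $(0,1)$,
$$P\{Q(U) \le t\} = P\{U \le F(t)\} = F(t),$$
which is (ii).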
Suppose $X_n$ is a tight sequence and $Y_n \xrightarrow{P} 0$. Show that $X_n Y_n \xrightarrow{P} 0$. If it is assumed $Y_n \to 0$ almost surely, can you conclude $X_n Y_n \to 0$ almost surely?
Let $X_1, \dots, X_n$ be i.i.d. $P$ on $S$. Suppose $S$ is countable and let $\mathcal E$ be the collection of all subsets of $S$. Let $\hat P_n$ be the empirical measure; that is, for any $E \in \mathcal E$, $\hat P_n(E)$ is the proportion of observations $X_i$ that fall in $E$. Prove, with probability one, $\sup_{E \in \mathcal E} |\hat P_n(E) - P(E)| \to 0$.
Prove the Glivenko-Cantelli Theorem. Hint: Use the Strong Law of Large Numbers and the monotonicity of F.
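A sketch of how the hint assembles into a proof: the Strong Law gives $\hat F_n(t) \to F(t)$ and $\hat F_n(t^-) \to F(t^-)$ almost surely at each fixed $t$. Given $\epsilon > 0$, pick finitely many points $t_1 \le \cdots \le t_m$ whose $F$-values (and left limits) partition $[0,1]$ into gaps of length at most $\epsilon$. For $t$ between consecutive grid points, monotonicity of $F$ and $\hat F_n$ sandwiches $\hat F_n(t) - F(t)$ between the errors at the neighboring grid points plus $\epsilon$, so
$$\limsup_n \sup_t |\hat F_n(t) - F(t)| \le \epsilon \quad \text{a.s.},$$
and letting $\epsilon \downarrow 0$ along a countable sequence completes the argument.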
Let $X_{i,j}$, $1 \le i \le I$, $1 \le j \le n$, be independent, with $X_{i,j}$ Poisson with mean $\lambda_i$. The problem is to test the null hypothesis that the $\lambda_i$ are all the same versus they are not all the same. Consider the test that rejects the null hypothesis iff $T \equiv n \sum_{i=1}^{I} (\bar X_i - \bar X)^2 / \bar X$ is large, where …
Let $X_1, \dots, X_n$ be a random sample from the Poisson distribution with unknown mean $\lambda$. The uniformly minimum variance unbiased estimator (UMVUE) of $\exp(-\lambda)$ is known to be $[(n-1)/n]^{T_n}$, where $T_n = \sum_{i=1}^n X_i$. Find the asymptotic distribution of the UMVUE (appropriately normalized). Hint: It may …
Let $X_1, \dots, X_n$ be i.i.d. Poisson with mean $\lambda$. Consider estimating $g(\lambda) = e^{-\lambda}$ by the estimator $T_n = e^{-\bar X_n}$. Find an approximation to the bias of $T_n$; specifically, find a function $b(\lambda)$ satisfying $E_\lambda(T_n) = g(\lambda) + n^{-1} b(\lambda) + O(n^{-2})$ as $n \to \infty$. Such an expression suggests a new …
Suppose $X_{i,j}$ are independently distributed as $N(\mu_i, \sigma_i^2)$; $i = 1, \dots, s$; $j = 1, \dots, n_i$. Let $S_{n,i}^2 = \sum_j (X_{i,j} - \bar X_i)^2$, where $\bar X_i = n_i^{-1} \sum_j X_{i,j}$. Let $Z_{n,i} = \log[S_{n,i}^2/(n_i - 1)]$. Show that, as $n_i \to \infty$, $\sqrt{n_i - 1}\,[Z_{n,i} - \log(\sigma_i^2)] \xrightarrow{d} N(0, 2)$. Thus, for large $n_i$, the problem of … to suggest a test.
(i) If $X_1, \dots, X_n$ is a sample from a Poisson distribution with mean $E(X_i) = \lambda$, then $\sqrt n(\sqrt{\bar X} - \sqrt{\lambda})$ tends in law to $N(0, \frac14)$ as $n \to \infty$. (ii) If $X$ has the binomial distribution $b(p, n)$, then $\sqrt n[\arcsin\sqrt{X/n} - \arcsin\sqrt{p}]$ tends in law to $N(0, \frac14)$ as $n \to \infty$. Note. Certain refinements of …
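For part (i), a worked delta-method computation (a sketch under the stated Poisson assumption): the central limit theorem gives $\sqrt n(\bar X - \lambda) \xrightarrow{d} N(0, \lambda)$, and applying the delta method with $g(x) = \sqrt x$, so that $g'(\lambda) = 1/(2\sqrt\lambda)$,
$$\sqrt n\,(\sqrt{\bar X} - \sqrt{\lambda}) \xrightarrow{d} N\bigl(0,\ [g'(\lambda)]^2\,\lambda\bigr) = N\bigl(0, \tfrac14\bigr).$$
The limiting variance is free of $\lambda$, which is why $\sqrt x$ (like $\arcsin\sqrt x$ in part (ii)) is called a variance-stabilizing transformation.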
Assume $(U_i, V_i)$ is bivariate normal with correlation $\rho$. Let $\hat\rho_n$ denote the sample correlation given by (11.29). Verify the limit result (11.31).
Use … to prove (11.28).
Suppose $R$ is a real-valued function on $\mathbb R^k$ with $R(y) = o(|y|^p)$ as $|y| \to 0$, for some $p > 0$. If $Y_n$ is a sequence of random vectors satisfying $|Y_n| = o_P(1)$, then show $R(Y_n) = o_P(|Y_n|^p)$. Hint: Let $g(y) = R(y)/|y|^p$ with $g(0) = 0$, so that $g$ is continuous at $0$; apply the Continuous Mapping Theorem.
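The hint essentially contains the whole argument; spelled out: $g$ is continuous at $0$ because $R(y) = o(|y|^p)$, and $Y_n \xrightarrow{P} 0$, so the Continuous Mapping Theorem yields $g(Y_n) \xrightarrow{P} g(0) = 0$. Therefore
$$R(Y_n) = g(Y_n)\,|Y_n|^p = o_P(1) \cdot |Y_n|^p = o_P(|Y_n|^p).$$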
Prove part (ii) of Theorem 11.2.14.
Let $X_1, \dots, X_n$ be i.i.d. normal with mean $\theta$ and variance 1. Suppose $\hat\theta_n$ is a location equivariant sequence of estimators such that, for every fixed $\theta$, $n^{1/2}(\hat\theta_n - \theta)$ converges in distribution to the standard normal distribution (if $\theta$ is true). Let $\bar X_n$ be the usual sample mean. Show that, …
Suppose $X_n \xrightarrow{d} N(\mu, \sigma^2)$. (i) Show that, for any sequence of numbers $c_n$, $P(X_n = c_n) \to 0$. (ii) If $c_n$ is any sequence such that $P(X_n > c_n) \to \alpha$, then $c_n \to \mu + \sigma z_{1-\alpha}$, where $z_{1-\alpha}$ is the $1-\alpha$ quantile of $N(0, 1)$.
Suppose $P_n$ is a sequence of probabilities and $X_n$ is a sequence of real-valued random variables; the distribution of $X_n$ under $P_n$ is denoted $\mathcal L(X_n \mid P_n)$. Prove that $\mathcal L(X_n \mid P_n)$ is tight if and only if $X_n/a_n \to 0$ in $P_n$-probability for every sequence $a_n \uparrow \infty$.
Show that tightness of a sequence of random vectors in $\mathbb R^k$ is equivalent to each of the component variables being tight in $\mathbb R$.
Show how the interval (11.25) is obtained from (11.24).
In Example 11.2.7, let $I_n$ be the interval (11.23). Show that, for any $n$, $\inf_p P_p\{p \in I_n\} = 0$. Hint: Consider $p$ positive but small enough so that the chance that a sample of size $n$ results in 0 successes is nearly 1.
In Example 11.2.5, show that $\beta_n(p_n) \to 1$ if $n^{1/2}(p_n - 1/2) \to \infty$ and $\beta_n(p_n) \to \alpha$ if $n^{1/2}(p_n - 1/2) \to 0$.
(i) Prove Corollary 11.2.3. (ii) Suppose $X_n \xrightarrow{d} X$ and $C_n \xrightarrow{P} \infty$. Show $P\{X_n \le C_n\} \to 1$.
If $X_n$ is a sequence of real-valued random variables, prove that $X_n \to 0$ in $P_n$-probability if and only if $E_{P_n}[\min(|X_n|, 1)] \to 0$.
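Both directions follow from two elementary bounds, valid for any $\epsilon \in (0, 1)$:
$$E_{P_n}[\min(|X_n|, 1)] \le \epsilon + P_n\{|X_n| > \epsilon\} \quad\text{and}\quad P_n\{|X_n| > \epsilon\} \le \frac{E_{P_n}[\min(|X_n|, 1)]}{\epsilon},$$
the first by splitting the expectation over $\{|X_n| \le \epsilon\}$ and its complement, the second by Markov's inequality applied to $\min(|X_n|, 1)$.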
As in Example 11.2.4, consider the problem of testing $P = P_0$ versus $P = P_1$ based on $n$ i.i.d. observations. The problem is an alternative way to show that a most powerful level $\alpha$ ($0 < \alpha < 1$) …
(i) Let $K(P_0, P_1)$ be the Kullback-Leibler Information, defined in (11.21). Show that $K(P_0, P_1) \ge 0$, with equality iff $P_0 = P_1$. (ii) Show the convergence (11.20) holds even when $K(P_0, P_1) = \infty$. Hint: Use Problem 11.32.
Suppose $X_1, \dots, X_n$ are i.i.d. real-valued random variables. Write $X_i = X_i^+ - X_i^-$, where $X_i^+ = \max(X_i, 0)$. Suppose $X_i^-$ has a finite mean, but $X_i^+$ does not. Let $\bar X_n$ be the sample mean. Show $\bar X_n \xrightarrow{P} \infty$. Hint: For $B > 0$, let $Y_i = X_i$ if $X_i \le B$ and $Y_i = B$ otherwise; apply the Weak Law to $\bar Y_n$.
Suppose $X_n$ is a sequence of random vectors. (i) Show $X_n \xrightarrow{P} 0$ if and only if $|X_n| \xrightarrow{P} 0$ (where the first zero refers to the zero vector and the second to the real number zero). (ii) Show that convergence in probability of $X_n$ to $X$ is equivalent to convergence in probability of their components to …
Suppose $X_n$ and $X$ are real-valued random variables (defined on a common probability space). Prove that, if $X_n$ converges to $X$ in probability, then $X_n$ converges in distribution to $X$. Show by counterexample that the converse is false. However, show that if $X$ is a constant with probability one, then $X_n$ …
If $X_n \xrightarrow{P} 0$ and $\sup_n E[|X_n|^{1+\delta}] < \infty$ for some $\delta > 0$ (11.90), then show $E[|X_n|] \to 0$. More generally, if the $X_n$ are uniformly integrable in the sense $\sup_n E[|X_n| I\{|X_n| > t\}] \to 0$ as $t \to \infty$, then the conclusion $E[|X_n|] \to 0$ still holds. [A converse is given in Dudley (1989), p. 279.]
Give an example of an i.i.d. sequence of real-valued random variables such that the sample mean converges in probability to a finite constant, yet the mean of the sequence does not exist.
(Chebyshev's Inequality) (i) Show that, for any real-valued random variable $X$ and any constants $a > 0$ and $c$, $E(X - c)^2 \ge a^2 P\{|X - c| \ge a\}$. (ii) Hence, if $X_n$ is any sequence of random variables and $c$ is a constant such that $E(X_n - c)^2 \to 0$, then $X_n \to c$ in probability. Give a …
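For part (i), the standard one-line argument:
$$E(X - c)^2 \ge E\bigl[(X - c)^2\, I\{|X - c| \ge a\}\bigr] \ge a^2\, P\{|X - c| \ge a\};$$
part (ii) then follows by fixing $a > 0$ and letting $E(X_n - c)^2 \to 0$, so that $P\{|X_n - c| \ge a\} \to 0$ for every $a$.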
(Markov's Inequality) Let $X$ be a real-valued random variable with $X \ge 0$. Show that, for any $t > 0$, $P\{X \ge t\} \le E[X I\{X \ge t\}]/t \le E(X)/t$; here $I\{X \ge t\}$ is the indicator variable that is 1 if $X \ge t$ and 0 otherwise.
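A sketch: since $X \ge 0$, the pointwise inequality $X \ge X\, I\{X \ge t\} \ge t\, I\{X \ge t\}$ holds, and taking expectations gives
$$E(X) \ge E[X\, I\{X \ge t\}] \ge t\, P\{X \ge t\},$$
which rearranges to both stated bounds.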
Prove part (ii) of Lemma 11.2.1.
Construct a sequence of distribution functions $\{F_n\}$ on the real line such that $F_n$ converges in distribution to $F$, but the convergence $F_n^{-1}(1-\alpha) \to F^{-1}(1-\alpha)$ fails, even if $F$ is assumed continuous. On the other hand, if $F$ is assumed continuous (but not necessarily strictly …
Suppose $F$ and $G$ are two probability distributions on $\mathbb R^k$. Let $L$ be the set of (measurable) functions $f$ from $\mathbb R^k$ to $\mathbb R$ satisfying $|f(x) - f(y)| \le |x - y|$, where $|\cdot|$ is the usual Euclidean norm. Define the Bounded-Lipschitz Metric as $\lambda(F, G) = \sup\{|E_F f(X) - E_G f(X)| : f \in L\}$. Show that $F_n$ …
Let $F_n$ and $F$ be c.d.f.s on $\mathbb R$. Show that weak convergence of $F_n$ to $F$ is equivalent to $\rho_L(F_n, F) \to 0$, where $\rho_L$ is the Lévy metric.
For cumulative distribution functions $F$ and $G$ on the real line, define the Kolmogorov-Smirnov distance between $F$ and $G$ to be $d_K(F, G) = \sup_x |F(x) - G(x)|$. Show that $d_K(F, G)$ defines a metric on the space of distribution functions; that is, show $d_K(F, G) = d_K(G, F)$, $d_K(F, G) = 0$ implies $F = G$, and …
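Symmetry and the identity-of-indiscernibles property are immediate; the triangle inequality (the part truncated above) follows pointwise: for every $x$,
$$|F(x) - G(x)| \le |F(x) - H(x)| + |H(x) - G(x)| \le d_K(F, H) + d_K(H, G),$$
and taking the supremum over $x$ gives $d_K(F, G) \le d_K(F, H) + d_K(H, G)$.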
Show that $\rho_L(F, G)$ defined in Definition 11.2.3 is a metric; that is, show $\rho_L(F, G) = \rho_L(G, F)$, $\rho_L(F, G) = 0$ if and only if $F = G$, and $\rho_L(F, G) \le \rho_L(F, H) + \rho_L(H, G)$.
Prove Pólya's Theorem 11.2.9. Hint: First consider the case of distributions on the real line.
Suppose $X_1, \dots, X_n$ are i.i.d. real-valued random variables with c.d.f. $F$. Assume there exist $\theta_1 < \theta_2$ such that $F(\theta_1) = 1/4$, $F(\theta_2) = 3/4$, and $F$ is differentiable, with density $f$ taking positive values at $\theta_1$ and $\theta_2$. Show that the sample inter-quartile range (defined as the difference between the .75 quantile …
Let $X_1, \dots, X_n$ be i.i.d. normal with mean $\theta$ and variance 1. Let $\bar X_n$ be the usual sample mean and let $\tilde X_n$ be the sample median. Let $p_n$ be the probability that $\bar X_n$ is closer to $\theta$ than $\tilde X_n$ is. Determine $\lim_{n\to\infty} p_n$.
Generalize Theorem 11.2.8 to the case of the pth sample quantile.
Complete the proof of Theorem 11.2.8 by considering n even.
Let $X_1, \dots, X_n$ be i.i.d. with density $p_0$ or $p_1$, and consider testing the null hypothesis $H$ that $p_0$ is true. The MP level-$\alpha$ test rejects when $\prod_{i=1}^n r(X_i) \ge C_n$, where $r(X_i) = p_1(X_i)/p_0(X_i)$, or equivalently when $\frac{1}{\sqrt n} \sum_{i=1}^n \bigl\{\log r(X_i) - E_0[\log r(X_i)]\bigr\} \ge k_n$ (11.89). (i) Show that, under $H$, the left side …
Suppose $X_{n,1}, \dots, X_{n,n}$ are i.i.d. Bernoulli trials with success probability $p_n$. If $p_n \to p \in (0, 1)$, show that $n^{1/2}[\bar X_n - p_n] \xrightarrow{d} N(0, p(1-p))$. Is the result true even if $p$ is 0 or 1?
Suppose $X_k$ is a noncentral chi-squared variable with $k$ degrees of freedom and noncentrality parameter $\delta_k^2$. Show that $(X_k - k)/(2k)^{1/2} \xrightarrow{d} N(\mu, 1)$ if $\delta_k^2/(2k)^{1/2} \to \mu$ as $k \to \infty$.
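A sketch of the normalization, using the standard moments $E(X_k) = k + \delta_k^2$ and $\mathrm{Var}(X_k) = 2k + 4\delta_k^2$:
$$\frac{X_k - k}{(2k)^{1/2}} = \frac{X_k - E(X_k)}{(2k)^{1/2}} + \frac{\delta_k^2}{(2k)^{1/2}}.$$
The second term tends to $\mu$ by hypothesis; for the first, $\delta_k^2/(2k)^{1/2} \to \mu$ forces $\delta_k^2/k \to 0$, so $\mathrm{Var}(X_k)/(2k) \to 1$, and a central limit theorem for the underlying sum of independent components gives asymptotic standard normality.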
Show that Lyapounov’s Central Limit Theorem (Corollary 11.2.1) follows from the Lindeberg Central Limit Theorem (Theorem 11.2.5).
Show that Theorem 11.2.3 follows from Theorem 11.2.2.
Let $X_n$ have characteristic function $\zeta_n$. Find a counterexample to show that it is not enough to assume $\zeta_n(t)$ converges (pointwise in $t$) to a function $\zeta(t)$ in order to conclude that $X_n$ converges in distribution.
Verify (11.9).
Show that the characteristic function of a sum of independent real-valued random variables is the product of the individual characteristic functions. (The converse is false; counterexamples are given in Romano and Siegel (1986), Examples 4.29-4.30.)
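The computation behind the claim: if $X$ and $Y$ are independent with characteristic functions $\zeta_X$ and $\zeta_Y$, then
$$\zeta_{X+Y}(t) = E\bigl[e^{it(X+Y)}\bigr] = E\bigl[e^{itX}\bigr]\,E\bigl[e^{itY}\bigr] = \zeta_X(t)\,\zeta_Y(t),$$
where the middle step uses independence; induction extends this to any finite number of summands.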
Suppose $X_n \xrightarrow{d} X$. Show that $Ef(X_n)$ need not converge to $Ef(X)$ if $f$ is unbounded and continuous, or if $f$ is bounded but discontinuous.
Prove the equivalence of (i) and (vi) in the Portmanteau Theorem (Theorem 11.2.1).
Show that $x = (x_1, \dots, x_k)^T$ is a continuity point of the distribution $F_X$ of $X$ if the boundary of the set of $(y_1, \dots, y_k)$ such that $y_i \le x_i$ for all $i$ has probability 0 under the distribution of $X$. Show by example that it is not sufficient for $x$ to have probability 0 under $F_X$ in order for $x$ to be a …
Let $X$ be $N(0, 1)$ and $Y = X$. Determine the set of continuity points of the bivariate distribution of $(X, Y)$.
For a univariate c.d.f. F, show that the set of points of discontinuity is countable.
For each $\theta \in \Omega$, let $f_n(\theta)$ be a real-valued sequence. We say $f_n(\theta)$ converges uniformly (in $\theta$) to $f(\theta)$ if $\sup_{\theta \in \Omega} |f_n(\theta) - f(\theta)| \to 0$ as $n \to \infty$. If $\Omega$ is a finite set, show that the pointwise convergence $f_n(\theta) \to f(\theta)$ for each fixed $\theta$ implies uniform convergence. However, show …
The nonexistence of (i) semirelevant subsets in Example 10.4.1 and (ii) relevant subsets in Example 10.4.2 extends to randomized conditioning procedures.
Instead of conditioning the confidence sets $\theta \in S(X)$ on a set $C$, consider a randomized procedure which assigns to each point $x$ a probability $\psi(x)$ and makes the confidence statement $\theta \in S(x)$ with probability $\psi(x)$ when $x$ is observed. (i) The randomized procedure can be represented by a …
Suppose $X_1$ and $X_2$ are i.i.d. with $P\{X_i = \theta - 1\} = P\{X_i = \theta + 1\} = \frac12$. Let $C$ be the confidence set consisting of the single point $(X_1 + X_2)/2$ if $X_1 \neq X_2$, and $X_1 - 1$ if $X_1 = X_2$. Show that, for all $\theta$, $P_\theta\{\theta \in C\} = .75$, but $P_\theta\{\theta \in C \mid X_1 = X_2\} = .5$ and $P_\theta\{\theta \in C \mid X_1 \neq X_2\} = 1$. [Berger and …
(i) Under the assumptions of the preceding problem, the uniformly most accurate unbiased (or invariant) confidence intervals for $\theta$ at confidence level $1 - \alpha$ are $\underline\theta = \max(X_{(1)} + d, X_{(n)}) - 1$ … (ii) The sets $C_1 : X_{(n)} - X_{(1)} > d$ and $C_2 : X_{(n)} - X_{(1)} < 2d - 1$ are relevant subsets with …
Let $X_1, \dots, X_n$ be independently distributed according to the uniform distribution $U(\theta, \theta + 1)$. (i) Uniformly most accurate lower confidence bounds $\underline\theta$ for $\theta$ at confidence level $1 - \alpha$ exist and are given by $\underline\theta = \max(X_{(1)} - k, X_{(n)} - 1)$, where $X_{(1)} = \min(X_1, \dots, X_n)$, $X_{(n)} = \max(X_1, \dots, X_n)$, and …
Let $X$ have probability density $f(x - \theta)$, and suppose that $E|X| < \infty$. For the confidence intervals $X - c$ …
Let $X$ be a random variable with cumulative distribution function $F$. If $E|X| < \infty$, then $\int_{-\infty}^0 F(x)\,dx$ and $\int_0^\infty [1 - F(x)]\,dx$ are both finite. [Apply integration by parts to the two integrals.]
(i) Verify the posterior distribution of $\Theta$ given $x$ claimed in Example 10.4.1. (ii) Complete the proof of (10.32).
In Example 10.4.1, check directly that the set $C = \{x : x \le -k \text{ or } x \ge k\}$ is not a negatively biased semirelevant subset for the confidence intervals $(X - c, X + c)$.
In Example 10.3.3, (i) the problem remains invariant under $G$ but not under $G$; (ii) the statistic $D$ is ancillary.
Section 10.4
Let $V_1, \dots, V_n$ be independently distributed as $N(0, 1)$, and given $V_1 = v_1, \dots, V_n = v_n$, let $X_i$ ($i = 1, \dots, n$) be independently distributed as $N(\theta v_i, 1)$. (i) There does not exist a UMP test of $H : \theta = 0$ against $K : \theta > 0$. (ii) There does exist a UMP conditional test of $H$ against $K$ given the …
Let $X_1, \dots, X_m$ and $Y_1, \dots, Y_n$ be positive, independent random variables distributed with densities $f(x/\sigma)$ and $g(y/\tau)$ respectively. If $f$ and $g$ have monotone likelihood ratios in $(x, \sigma)$ and $(y, \tau)$ respectively, there exists a UMP conditional test of $H : \tau/\sigma \le \Delta_0$ against $\tau/\sigma > \Delta_0$ given …
Let the real-valued function $f$ be defined on an open interval. (i) If $f$ is logconvex, it is convex. (ii) If $f$ is strongly unimodal, it is unimodal.
Verify the density (10.16) of Example 10.3.2.
Suppose $X = (U, Z)$, the density of $X$ factors into $p_{\theta,\vartheta}(x) = c(\theta, \vartheta)\, g_\theta(u; z)\, h_\vartheta(z)\, k(u, z)$, and the parameters $\theta, \vartheta$ are unrelated. To see that these assumptions are not enough to insure that $Z$ is S-ancillary for $\theta$, consider the joint density $C(\theta, \vartheta)\, e^{-\frac12(u-\theta)^2 - \frac12(z-\vartheta)^2}\, I(u,$ …
In the situation of Example 10.2.2, the statistic $Z$ remains S-ancillary when the parameter space is $\Omega = \{(\lambda, \mu) : \mu \le \lambda\}$.
In the situation of Example 10.2.3, $X + Y$ is binomial if and only if $\Delta = 1$.
Assuming the distribution (4.22) of Section 4.9, show that $Z$ is S-ancillary for $p = p_+/(p_+ + p_-)$.
A sample of size $n$ is drawn with replacement from a population consisting of $N$ distinct unknown values $\{a_1, \dots, a_N\}$. The number of distinct values in the sample is ancillary.
Let $X, Y$ have joint density $p(x, y) = 2 f(x) f(y) F(\theta x y)$, where $f$ is a known probability density symmetric about 0, and $F$ its cumulative distribution function. Then (i) $p(x, y)$ is a probability density. (ii) $X$ and $Y$ each have marginal density $f$ and are therefore ancillary, but $(X, Y)$ is not. (iii) $X \cdot Y$ …
Let $X$ be uniformly distributed on $(\theta, \theta + 1)$, $0$ … $v$. [Basu (1964).]
In the preceding problem, suppose the probabilities of $1, \dots, 6$ points are given by $\frac{1-\theta}{6}, \frac{1-2\theta}{6}, \frac{1-3\theta}{6}, \frac{1+\theta}{6}, \frac{1+2\theta}{6}, \frac{1+3\theta}{6}$ respectively. Exhibit two different maximal ancillaries.
Consider $n$ tosses with a biased die, for which the probabilities of $1, \dots, 6$ points are given by $\frac{1-\theta}{12}, \frac{2-\theta}{12}, \frac{3-\theta}{12}, \frac{1+\theta}{12}, \frac{2+\theta}{12}, \frac{3+\theta}{12}$ respectively, and let $X_i$ be the number of tosses showing $i$ points. (i) Show that the triple $Z_1 = X_1 + X_5$, $Z_2 = X_2 + X_4$, $Z_3 = X_3 + X_6$ is a maximal ancillary; …
An experiment with $n$ observations $X_1, \dots, X_n$ is planned, with each $X_i$ distributed as $N(\theta, 1)$. However, some of the observations do not materialize (for example, some of the subjects die, move away, or turn out to be unsuitable). Let $I_j = 1$ or 0 as $X_j$ is observed or not, and suppose the $I_j$ are …
Let $X, Y$ be independently normally distributed as $N(\theta, 1)$, and let $V = Y - X$ and $W = Y - X$ if $X + Y > 0$, $W = X - Y$ if $X + Y \le 0$. (i) Both $V$ and $W$ are ancillary, but neither is a function of the other. (ii) $(V, W)$ is not ancillary. [Basu (1959).]
In the preceding problem, suppose that the densities of $X$ under $E$ and $F$ are $\theta e^{-\theta x}$ and $(1/\theta) e^{-x/\theta}$ respectively. Compare the UMP conditional and unconditional tests of $H : \theta = 1$ against $K : \theta > 1$.
Section 10.2
With known probabilities $p$ and $q$ perform either $E$ or $F$, with $X$ distributed as $N(\theta, 1)$ under $E$ or $N(-\theta, 1)$ under $F$. For testing $H : \theta = 0$ against $\theta > 0$ there exist a UMP unconditional and a UMP conditional level-$\alpha$ test. These coincide and do not depend on the value of $p$.
Let $X_1, \dots, X_n$ be independently distributed, each with probability $p$ or $q$ as $N(\xi, \sigma_0^2)$ or $N(\xi, \sigma_1^2)$. (i) If $p$ is unknown, determine the UMP unbiased test of $H : \xi = 0$ against $K : \xi > 0$. (ii) Determine the most powerful test of $H$ against the alternative $\xi_1$ when it is known that $p = \frac12$, and …
The test given by (10.3), (10.8), and (10.9) is most powerful under the stated assumptions.
Under the assumptions of Problem 10.1, determine the most accurate invariant (under the transformation $X' = -X$) confidence sets $S(X)$ with $P(\xi \in S(X) \mid E) + P(\xi \in S(X) \mid F) = 2\gamma$. Find examples in which the conditional confidence coefficients $\gamma_0$ given $E$ and $\gamma_1$ given $F$ satisfy (i) $\gamma_0 < \gamma_1$; …
Let the experiments $E$ and $F$ consist in observing $X : N(\xi, \sigma_0^2)$ and $X : N(\xi, \sigma_1^2)$ respectively ($\sigma_0 < \sigma_1$), and let one of the two experiments be performed, with $P(E) = P(F) = \frac12$. For testing $H : \xi = 0$ against $\xi = \xi_1$, determine values $\sigma_0$, $\sigma_1$, $\xi_1$, and $\alpha$ such that (i) $\alpha_0 < \alpha_1$; (ii) …
In the regression model of Problem 7.8, generalize the confidence bands of Example 9.5.3 to the regression surfaces (i) $h_1(e_1, \dots, e_s) = \sum_{j=1}^s e_j \beta_j$; (ii) $h_2(e_2, \dots, e_s) = \beta_1 + \sum_{j=2}^s e_j \beta_j$.
In generalization of Problem 9.30, show how to extend the Dunnett intervals of … to the set of all contrasts. [Use the fact that the event $|y_i - y_0| \le \Delta$ for $i = 1, \dots, s$ is equivalent to the event $|\sum_{i=0}^s c_i y_i| \le \Delta \sum_{i=1}^s |c_i|$ for all $(c_0, \dots, c_s)$ satisfying $\sum_{i=0}^s c_i = 0$.] Note. As is pointed out in Problems 9.26(iii) and 9.32, the intervals resulting from the extension …