Testing Statistical Hypotheses Volume I 4th Edition E.L. Lehmann, Joseph P. Romano - Solutions
In Example 12.4.5, show the convergence (12.60).
Consider the setup of Example 12.4.3. (i) Find the joint limiting distribution of $n^{-1}\sum_{i=1}^n (Y_{i,1}, Y_{i,0})$, suitably normalized. (ii) Let $\hat R_n = n^{-1}\sum_{i=1}^n Y_{i,1} \big/ n^{-1}\sum_{i=1}^n Y_{i,0}$, which is the proportion of successes following a success. Show that $\sqrt{n}(\hat R_n - p) \xrightarrow{d} N(0, 1 - p)$.
Generalize Theorem 12.4.1 to the case where the $X_i$ are vector-valued.
Suppose $X$ is a stationary process with mean $\mu$ and covariance function $R(k)$. Assume $R(k) \to 0$ as $k \to \infty$. Show $\bar X_n \xrightarrow{P} \mu$. (A sufficient condition for $R(k) \to 0$ is that $X$ is strongly mixing with $E(|X_1|^{2+\delta}) < \infty$; see Problem 12.33.)
Assume $X$ is stationary, $E(|X_1|^{2+\delta}) < \infty$ for some $\delta > 0$, and (12.56) holds. Show that (12.55) holds, and hence $R(k) \to 0$ as $k \to \infty$.
Verify (12.54).
In Example 12.4.2 with the $\epsilon_j$ having finite variance, derive the formulae for the mean and covariance (12.52) of the process.
Verify (12.48).
Consider a U-statistic of degree 2, based on a kernel $h$. Let $h_1(x) = E[h(x, X_2)]$ and $\zeta_1 = \mathrm{Var}[h_1(X_1)]$. Assume $\zeta_1 > 0$, so that we know that $\sqrt{n}\,[U_n - \theta(P)]$ converges in distribution to the normal distribution with mean 0 and variance $4\zeta_1$. Consider estimating the limiting variance $4\zeta_1$. Since $U_n$
Let $X_1, \ldots, X_n$ be i.i.d. $P$. Consider estimating $\theta(P)$ defined by $\theta(P) = E[h(X_1, \ldots, X_b)]$, where $h$ is a symmetric kernel. Assume $P$ is such that $E[h^2(X_1, \ldots, X_b)] < \infty$, so that $\theta(P)$ is also well-defined. Let $U_n$ be the corresponding U-statistic defined by (12.24). Let $\hat P_n$ be the empirical measure,
Let $X_1, \ldots, X_n$ be i.i.d. $P$. Consider estimating $\theta(P)$ defined by $\theta(P) = E[h(X_1, \ldots, X_b)]$, where $h$ is a symmetric kernel. Assume $P$ is such that $E|h(X_1, \ldots, X_b)| < \infty$, so that $\theta(P)$ is well-defined. Show that $U_n \xrightarrow{P} \theta(P)$. In fact, $E|U_n - \theta(P)| \to 0$. Hint: First show $U_n$ is consistent by
Consider testing the null hypothesis that a sample $X_1, \ldots, X_n$ is i.i.d. against the alternative that the distributions of the $X_i$ are stochastically increasing. Mann (1945) proposed the test which rejects for large values of $N$, where $N$ is the number of pairs $(X_i, X_j)$ with $i < j$ and $X_i < X_j$.
In Example 12.3.7, find $F$ and $G$ so that $\zeta_{0,1}$ and $\zeta_{1,0}$ are not $1/12$, even when $P\{X \le Y\} = 1/2$. Explore how large the rejection probability of the test with rejection region (12.46) can be under $H_0$. What does this imply about a Type 3 or directional error? That is, if the test rejects $H_0$ and
Show that $W_n$ in Example 12.2.1 and $U_{m,n}$ in Example 12.3.7 are related by $W_n = mnU_{m,n} + n(n+1)/2$, at least in the case of no ties in the data.
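A quick numerical check of this identity (a sketch under standard conventions, not necessarily the book's exact definitions: $W_n$ is taken to be the sum of the ranks of the $Y$-sample of size $n$ in the combined sample, and $U_{m,n}$ the proportion of pairs $(X_i, Y_j)$ with $X_i < Y_j$):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 7, 5
x = rng.normal(size=m)                       # first sample X_1, ..., X_m
y = rng.normal(size=n)                       # second sample Y_1, ..., Y_n

combined = np.concatenate([x, y])
ranks = combined.argsort().argsort() + 1     # ranks 1..m+n (continuous data, so no ties)
W_n = ranks[m:].sum()                        # sum of the ranks of the Y-sample

num_pairs = np.sum(x[:, None] < y[None, :])  # #{(i, j) : X_i < Y_j}
U_mn = num_pairs / (m * n)                   # Mann-Whitney statistic, normalized by mn

print(W_n, m * n * U_mn + n * (n + 1) / 2)   # both sides of the claimed identity agree
```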
Verify (12.45).
Show that (12.44) holds if $\hat U_n$ is replaced by $U_n$.
Show (12.43).
Show (12.42).
In Example 12.3.6, show (12.37). Verify the limiting distribution of $V_n$ in (12.38).
Verify (12.35).
Suppose $(X_1, Y_1), \ldots, (X_n, Y_n)$ are i.i.d. $P$, with $E(X_i^2) < \infty$ and $E(Y_i^2) < \infty$. The parameter of interest is $\theta(P) = \mathrm{Cov}(X_i, Y_i)$. Find a kernel for which the corresponding U-statistic $U_n$ is an unbiased estimator of $\theta(P)$. Under an appropriate moment assumption, find the limiting
Verify (12.27).
Prove a Glivenko–Cantelli Theorem (Theorem 11.4.2) for sampling without replacement from a finite population. Specifically, assume $X_1, \ldots, X_n$ are sampled at random without replacement from the population with $N = N_n$ elements given by $\{x_{N,1}, \ldots, x_{N,N}\}$. Let $\hat F_n(t) = n^{-1}\sum_{i=1}^n I\{X_i \le t\}$ and
Prove a result analogous to Theorem 12.2.5 when sampling from an infinite population, where the asymptotic variance has the same form as (12.21) with $f = 0$. Assuming $s^2(1)$ and $s^2(0)$ are known, how would you allocate treatment among the $N_0$ units to minimize the asymptotic variance? (The solution is
Provide the details to show (12.22). Hint: Use Theorem 12.2.4 and Problem 12.12.
Consider the estimator $\hat s_N^2(1)$ defined in (12.20). Show that $\hat s_N^2(1) \xrightarrow{P} s^2(1)$. State your assumptions.
The limiting expression for $N\,\mathrm{Var}(\hat\theta_N)$ is given in (12.19). Find an exact expression for $N\,\mathrm{Var}(\hat\theta_N)$ that has a similar representation.
In the setting of Corollary 12.2.1(ii), find an exact formula for $\mathrm{Cov}(\bar U_n, \bar V_n)$ and then calculate the limit of $n\,\mathrm{Cov}(\bar U_n, \bar V_n)$.
Complete the proof of Corollary 12.2.1(ii) using the Cramér-Wold Device.
In the setting of Section 12.2, assume $N = m + n$ and $(x_{N,1}, \ldots, x_{N,N}) = (y_1, \ldots, y_m, z_1, \ldots, z_n)$. Let $\bar y_m = \sum_{i=1}^m y_i/m$ and $\bar z_n = \sum_{j=1}^n z_j/n$. Let $\bar x_N = \sum_{j=1}^N x_{N,j}/N$. Also let $s^2_{m,y} = \sum_{i=1}^m (y_i - \bar y_m)^2/m$ and similarly define $s^2_{n,z}$. Let $Y_1, \ldots, Y_m$ denote a sample obtained without replacement
In Example 12.2.1, rather than considering the sum of the ranks of the $Y_i$'s, consider the statistic given by the sum of the squared ranks of the $Y_i$'s. Find its limiting distribution, properly normalized, under $F = G$.
In the context of Example 12.2.1, find the limiting distribution of $W_n$ using Theorem 12.2.3. Identify $G_n$ and $G$.
Show that $\tau_N$ defined in the proof of Theorem 12.2.3 satisfies $\tau_N \to \infty$ as $\min(n, N - n) \to \infty$.
Show why Theorem 12.2.1 is a special case of Theorem 12.2.2.
Use Theorem 12.2.1 to prove an asymptotic normal approximation to the hypergeometric distribution.
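Not the requested proof, but a small numerical illustration of the normal approximation in question, using the standard hypergeometric mean and variance (the population sizes below are arbitrary):

```python
import numpy as np
from scipy.stats import hypergeom, norm

# Population of N items, K of them "successes"; draw n without replacement.
N, K, n = 1000, 400, 120
mean = n * K / N
var = n * (K / N) * (1 - K / N) * (N - n) / (N - 1)   # finite-population correction

ks = np.arange(hypergeom.ppf(0.001, N, K, n), hypergeom.ppf(0.999, N, K, n) + 1)
exact = hypergeom.cdf(ks, N, K, n)
approx = norm.cdf((ks + 0.5 - mean) / np.sqrt(var))    # normal c.d.f. with continuity correction
print("max |CDF difference|:", np.abs(exact - approx).max())
```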
Show (12.6) and (12.7).
Show (12.2) and (12.3).
(i) Suppose $X_n \xrightarrow{d} X$ and $\mathrm{Var}(X_n) \to \mathrm{Var}(X) < \infty$. Show $E(X_n) \to E(X)$. (ii) Suppose $(X_n, Y_n) \xrightarrow{d} (X, Y)$ in the plane, with $\mathrm{Var}(X_n) \to \mathrm{Var}(X) < \infty$ and $\mathrm{Var}(Y_n) \to \mathrm{Var}(Y) < \infty$. Show that $\mathrm{Cov}(X_n, Y_n) \to \mathrm{Cov}(X, Y)$.
Assume $X_1, \ldots, X_n$ are i.i.d. with $E(|X_i|^p) < \infty$. Then, show that $n^{-1/p} \max_{1 \le i \le n} |X_i| \xrightarrow{P} 0$.
If $X_n \xrightarrow{d} X$ and $\{X_n\}$ is asymptotically uniformly integrable, show that for any $0 < p < 1$, $E(X_n^p) \to E(X^p)$.
(i) Show that $\{X_n\}$ is uniformly integrable if and only if $\sup_n E|X_n| < \infty$ and $\sup_n E[|X_n| I_A] = \sup_n \int_A |X_n(\omega)|\, dP(\omega) \to 0$ as $P\{A\} \to 0$. (ii) Suppose $X_1, \ldots, X_n$ are i.i.d. with finite mean $\mu$. Show that $\bar X_n$ is uniformly integrable and hence $E|\bar X_n - \mu| \to 0$. (The fact that $\bar X_n$ is uniformly
If $X_n \xrightarrow{P} 0$ and $\sup_n E[|X_n|^{1+\delta}] < \infty$ for some $\delta > 0$ (11.42), then show $E[|X_n|] \to 0$. (More generally, if the $X_n$ are uniformly integrable in the sense $\sup_n E[|X_n| I\{|X_n| > t\}] \to 0$ as $t \to \infty$, then $E[|X_n|] \to 0$. A converse is given in Dudley (1989), p. 279.)
(i) Show that if $\{X_n\}$ is uniformly integrable, then $\{X_n\}$ is asymptotically uniformly integrable, but the converse is false. (ii) Show that a sufficient condition for $\{X_n\}$ to be uniformly integrable is that, for some $\delta > 0$, $\sup_n E(|X_n|^{1+\delta}) < \infty$.
(i) Suppose random variables $X_n$, $Y_n$ and a random vector $W_n$ are such that, given $W_n$, $X_n$ and $Y_n$ are conditionally independent. Assume, for nonnegative constants $\sigma_X$ and $\sigma_Y$, and for all $z$, $P\{X_n \le z \mid W_n\} \xrightarrow{P} \Phi(z/\sigma_X)$ and $P\{Y_n \le z \mid W_n\} \xrightarrow{P} \Phi(z/\sigma_Y)$. Show that $P\{X_n + Y_n \le z \mid W_n\} \xrightarrow{P} \Phi\bigl(z/\sqrt{\sigma_X^2 + \sigma_Y^2}\,\bigr)$.
Show that $X_n \to X$ in probability is equivalent to the statement that, for any subsequence $X_{n_j}$, there exists a further subsequence $X_{n_{j_k}}$ such that $X_{n_{j_k}} \to X$ with probability one.
(i) If $X_1, \ldots, X_n$ are i.i.d. with c.d.f. $F$ and empirical distribution $\hat F_n$, use Theorem 11.4.3 to show that $n^{1/2} \sup_t |\hat F_n(t) - F(t)|$ is a tight sequence. (ii) Let $F_n$ be any sequence of distributions, and let $\hat F_n$ be the empirical distribution based on a sample of size $n$ from $F_n$. Show that $n^{1/2}$
Show how Theorem 11.4.3 implies Theorem 11.4.2. Hint: Use the Borel–Cantelli Lemma; see Billingsley (1995, Theorem 4.3).
Consider the uniform confidence band $R_{n,1-\alpha}$ for $F$ given by (11.36). Let $\mathcal{F}$ be the set of all distributions on $\mathbb{R}$. Show that $\inf_{F \in \mathcal{F}} P_F\{F \in R_{n,1-\alpha}\} \ge 1 - \alpha$.
Let $U_1, \ldots, U_n$ be i.i.d. with c.d.f. $G(u) = u$ and let $\hat G_n$ denote the empirical c.d.f. of $U_1, \ldots, U_n$. Define $B_n(u) = n^{1/2}[\hat G_n(u) - u]$. (Note that $B_n(\cdot)$ is a random function, called the uniform empirical process.) (i) Show that the distribution of the Kolmogorov–Smirnov test statistic $n^{1/2} d_K$
Assume $X_n$ has c.d.f. $F_n$. Fix $\alpha \in (0, 1)$. (i) If $X_n$ is tight, show that $F_n^{-1}(1 - \alpha)$ is uniformly bounded. (ii) If $X_n \xrightarrow{P} c$, show that $F_n^{-1}(1 - \alpha) \to c$.
For a c.d.f. $F$, define the quantile transformation $Q$ by $Q(u) = \inf\{t : F(t) \ge u\}$. (i) Show the event $\{F(t) \ge u\}$ is the same as $\{Q(u) \le t\}$. (ii) If $U$ is uniformly distributed on $(0, 1)$, show the distribution of $Q(U)$ is $F$.
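Part (ii) is the basis of inverse-transform sampling. A minimal simulation sketch, using the exponential c.d.f. $F(t) = 1 - e^{-t}$, for which $Q(u) = -\log(1-u)$ (illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.uniform(size=100_000)   # U ~ Uniform(0, 1)
x = -np.log1p(-u)               # Q(U) = -log(1 - U), the Exp(1) quantile transform

# The empirical c.d.f. of Q(U) should track F(t) = 1 - exp(-t).
for t in (0.5, 1.0, 2.0):
    print(t, (x <= t).mean(), 1 - np.exp(-t))
```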
Suppose $X_n$ is a sequence of real-valued random variables. (i) Assume $X_n$ is Cauchy in probability; that is, for all $\epsilon > 0$, $P\{|X_n - X_m| > \epsilon\} \to 0$ as $\min(m, n) \to \infty$. Then, show there exists a random variable $X$ such that $X_n \xrightarrow{P} X$, in which case we may write $X = \lim_{n \to \infty} X_n$. (ii) Assume $X_n$
Suppose $X_n$ is a tight sequence and $Y_n \xrightarrow{P} 0$. Show that $X_n Y_n \xrightarrow{P} 0$. If it is assumed $Y_n \to 0$ almost surely, can you conclude $X_n Y_n \to 0$ almost surely?
Let $X_1, \ldots, X_n$ be i.i.d. $P$ on $S$. Suppose $S$ is countable and let $\mathcal{E}$ be the collection of all subsets of $S$. Let $\hat P_n$ be the empirical measure; that is, for any subset $E$ in $\mathcal{E}$, $\hat P_n(E)$ is the proportion of observations $X_i$ that fall in $E$. Prove that, with probability one, $\sup_{E \in \mathcal{E}} |\hat P_n(E) - P(E)| \to 0$.
Prove the Glivenko–Cantelli Theorem. Hint: Use the Strong Law of Large Numbers and the monotonicity of F.
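Not a proof, but a simulation illustrating the uniform convergence the theorem asserts, here with a standard normal $F$; the supremum of $|\hat F_n - F|$ is evaluated at the jump points of the empirical c.d.f.:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
for n in (100, 1_000, 10_000, 100_000):
    x = np.sort(rng.normal(size=n))
    ecdf_hi = np.arange(1, n + 1) / n   # F_n at the order statistics
    ecdf_lo = np.arange(0, n) / n       # F_n just below the order statistics
    F = norm.cdf(x)
    d_n = max(np.abs(ecdf_hi - F).max(), np.abs(ecdf_lo - F).max())
    print(n, round(d_n, 4))             # sup_t |F_n(t) - F(t)| shrinks as n grows
```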
Suppose $X_1, \ldots, X_I$ are independent and binomially distributed, with $X_i \sim b(n_i, p_i)$; that is, $X_i$ is the number of successes in $n_i$ Bernoulli trials. Suppose that $p_i$ satisfies $\log[p_i/(1 - p_i)] = \theta d_i$ for known constants $d_i$, which implies $p_i = e^{d_i \theta}/(1 + e^{d_i \theta})$. (Think of $d_i$ as the dose given to $n_i$
Assume $X_1, \ldots, X_n$ are i.i.d. $N(0, \sigma^2)$. Let $\hat\sigma^2_n$ be the maximum likelihood estimator of $\sigma^2$ given by $\hat\sigma^2_n = \sum_{i=1}^n X_i^2/n$. (i) Find the limiting distribution of $\sqrt{n}(\hat\sigma_n - \sigma)$. (ii) For a constant $c$, let $T_{n,c} = c \sum_{i=1}^n |X_i|/n$. For what constant $c$ is $T_{n,c}$ a consistent estimator of $\sigma$? (iii)
Let $X_{i,j}$, $1 \le i \le I$, $1 \le j \le n$, be independent with $X_{i,j}$ Poisson with mean $\lambda_i$. The problem is to test the null hypothesis that the $\lambda_i$ are all the same versus they are not all the same. Consider the test that rejects the null hypothesis iff $T \equiv n \sum_{i=1}^I (\bar X_i - \bar X)^2 / \bar X$ is large, where
Let $X_1, \ldots, X_n$ be a random sample from the Poisson distribution with unknown mean $\lambda$. The uniformly minimum variance unbiased estimator (UMVUE) of $\exp(-\lambda)$ is known to be $[(n-1)/n]^{T_n}$, where $T_n = \sum_{i=1}^n X_i$. Find the asymptotic distribution of the UMVUE (appropriately normalized). Hint: It may
Let $X_1, \ldots, X_n$ be i.i.d. Poisson with mean $\lambda$. Consider estimating $g(\lambda) = e^{-\lambda}$ by the estimator $T_n = e^{-\bar X_n}$. Find an approximation to the bias of $T_n$; specifically, find a function $b(\lambda)$ satisfying $E_\lambda(T_n) = g(\lambda) + n^{-1} b(\lambda) + O(n^{-2})$ as $n \to \infty$. Such an expression suggests a new
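One route to the form of $b(\lambda)$, sketched from the Poisson moment generating function (an outline only; filling in the details is the point of the exercise):
$$E_\lambda(T_n) = E_\lambda\!\left(e^{-\sum_i X_i/n}\right) = \exp\{n\lambda(e^{-1/n} - 1)\} = e^{-\lambda}\exp\{\lambda/(2n) + O(n^{-2})\} = e^{-\lambda} + \frac{\lambda e^{-\lambda}}{2n} + O(n^{-2}),$$
which suggests $b(\lambda) = \lambda e^{-\lambda}/2$.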
Suppose $X_{i,j}$ are independently distributed as $N(\mu_i, \sigma_i^2)$; $i = 1, \ldots, s$; $j = 1, \ldots, n_i$. Let $S^2_{n,i} = \sum_j (X_{i,j} - \bar X_i)^2$, where $\bar X_i = n_i^{-1} \sum_j X_{i,j}$. Let $Z_{n,i} = \log[S^2_{n,i}/(n_i - 1)]$. Show that, as $n_i \to \infty$, $\sqrt{n_i - 1}\,[Z_{n,i} - \log(\sigma_i^2)] \xrightarrow{d} N(0, 2)$. Thus, for large $n_i$, the problem of ... to suggest a test.
Suppose $(X_1, \ldots, X_k)$ is multinomial based on $n$ trials and cell probabilities $(p_1, \ldots, p_k)$. Show that $\sqrt{n}\left[\sum_{j=1}^k \frac{X_j}{n} \log\frac{X_j}{n} - c\right]$ converges in distribution to $F$, for some constant $c$ and distribution $F$. Identify $c$ and $F$.
(i) If $X_1, \ldots, X_n$ is a sample from a Poisson distribution with mean $E(X_i) = \lambda$, then $\sqrt{n}(\sqrt{\bar X} - \sqrt{\lambda})$ tends in law to $N(0, \tfrac{1}{4})$ as $n \to \infty$. (ii) If $X$ has the binomial distribution $b(p, n)$, then $\sqrt{n}[\arcsin\sqrt{X/n} - \arcsin\sqrt{p}]$ tends in law to $N(0, \tfrac{1}{4})$ as $n \to \infty$. Note. Certain
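A quick Monte Carlo check of the variance-stabilizing claim in part (i) (illustration only; the values of $\lambda$, $n$ and the number of replications are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
lam, n, reps = 4.0, 1_000, 10_000
xbar = rng.poisson(lam, size=(reps, n)).mean(axis=1)   # reps independent sample means
z = np.sqrt(n) * (np.sqrt(xbar) - np.sqrt(lam))
print(z.mean(), z.var())                               # mean near 0, variance near 1/4
```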
Consider the setting of Problem 6.21, where $(X_i, Y_i)$ are independent $N(\mu_i, \sigma^2)$ for $i = 1, \ldots, n$. The parameters $\mu_1, \ldots, \mu_n$ and $\sigma^2$ are all unknown. For testing $\sigma = 1$ against $\sigma > 1$, determine the limiting power of the UMPI level-$\alpha$ test against alternatives $1 + h n^{-1/2}$.
Assume $(U_i, V_i)$ is bivariate normal with correlation $\rho$. Let $\hat\rho_n$ denote the sample correlation given by (11.29). Verify the limit result (11.31).
Use ... to prove (11.28).
Suppose $R$ is a real-valued function on $\mathbb{R}^k$ with $R(y) = o(|y|^p)$ as $|y| \to 0$, for some $p > 0$. If $Y_n$ is a sequence of random vectors satisfying $|Y_n| = o_P(1)$, then show $R(Y_n) = o_P(|Y_n|^p)$. Hint: Let $g(y) = R(y)/|y|^p$ with $g(0) = 0$ so that $g$ is continuous at 0; apply the Continuous Mapping Theorem.
Prove part (ii) of Theorem 11.3.4.
Let $X_1, \ldots, X_n$ be i.i.d. normal with mean $\theta$ and variance 1. Suppose $\hat\theta_n$ is a location-equivariant sequence of estimators such that, for every fixed $\theta$, $n^{1/2}(\hat\theta_n - \theta)$ converges in distribution to the standard normal distribution (if $\theta$ is true). Let $\bar X_n$ be the usual sample mean. Show
Suppose $X_n \xrightarrow{d} N(\mu, \sigma^2)$. (i) Show that, for any sequence of numbers $c_n$, $P(X_n = c_n) \to 0$. (ii) If $c_n$ is any sequence such that $P(X_n > c_n) \to \alpha$, then $c_n \to \mu + \sigma z_{1-\alpha}$, where $z_{1-\alpha}$ is the $1 - \alpha$ quantile of $N(0, 1)$.
Suppose $P_n$ is a sequence of probabilities and $X_n$ is a sequence of real-valued random variables; the distribution of $X_n$ under $P_n$ is denoted $\mathcal{L}(X_n \mid P_n)$. Prove that $\mathcal{L}(X_n \mid P_n)$ is tight if and only if $X_n/a_n \to 0$ in $P_n$-probability for every sequence $a_n \uparrow \infty$.
Show that tightness of a sequence of random vectors in $\mathbb{R}^k$ is equivalent to each of the component variables being tight.
Prove Lemma 11.3.1.
Show how the interval (11.25) is obtained from (11.24).
In Example 11.3.4, let $\hat I_n$ be the interval (11.23). Show that, for any $n$, $\inf_p P_p\{p \in \hat I_n\} = 0$. Hint: Consider $p$ positive but small enough so that the chance that a sample of size $n$ results in 0 successes is nearly 1.
In Example 11.3.2, show that $\beta_n(p_n) \to 1$ if $n^{1/2}(p_n - 1/2) \to \infty$ and $\beta_n(p_n) \to \alpha$ if $n^{1/2}(p_n - 1/2) \to 0$.
(i) Prove Corollary 11.3.1. (ii) Suppose $X_n \xrightarrow{d} X$ and $C_n \xrightarrow{P} \infty$. Show $P\{X_n \le C_n\} \to 1$.
If $X_n$ is a sequence of real-valued random variables, prove that $X_n \to 0$ in $P_n$-probability if and only if $E_{P_n}[\min(|X_n|, 1)] \to 0$.
As in Example 11.3.1, consider the problem of testing $P = P_0$ versus $P = P_1$ based on $n$ i.i.d. observations. This problem gives an alternative way to show that a most powerful level-$\alpha$ ($0 < \alpha < 1$) test sequence has limiting power one. If $P_0$ and $P_1$ are distinct, there exists a set $E$ such that $P_0(E) \ne P_1(E)$.
(i) Let $K(P_0, P_1)$ be the Kullback–Leibler Information, defined in (11.21). Show that $K(P_0, P_1) \ge 0$ with equality iff $P_0 = P_1$. (ii) Show the convergence (11.20) holds even when $K(P_0, P_1) = \infty$. Hint: Use Problem 11.36.
Suppose $X_1, \ldots, X_n$ are i.i.d. real-valued random variables. Write $X_i = X_i^+ - X_i^-$, where $X_i^+ = \max(X_i, 0)$. Suppose $X_i^-$ has a finite mean, but $X_i^+$ does not. Let $\bar X_n$ be the sample mean. Show $\bar X_n \xrightarrow{P} \infty$. Hint: For $B > 0$, let $Y_i = X_i$ if $X_i \le B$ and $Y_i = B$ otherwise; apply the Weak Law to
Generalize Slutsky's Theorem (Theorem 11.3.2) to the case where $X_n$ is a vector, $A_n$ is a matrix, and $B_n$ is a vector.
Assume $X_n \xrightarrow{d} X$ and $Y_n \xrightarrow{P} c$, where $c$ is a constant. Show that $(X_n, Y_n) \xrightarrow{d} (X, c)$.
Suppose $X_n$ is a sequence of random vectors. (i) Show $X_n \xrightarrow{P} 0$ if and only if $|X_n| \xrightarrow{P} 0$ (where the first zero refers to the zero vector and the second to the real number zero). (ii) Show that convergence in probability of $X_n$ to $X$ is equivalent to convergence in probability of their components to
Suppose $X_n$ and $X$ are real-valued random variables (defined on a common probability space). Prove that, if $X_n$ converges to $X$ in probability, then $X_n$ converges in distribution to $X$. Show by counterexample that the converse is false. However, show that if $X$ is a constant with probability one, then $X_n$
Prove a result analogous to ... if $\{\hat F_n\}$ is a random sequence, similar to how ... is a generalization of Lemma 11.2.1.
Prove the following generalization of Lemma 11.2.1. Suppose $\{\hat F_n\}$ is a sequence of random distribution functions satisfying $\hat F_n(x) \xrightarrow{P} F(x)$ at all $x$ which are continuity points of a fixed distribution function $F$. Assume $F$ is continuous and strictly increasing at $F^{-1}(1 - \alpha)$. Then, $\hat F_n^{-1}$
Give an example of an i.i.d. sequence of real-valued random variables such that the sample mean converges in probability to a finite constant, yet the mean of the sequence does not exist.
(Chebyshev's Inequality) (i) Show that, for any real-valued random variable $X$ and any constants $a > 0$ and $c$, $E(X - c)^2 \ge a^2 P\{|X - c| \ge a\}$. (ii) Hence, if $X_n$ is any sequence of random variables and $c$ is a constant such that $E(X_n - c)^2 \to 0$, then $X_n \to c$ in probability. Give a
(Markov's Inequality) Let $X$ be a real-valued random variable with $X \ge 0$. Show that, for any $t > 0$, $P\{X \ge t\} \le E[X\, I\{X \ge t\}]/t \le E(X)/t$; here $I\{X \ge t\}$ is the indicator variable that is 1 if $X \ge t$ and 0 otherwise.
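A small numerical sanity check of the two bounds (illustration only; any nonnegative distribution would do, an exponential is used here):

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.exponential(scale=2.0, size=1_000_000)   # X >= 0 with E(X) = 2
for t in (1.0, 3.0, 5.0):
    lhs = (x >= t).mean()                        # P{X >= t}
    mid = (x * (x >= t)).mean() / t              # E[X I{X >= t}] / t
    rhs = x.mean() / t                           # E(X) / t
    print(f"t={t}: {lhs:.4f} <= {mid:.4f} <= {rhs:.4f}")
```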
(i) Construct a sequence of distribution functions $\{F_n\}$ on the real line such that $F_n \xrightarrow{d} F$, but the convergence $F_n^{-1}(1 - \alpha) \to F^{-1}(1 - \alpha)$ fails, even if $F$ is assumed continuous. (ii) On the other hand, if $F$ is assumed continuous (but not necessarily strictly increasing), show that
For a c.d.f. $F$ with quantile function defined by $F^{-1}(u) = \inf\{x : F(x) \ge u\}$, show that: (i) $F(x) \ge u$ is equivalent to $F^{-1}(u) \le x$. (ii) $F^{-1}(\cdot)$ is nondecreasing and left continuous with right-hand limits. (iii) $F(F^{-1}(u)) \ge u$, with equality if $F$ is continuous at $F^{-1}(u)$.
Suppose $F$ and $G$ are two probability distributions on $\mathbb{R}^k$. Let $L$ be the set of (measurable) functions $f$ from $\mathbb{R}^k$ to $\mathbb{R}$ satisfying $|f(x) - f(y)| \le |x - y|$ and $\sup_x |f(x)| \le 1$, where $|\cdot|$ is the usual Euclidean norm. Define the Bounded-Lipschitz Metric as $\lambda(F, G) = \sup\{|E_F f(X) - E_G f$
Let $F_n$ and $F$ be c.d.f.s on $\mathbb{R}$. Show that weak convergence of $F_n$ to $F$ is equivalent to $\rho_L(F_n, F) \to 0$, where $\rho_L$ is the Lévy metric.
For cumulative distribution functions $F$ and $G$ on the real line, define the Kolmogorov–Smirnov distance between $F$ and $G$ to be $d_K(F, G) = \sup_x |F(x) - G(x)|$. Show that $d_K(F, G)$ defines a metric on the space of distribution functions; that is, show $d_K(F, G) = d_K(G, F)$, $d_K(F, G) = 0$ implies $F$
Show that $\rho_L(F, G)$ defined in Definition 11.2.3 is a metric; that is, show $\rho_L(F, G) = \rho_L(G, F)$, $\rho_L(F, G) = 0$ if and only if $F = G$, and $\rho_L(F, G) \le \rho_L(F, H) + \rho_L(H, G)$.