Testing Statistical Hypotheses, Volume I, 4th Edition (E.L. Lehmann, Joseph P. Romano): Solutions
Let $X_1,\dots,X_m$; $Y_1,\dots,Y_n$ be independently, normally distributed with means $\xi$ and $\eta$, and variances $\sigma^2$ and $\tau^2$ respectively, and consider the hypothesis $H:\tau\le\sigma$ against $K:\sigma<\tau$. (i) If $\xi$ and $\eta$ are known, there exists a UMP test given by the rejection region $\sum(Y_j-\eta)^2/\sum(X_i-\xi)^2$ …
Let $X_1,\dots,X_m$ and $Y_1,\dots,Y_n$ be independent samples from $N(\xi,1)$ and $N(\eta,1)$, and consider the hypothesis $H:\eta\le\xi$ against $K:\eta>\xi$. There exists a UMP test, and it rejects the hypothesis when $\bar Y-\bar X$ is too large. [If $\xi_1<\eta_1$ is a particular alternative, the distribution assigning …
Sufficient statistics with nuisance parameters. (i) A statistic T is said to be partially sufficient for $\theta$ in the presence of a nuisance parameter $\eta$ if the parameter space is the direct product of the set of possible $\theta$- and $\eta$-values, and if the following two conditions hold: (a) the conditional …
Let X and Y be the number of successes in two sets of n binomial trials with probabilities $p_1$ and $p_2$ of success. (i) The most powerful test of the hypothesis $H: p_2\le p_1$ against an alternative $(p_1',p_2')$ with $p_1'<p_2'$ and $p_1'+p_2'=1$ at level $\alpha<\frac12$ rejects when $Y-X>C$ and with …
A counterexample. Typically, as $\alpha$ varies, the most powerful level-$\alpha$ tests for testing a hypothesis H against a simple alternative are nested in the sense that the associated rejection regions, say $R_\alpha$, satisfy $R_\alpha\subseteq R_{\alpha'}$ for any $\alpha<\alpha'$. The following example shows that this need not be …
Confidence bounds for a median. Let $X_1,\dots,X_n$ be a sample from a continuous cumulative distribution function F. Let $\xi$ be the unique median of F if it exists, or more generally let $\xi=\inf\{\xi': F(\xi')\ge\frac12\}$. (i) If the ordered X's are $X_{(1)}<\cdots<X_{(n)}$, a uniformly most accurate lower …
Let the variables $X_i$ $(i=1,\dots,s)$ be independently distributed with Poisson distribution $P(\lambda_i)$. For testing the hypothesis $H:\sum\lambda_j\le a$ (for example, that the combined radioactivity of a number of pieces of radioactive material does not exceed a), there exists a UMP test, which rejects when $\sum X_j$ …
Let f, g be two probability densities with respect to $\mu$. For testing the hypothesis $H:\theta\le\theta_0$ or $\theta\ge\theta_1$ $(0<\theta_0<\theta_1<1)$ against the alternatives $\theta_0<\theta<\theta_1$ …
For testing the hypothesis $H:\theta_1\le\theta\le\theta_2$ $(\theta_1\le\theta_2)$ against the alternatives $\theta<\theta_1$ or $\theta>\theta_2$, or the hypothesis $\theta=\theta_0$ against the alternatives $\theta\ne\theta_0$, in an exponential family or more generally in a family of distributions satisfying the assumptions of Problem 3.58, a UMP test …
Extension of Theorem 3.7.1. The conclusions of Theorem 3.7.1 remain valid if the density of a sufficient statistic T (which without loss of generality will be taken to be X), say $p_\theta(x)$, is $\mathrm{STP}_3$ and is continuous in x for each $\theta$. [The two properties of exponential families that are used in the …
$\mathrm{STP}_3$. Let $\theta$ and x be real-valued, and suppose that the probability densities $p_\theta(x)$ are such that $p_{\theta'}(x)/p_\theta(x)$ is strictly increasing in x for $\theta<\theta'$. Then the following two conditions are equivalent: (a) For $\theta_1<\theta_2<\theta_3$ and $k_1,k_2,k_3>0$, let $g(x)=k_1p_{\theta_1}(x)-k_2p_{\theta_2}(x)+k_3p_{\theta_3}(x)$ …
Exponential families. The exponential family (3.19) with $T(x)=x$ and $Q(\theta)=\theta$ is $\mathrm{STP}_\infty$, with $\Omega$ the natural parameter space and $\mathcal{X}=(-\infty,\infty)$. [That the determinant $|e^{\theta_i x_j}|$, $i,j=1,\dots,n$, is positive can be proved by induction. Divide the ith column by $e^{\theta_1 x_i}$, $i=1,\dots,n$; subtract in …
Totally positive families. A family of distributions with probability densities $p_\theta(x)$, $\theta$ and x real-valued and varying over $\Omega$ and $\mathcal{X}$, respectively, is said to be totally positive of order r ($\mathrm{TP}_r$) if for all $x_1<\cdots<x_n$ and $\theta_1<\cdots<\theta_n$ the determinant $\Delta_n=\det\bigl[p_{\theta_i}(x_j)\bigr]_{i,j=1}^{n}$ …
For a random variable X with binomial distribution $b(p,n)$, determine the constants $C_i$, $\gamma_i$ $(i=1,2)$ in the UMP test (3.33) for testing $H: p\le0.2$ or $p\ge0.7$ when $\alpha=0.1$ and $n=15$. Find the power of the test against the alternative $p=0.4$.
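A minimal numerical sketch of how these constants could be found, assuming (per the form of test (3.33)) that the test rejects for $C_1<X<C_2$, randomizes with probabilities $\gamma_1,\gamma_2$ at the endpoints, and has size exactly $\alpha$ at both boundary values $p=0.2$ and $p=0.7$; the brute-force search below is an illustrative strategy, not the book's method.

```python
# Sketch: solve for C1, C2, gamma1, gamma2 in the two-boundary UMP test,
# assuming phi rejects for C1 < X < C2, randomizes at the endpoints, and
# is exact at both boundary values p = 0.2 and p = 0.7.
import numpy as np
from scipy.stats import binom

n, alpha = 15, 0.1
bounds = (0.2, 0.7)

def size(p, C1, C2, g1, g2):
    # E_p[phi(X)] for the candidate test
    inner = binom.cdf(C2 - 1, n, p) - binom.cdf(C1, n, p)  # P(C1 < X < C2)
    return inner + g1 * binom.pmf(C1, n, p) + g2 * binom.pmf(C2, n, p)

solution = None
for C1 in range(n + 1):
    for C2 in range(C1 + 1, n + 1):
        # Solve the 2x2 linear system size(p) = alpha at both boundaries.
        A = np.array([[binom.pmf(C1, n, p), binom.pmf(C2, n, p)] for p in bounds])
        b = np.array([alpha - (binom.cdf(C2 - 1, n, p) - binom.cdf(C1, n, p))
                      for p in bounds])
        try:
            g1, g2 = np.linalg.solve(A, b)
        except np.linalg.LinAlgError:
            continue
        if 0 <= g1 <= 1 and 0 <= g2 <= 1:
            solution = (C1, C2, g1, g2)

if solution:
    C1, C2, g1, g2 = solution
    print(f"C1={C1}, C2={C2}, gamma1={g1:.4f}, gamma2={g2:.4f}")
    print(f"power at p=0.4: {size(0.4, C1, C2, g1, g2):.4f}")
```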
Let $F_1,\dots,F_{m+1}$ be real-valued functions defined over a space U. A sufficient condition for $u_0$ to maximize $F_{m+1}$ subject to $F_i(u)\le c_i$ $(i=1,\dots,m)$ is that it satisfies these side conditions, that it maximizes $F_{m+1}(u)-\sum k_iF_i(u)$ for some constants $k_i\ge0$, and that $F_i(u_0)=c_i$ for those values of i for which $k_i>0$.
The following example shows that Corollary 3.6.1 does not extend to a countably infinite family of distributions. Let $p_n$ be the uniform probability density on $[0,1+1/n]$, and $p_0$ the uniform density on $(0,1)$. (i) Then $p_0$ is linearly independent of $(p_1,p_2,\dots)$, that is, there do not exist …
Optimum selection procedures. On each member of a population, n measurements $(X_1,\dots,X_n)=X$ are taken, for example the scores of n aptitude tests which are administered to judge the qualifications of candidates for a certain training program. A future measurement Y, such as the score in a final …
If $\beta(\theta)$ denotes the power function of the UMP test of Corollary 3.4.1, and if the function Q of (3.19) is differentiable, then $\beta'(\theta)>0$ for all $\theta$ for which $Q'(\theta)>0$. [To show that $\beta'(\theta_0)>0$, consider the problem of maximizing, subject to $E_{\theta_0}\varphi(X)=\alpha$, the derivative $\beta'(\theta_0)$ or equivalently …
Confidence bounds with minimum risk. Let $L(\theta,\underline{\theta})$ be nonnegative and nonincreasing in its second argument for $\underline{\theta}<\theta$, and equal to 0 for $\underline{\theta}\ge\theta$. If $\underline{\theta}$ and $\underline{\theta}^*$ are two lower confidence bounds for $\theta$ such that $P_\theta\{\underline{\theta}\le\theta'\}\le P_\theta\{\underline{\theta}^*\le\theta'\}$ for all $\theta'\le\theta$, then $E_\theta L(\theta,\underline{\theta})\le E_\theta L(\theta,\underline{\theta}^*)$. [… that $E_\theta[L(\theta,\underline{\theta})]=\int L(\theta,u)\,dF(u)\le\int L(\theta,u)\,dF^*(u)=E_\theta[L(\theta,\underline{\theta}^*)]$, where $F(u)=P_\theta\{\underline{\theta}\le u\}$ and $F^*(u)=P_\theta\{\underline{\theta}^*\le u\}$.]
(i) Suppose $U_1,\dots,U_n$ are i.i.d. $U(0,1)$ and let $U_{(k)}$ denote the kth order statistic. Find the density of $U_{(k)}$ and show that $P\{U_{(k)}\le p\}=\int_0^p\frac{n!}{(k-1)!\,(n-k)!}u^{k-1}(1-u)^{n-k}\,du$, which in turn is equal to $\sum_{j=k}^n\binom{n}{j}p^j(1-p)^{n-j}$. (ii) Use (i) to show that, in …
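A quick numerical check of the stated identity, using the Beta form of the order-statistic c.d.f. on the left and the binomial tail on the right (the particular n, k, p below are arbitrary illustrations):

```python
# Sketch: check P{U_(k) <= p} = Beta(k, n-k+1) cdf
#         = sum_{j=k}^n C(n,j) p^j (1-p)^(n-j).
from scipy.stats import beta, binom

n, k, p = 10, 3, 0.35
lhs = beta.cdf(p, k, n - k + 1)   # integral form of the density of U_(k)
rhs = binom.sf(k - 1, n, p)       # P(X >= k) for X ~ b(p, n)
print(lhs, rhs)                   # both approximately 0.7384
```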
(i) For $n=5,10$ and $1-\alpha=0.95$, graph the upper confidence limits $\bar p$ and $\bar p^*$ of Example 3.5.2 as functions of $t=x+u$. (ii) For the same values of n and $\alpha_1=\alpha_2=0.05$, graph the lower and upper confidence limits $\underline p$ and $\bar p$.
In Example 3.5.2, what is an explicit formula for the uniformly most accurate upper bound at level 1 − α when X = 0 and U = u? Compare it to the Clopper-Pearson bound in the same situation.
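For the comparison asked for here, a sketch of the Clopper-Pearson upper bound in its usual Beta-quantile form, which at $x=0$ reduces to the closed form $1-\alpha^{1/n}$; the function name and the choices of n and $\alpha$ are illustrative assumptions:

```python
# Sketch: Clopper-Pearson upper confidence bound for a binomial p,
# which at x = 0 reduces to the closed form 1 - alpha**(1/n).
from scipy.stats import beta

def clopper_pearson_upper(x, n, alpha):
    # Upper 1-alpha bound: Beta(x+1, n-x) quantile (equals 1 when x = n).
    return 1.0 if x == n else beta.ppf(1 - alpha, x + 1, n - x)

n, alpha = 10, 0.05
print(clopper_pearson_upper(0, n, alpha))  # equals 1 - alpha**(1/n)
print(1 - alpha ** (1 / n))
```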
Typically, lower confidence bounds $\underline{\theta}(X)$ satisfying (3.21) also satisfy $P_\theta\{\underline{\theta}(X)<\theta\}\ge1-\alpha$ for all $\theta$, so that $\theta$ is strictly greater than $\underline{\theta}(X)$ with probability $\ge1-\alpha$. A similar issue of course also applies to upper confidence bounds. Investigate conditions where one can claim the …
Let $f(x)/[1-F(x)]$ be the "mortality" of a subject at time x given that it has survived to this time. A c.d.f. F is said to be smaller than G in the hazard ordering if $\frac{g(x)}{1-G(x)}\le\frac{f(x)}{1-F(x)}$ for all x. (3.48) (i) Show that (3.48) is equivalent to: $\frac{1-F(x)}{1-G(x)}$ is …
Let F and G be two continuous, strictly increasing c.d.f.s, and let $k(u)=G[F^{-1}(u)]$, $0<u<1$. (i) Show F and G are stochastically ordered, say $F(x)\le G(x)$ for all x, if and only if $k(u)\le u$ for all $0<u<1$. (ii) If F and G have densities f and g, then show they are monotone likelihood …
Extension of Lemma 3.4.2. Let $P_0$ and $P_1$ be two distributions with densities $p_0$, $p_1$ such that $p_1(x)/p_0(x)$ is a nondecreasing function of a real-valued statistic $T(x)$. (i) If $T=T(X)$ has probability density $p_i'$ when the original distribution of X is $P_i$, then $p_1'(t)/p_0'(t)$ is nondecreasing in $t$ …
Let $X_1,\dots,X_n$ be a sample from a location family with common density $f(x-\theta)$, where the location parameter $\theta\in\mathbb{R}$ and $f(\cdot)$ is known. Consider testing the null hypothesis that $\theta=\theta_0$ versus an alternative $\theta=\theta_1$ for some $\theta_1>\theta_0$. Suppose there exists a most powerful level-$\alpha$ test …
Let $X_1,\dots,X_n$ be a sample from the inverse Gaussian distribution $I(\mu,\tau)$ with density $\sqrt{\frac{\tau}{2\pi x^3}}\,\exp\!\left\{-\frac{\tau}{2x\mu^2}(x-\mu)^2\right\}$, $x>0$, $\tau,\mu>0$. Show that there exists a UMP test for testing (i) $H:\mu\le\mu_0$ against $\mu>\mu_0$ when $\tau$ is known; (ii) $H:\tau\le\tau_0$ against $\tau>\tau_0$ when $\mu$ is known.
Consider a single observation X from $W(1,c)$. (i) The family of distributions does not have a monotone likelihood ratio in x. (ii) The most powerful test of $H:c=1$ against $c=2$ rejects when $X<k_1$ and when $X>k_2$. Show how to determine $k_1$ and $k_2$. (iii) Generalize (ii) to arbitrary alternatives $c_1$ …
A random variable X has the Weibull distribution $W(b,c)$ if its density is $\frac{c}{b}\left(\frac{x}{b}\right)^{c-1}e^{-(x/b)^c}$, $x>0$, $b,c>0$. Show that this defines a probability density. If $X_1,\dots,X_n$ is a sample from $W(b,c)$, with the shape parameter c known, show that there exists a UMP test of $H:b\le b_0$ against $b>b_0$ …
Let $X_1,\dots,X_n$ be a sample from the gamma distribution $\Gamma(g,b)$ with density $\frac{1}{\Gamma(g)b^g}x^{g-1}e^{-x/b}$, $x>0$, $b,g>0$. Show that there exist UMP tests of (i) $H:b\le b_0$ against $b>b_0$ when g is known; (ii) $H:g\le g_0$ against $g>g_0$ when b is known. In each case give the form of the rejection region.
Suppose a time series $X_0,X_1,X_2,\dots$ evolves in the following way. The process starts at 0, so $X_0=0$. For any $i\ge1$, conditional on $X_0,\dots,X_{i-1}$, $X_i=\rho X_{i-1}+\epsilon_i$, where the $\epsilon_i$ are i.i.d. standard normal. You observe $X_0,X_1,X_2,\dots,X_n$. For testing the null hypothesis $\rho=0$ versus a fixed …
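A Monte Carlo sketch for a fixed alternative $\rho_1$: in this model the log likelihood ratio of $\rho_1$ against $\rho=0$ reduces to $\rho_1\sum X_iX_{i-1}-\tfrac{\rho_1^2}{2}\sum X_{i-1}^2$, and the Neyman-Pearson critical value can be calibrated by simulation under the null. The values of n, $\rho_1$, $\alpha$, and the replication counts are illustrative assumptions.

```python
# Sketch: Neyman-Pearson test of rho = 0 vs a fixed rho1 in the AR(1) model,
# with the critical value calibrated by Monte Carlo under the null.
import numpy as np

rng = np.random.default_rng(0)
n, rho1, alpha = 50, 0.5, 0.05

def log_lr(x, rho):
    # log likelihood ratio of rho vs rho = 0; x[0] = X_0 = 0
    return rho * np.sum(x[1:] * x[:-1]) - 0.5 * rho**2 * np.sum(x[:-1] ** 2)

def simulate(rho, n):
    x = np.zeros(n + 1)
    for i in range(1, n + 1):
        x[i] = rho * x[i - 1] + rng.standard_normal()
    return x

null_stats = np.array([log_lr(simulate(0.0, n), rho1) for _ in range(20000)])
crit = np.quantile(null_stats, 1 - alpha)

alt_stats = np.array([log_lr(simulate(rho1, n), rho1) for _ in range(20000)])
print("estimated power:", np.mean(alt_stats > crit))
```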
Let $X_i$ be independently distributed as $N(i\theta,1)$, $i=1,\dots,n$. Show that there exists a UMP test of $H:\theta\le0$ against $K:\theta>0$, and determine it as explicitly as possible.
Let X be a single observation from the Cauchy density given at the end of Section 3.4. (i) Show that no UMP test exists for testing $\theta=0$ against $\theta>0$. (ii) Determine the totality of different shapes the MP level-$\alpha$ rejection region for testing $\theta=\theta_0$ against $\theta=\theta_1$ can take on for varying $\alpha$ …
Let $X=(X_1,\dots,X_n)$ be a sample from the uniform distribution $U(\theta,\theta+1)$. (i) For testing $H:\theta\le\theta_0$ against $K:\theta>\theta_0$ at level $\alpha$, there exists a UMP test which rejects when $\min(X_1,\dots,X_n)>\theta_0+C(\alpha)$ or $\max(X_1,\dots,X_n)>\theta_0+1$ for suitable $C(\alpha)$. (ii) The family $U(\theta,\theta+1)$ does …
When a Poisson process with rate $\lambda$ is observed for a time interval of length $\tau$, the number X of events occurring has the Poisson distribution $P(\lambda\tau)$. Under an alternative scheme, the process is observed until r events have occurred, and the time T of observation is then a random variable such …
Let $X_1,\dots,X_n$ be independently distributed with density $(2\theta)^{-1}e^{-x/2\theta}$, $x\ge0$, and let $Y_1\le\cdots\le Y_n$ be the ordered X's. Assume that $Y_1$ becomes available first, then $Y_2$, and so on, and that observation is continued until $Y_r$ has been observed. On the basis of $Y_1,\dots,Y_r$ it is desired … the distribution of $\left[\sum_{i=1}^r Y_i+(n-r)Y_r\right]/\theta$ was found to be $\chi^2$ with 2r degrees of freedom.]
Let the probability density $p_\theta$ of X have monotone likelihood ratio in $T(x)$, and consider the problem of testing $H:\theta\le\theta_0$ against $\theta>\theta_0$. If the distribution of T is continuous, the p-value $\hat p$ of the UMP test is given by $\hat p=P_{\theta_0}\{T\ge t\}$, where t is the observed value of T. This …
(i) A necessary and sufficient condition for densities $p_\theta(x)$ to have monotone likelihood ratio in x, if the mixed second derivative $\partial^2\log p_\theta(x)/\partial\theta\,\partial x$ exists, is that this derivative is $\ge0$ for all $\theta$ and x. (ii) An equivalent condition is that $p_\theta(x)\,\frac{\partial^2p_\theta(x)}{\partial\theta\,\partial x}\ge\frac{\partial p_\theta(x)}{\partial\theta}\,\frac{\partial p_\theta(x)}{\partial x}$.
Let X be the number of successes in n independent trials with probability p of success, and let $\varphi(x)$ be the UMP test (3.16) for testing $p\le p_0$ against $p>p_0$ at the level of significance $\alpha$. (i) For $n=6$, $p_0=0.25$ and the levels $\alpha=0.05,0.1,0.2$, determine C and $\gamma$, and the power of the test …
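A minimal sketch of how C and $\gamma$ could be computed, assuming the usual form of (3.16): reject when $X>C$ and randomize with probability $\gamma$ at $X=C$. The alternative $p=0.5$ in the power column is an illustrative choice, since the statement above is truncated before the intended alternatives.

```python
# Sketch: constants C and gamma of the one-sided UMP binomial test,
# rejecting when X > C and with probability gamma when X = C.
from scipy.stats import binom

def ump_binomial(n, p0, alpha):
    # smallest C with P_{p0}(X > C) <= alpha
    C = next(c for c in range(n + 1) if binom.sf(c, n, p0) <= alpha)
    gamma = (alpha - binom.sf(C, n, p0)) / binom.pmf(C, n, p0)
    return C, gamma

def power(n, p, C, gamma):
    return binom.sf(C, n, p) + gamma * binom.pmf(C, n, p)

n, p0 = 6, 0.25
for alpha in (0.05, 0.1, 0.2):
    C, gamma = ump_binomial(n, p0, alpha)
    # p = 0.5 here is just an illustrative alternative
    print(alpha, C, round(gamma, 4), round(power(n, 0.5, C, gamma), 4))
```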
(i) If $\hat p$ is uniform on (0, 1), show that $-2\log\hat p$ has the Chi-squared distribution with 2 degrees of freedom. (ii) Suppose $\hat p_1,\dots,\hat p_s$ are i.i.d. uniform on (0, 1). Let $F=-2\log(\hat p_1\cdots\hat p_s)$. Argue that F has the Chi-squared distribution with 2s degrees of freedom. What can you …
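A small simulation check consistent with (ii), comparing Fisher's combination statistic with the $\chi^2_{2s}$ distribution (the sample sizes and seed are arbitrary):

```python
# Sketch: check that F = -2*log(p_1 * ... * p_s) for i.i.d. uniform p-values
# is distributed as chi^2 with 2s degrees of freedom.
import numpy as np
from scipy.stats import chi2, kstest

rng = np.random.default_rng(1)
s, reps = 5, 100000

p = rng.uniform(size=(reps, s))
F = -2 * np.log(p).sum(axis=1)        # sum of s independent chi^2_2 variables
print(kstest(F, chi2(df=2 * s).cdf))  # p-value should be large
```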
Under the setup of Lemma 3.3.1, show that there exists a real-valued statistic $T(X)$ so that the rejection region is necessarily of the form (3.47). [Hint: Let $T(X)=-\hat p$.]
Under the setup of Lemma 3.3.1, suppose the rejection regions are defined by $R_\alpha=\{X: T(X)\ge k(\alpha)\}$ (3.47) for some real-valued statistic $T(X)$ and $k(\alpha)$ satisfying $\sup_{\theta\in H}P_\theta\{T(X)\ge k(\alpha)\}=\alpha$. Then, show $\hat p=\sup_{\theta\in H}P_\theta\{T(X)\ge t\}$, where t is the observed value of $T(X)$.
(i) Show that if Y is any random variable with c.d.f. $G(\cdot)$, then $P\{G(Y)\le u\}\le u$ for all $0\le u\le1$. If $G^-(t)=P\{Y<t\}$, then show $P\{1-G^-(Y)\le u\}\le u$ for all $0\le u\le1$. (ii) In Example 3.3.3, show that $F_{\theta_0}(T)$ and $1-F^-_{\theta_0}(T)$ are both valid p-values, in the sense …
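A sketch verifying the first inequality of (i) for a discrete distribution, where it is typically strict; the Poisson choice and the grid of u values are illustrative assumptions.

```python
# Sketch: check P{G(Y) <= u} <= u for a discrete Y (here Poisson),
# where G is its c.d.f.; this inequality is what makes G(Y) a valid p-value.
import numpy as np
from scipy.stats import poisson

mu = 3.0
ks = np.arange(0, 60)                 # effectively the whole support for mu = 3
pmf, cdf = poisson.pmf(ks, mu), poisson.cdf(ks, mu)

for u in (0.05, 0.25, 0.5, 0.9):
    prob = pmf[cdf <= u].sum()        # P{G(Y) <= u}, computed exactly
    print(u, prob, prob <= u)
```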
In Example 3.21, show that the p-value is indeed given by $\hat p=\hat p(X)=(11-X)/10$. Also, graph the c.d.f. of $\hat p$ under H and show that the last inequality in (3.15) is an equality if and only if u is of the form $k/10$, $k=0,\dots,10$.
Let $f_\theta$, $\theta\in\Omega$, denote a family of densities with respect to a measure $\mu$. (We assume $\Omega$ is endowed with a $\sigma$-field so that the densities $f_\theta(x)$ are jointly measurable in $\theta$ and x.) Consider the problem of testing a simple null hypothesis $\theta=\theta_0$ against the composite alternatives $K=\{\theta:\theta\ne\theta_0\}$ … is admissible.
Suppose $X_1,\dots,X_n$ are i.i.d. $N(\xi,\sigma^2)$ with $\sigma$ known. For testing $\xi=0$ versus $\xi\ne0$, the average power of a test $\varphi=\varphi(X_1,\dots,X_n)$ is given by $\int_{-\infty}^{\infty}E_\xi(\varphi)\,d\Lambda(\xi)$, where $\Lambda$ is a probability distribution on the real line. Suppose that $\Lambda$ is symmetric about 0; that is, $\Lambda\{E\}=\Lambda\{-E\}$ for all …
Under the setup of Theorem 3.2.1, show that there always exist MP tests that are nested in the sense of Problem 3.17(iii).
A counterexample. Typically, as $\alpha$ varies, the most powerful level-$\alpha$ tests for testing a hypothesis H against a simple alternative are nested in the sense that the associated rejection regions, say $R_\alpha$, satisfy $R_\alpha\subseteq R_{\alpha'}$ for any $\alpha<\alpha'$. Even if the most powerful tests are nonrandomized, this …
Based on X with distribution indexed by $\theta\in\Omega$, the problem is to test $\theta\in\omega$ versus $\theta\in\omega'$. Suppose there exists a test $\varphi$ such that $E_\theta[\varphi(X)]\le\beta$ for all $\theta$ in $\omega$, where $\beta<\alpha$. Show there exists a level-$\alpha$ test $\varphi^*(X)$ such that $E_\theta[\varphi(X)]\le E_\theta[\varphi^*(X)]$ for all $\theta$ in $\omega'$, and …
Fully informative statistics. A statistic T is fully informative if for every decision problem the decision procedures based only on T form an essentially complete class. If $\mathcal{P}$ is dominated and T is fully informative, then T is sufficient. [Consider any pair of distributions $P_0,P_1\in\mathcal{P}$ with … it is sufficient for $\mathcal{P}$.]
If the sample space $\mathcal{X}$ is Euclidean and $P_0$, $P_1$ have densities with respect to Lebesgue measure, there exists a nonrandomized most powerful test for testing $P_0$ against $P_1$ at every significance level $\alpha$. [This is a consequence of Theorem 3.2.1 and the following lemma. Let $f\ge0$ and $\int_A f(x)\,dx$ …
The following example shows that the power of a test can sometimes be increased by selecting a random rather than a fixed sample size, even when the randomization does not depend on the observations. Let $X_1,\dots,X_n$ be independently distributed as $N(\theta,1)$, and consider the problem of testing $H:\theta$ …
Let $X_1,\dots,X_n$ be independently distributed, each uniformly over the integers $1,2,\dots,\theta$. Determine whether there exists a UMP test for testing $H:\theta=\theta_0$ at level $1/\theta_0^n$ against the alternatives (i) $\theta>\theta_0$; (ii) $\theta<\theta_0$; (iii) $\theta\ne\theta_0$.
(i) For testing $H_0:\theta=0$ against $H_1:\theta=\theta_1$ when X is $N(\theta,1)$, given any $0<\alpha<1$ and any $0<\pi<1$ (in the notation of the preceding problem), there exist $\theta_1$ and x such that (a) $H_0$ is rejected when $X=x$ but (b) $P(H_0\mid x)$ is arbitrarily close to 1. (ii) The paradox of part (i) is due to …
In the notation of Section 3.2, consider the problem of testing $H_0:P=P_0$ against $H_1:P=P_1$, and suppose that known probabilities $\pi_0=\pi$ and $\pi_1=1-\pi$ can be assigned to $H_0$ and $H_1$ prior to the experiment. (i) The overall probability of an error resulting from the use of a test $\varphi$ is …
Let X be distributed according to $P_\theta$, $\theta\in\Omega$, and let T be sufficient for $\theta$. If $\varphi(X)$ is any test of a hypothesis concerning $\theta$, then $\psi(T)$ given by $\psi(t)=E[\varphi(X)\mid t]$ is a test depending on T only, and its power function is identical with that of $\varphi(X)$.
A random variable X has the Pareto distribution $P(c,\tau)$ if its density is $c\tau^c/x^{c+1}$, $0<\tau<x$, $0<c$. (i) Show that this defines a probability density. (ii) If X has distribution $P(c,\tau)$, then $Y=\log X$ has exponential distribution $E(\xi,b)$ with $\xi=\log\tau$, $b=1/c$. (iii) If $X_1,\dots,X_n$ is a … to obtain UMP tests of (a) $H:\tau=\tau_0$ against $\tau\ne\tau_0$ when b is known; (b) $H:c=c_0$, $\tau=\tau_0$ against $c>c_0$, $\tau<\tau_0$.
Let the distribution of X be given by $P_\theta(X=0)=\theta$, $P_\theta(X=1)=2\theta$, $P_\theta(X=2)=0.9-2\theta$, $P_\theta(X=3)=0.1-\theta$, where $0<\theta<0.1$. For testing $H:\theta=0.05$ against $\theta>0.05$ at level $\alpha=0.05$, determine which of the following tests (if any) is UMP: (i) $\varphi(0)=1$, $\varphi(1)=\varphi(2)=\varphi(3)=0$; (ii) $\varphi(1)=0.5$, $\varphi(0)=\varphi(2)$ …
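A small sketch that evaluates such candidate tests numerically by computing size at $\theta=0.05$ and power at a few alternatives. Since the statement of test (ii) is truncated above, the code assumes $\varphi(0)=\varphi(2)=\varphi(3)=0$ for it; the alternative values of $\theta$ are illustrative.

```python
# Sketch: size and power of the candidate tests for the four-point family
# P_theta = (theta, 2*theta, 0.9 - 2*theta, 0.1 - theta) on x = 0, 1, 2, 3.
import numpy as np

def pmf(theta):
    return np.array([theta, 2 * theta, 0.9 - 2 * theta, 0.1 - theta])

tests = {
    "(i)  phi(0)=1":   np.array([1.0, 0.0, 0.0, 0.0]),
    "(ii) phi(1)=0.5": np.array([0.0, 0.5, 0.0, 0.0]),  # assumed: 0 elsewhere
}
for name, phi in tests.items():
    size = phi @ pmf(0.05)                                   # E_theta0[phi]
    powers = [(th, phi @ pmf(th)) for th in (0.06, 0.08, 0.09)]
    print(name, "size:", round(size, 4), "power:", powers)
```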
Let $P_0$, $P_1$, $P_2$ be the probability distributions assigning to the integers $1,\dots,6$ the following probabilities: $P_0$: 0.03, 0.02, 0.02, 0.01, 0, 0.92; $P_1$: 0.06, 0.05, 0.08, 0.02, 0.01, 0.78; $P_2$: 0.09, 0.05, 0.12, 0, 0.02, 0.72. Determine whether there exists a level-$\alpha$ test of $H:P=P_0$ which is UMP against the …
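As a starting point, a sketch that computes the likelihood ratios of $P_1$ and $P_2$ against $P_0$; by the Neyman-Pearson lemma, a UMP test can exist only where a single rejection region is most powerful against both alternatives, which can be read off from these orderings.

```python
# Sketch: likelihood ratios p1/p0 and p2/p0 for the three distributions;
# compare the induced orderings of the sample points.
import numpy as np

P0 = np.array([0.03, 0.02, 0.02, 0.01, 0.00, 0.92])
P1 = np.array([0.06, 0.05, 0.08, 0.02, 0.01, 0.78])
P2 = np.array([0.09, 0.05, 0.12, 0.00, 0.02, 0.72])

with np.errstate(divide="ignore"):   # P0 puts mass 0 on the point x = 5
    print("p1/p0:", P1 / P0)         # ordering of points against P1
    print("p2/p0:", P2 / P0)         # ordering against P2 differs at some points
```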
In the proof of Theorem 3.2.1(i), consider the set of c satisfying $\alpha(c)\le\alpha\le\alpha(c-0)$. If there is only one such c, it is unique; otherwise, there is an interval of such values $[c_1,c_2]$. Argue that, in this case, if $\alpha(c)$ is continuous at $c_2$, then $P_i(C)=0$ for $i=0,1$, where $C=\{x: p_0(x)>$ …
UMP test for exponential densities. Let $X_1,\dots,X_n$ be a sample from the exponential distribution $E(a,b)$ of Problem 1.18, and let $X_{(1)}=\min(X_1,\dots,X_n)$. (i) Determine the UMP test for testing $H:a=a_0$ against $K:a\ne a_0$ when b is assumed known. (ii) The power of any MP level-$\alpha$ test of $H:a=a_0$ …
UMP test for $U(0,\theta)$. Let $X=(X_1,\dots,X_n)$ be a sample from the uniform distribution on $(0,\theta)$. (i) For testing $H:\theta\le\theta_0$ against $K:\theta>\theta_0$, any test is UMP at level $\alpha$ for which $E_{\theta_0}\varphi(X)=\alpha$, $E_\theta\varphi(X)\le\alpha$ for $\theta\le\theta_0$, and $\varphi(x)=1$ when $\max(x_1,\dots,x_n)>\theta_0$. (ii) For testing $H:\theta$ …
Let $\Omega$ be the natural parameter space of the exponential family (2.35), and for any fixed $t_{r+1},\dots,t_k$ $(r<k)$ let $\Omega_{\theta_1\dots\theta_r}$ be the natural parameter space of the family of conditional distributions given $T_{r+1}=t_{r+1},\dots,T_k=t_k$. (i) Then $\Omega_{\theta_1,\dots,\theta_r}$ contains the projection of $\Omega$ onto $(\theta_1,\dots,\theta_r)$ …
For any $\theta$ which is an interior point of the natural parameter space, the expectations and covariances of the statistics $T_j$ in the exponential family (2.35) are given by $E[T_j(X)]=-\frac{\partial\log C(\theta)}{\partial\theta_j}$ $(j=1,\dots,k)$ and $E[T_i(X)T_j(X)]-E[T_i(X)]\,E[T_j(X)]=-\frac{\partial^2\log C(\theta)}{\partial\theta_i\,\partial\theta_j}$ $(i,j=1,\dots,k)$.
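A symbolic spot check of these identities for the Poisson family written in exponential form, $p_\theta(x)=C(\theta)e^{\theta x}/x!$ with $\theta=\log\lambda$ and $C(\theta)=e^{-e^\theta}$; this is an illustrative special case, not a proof.

```python
# Sketch: verify E[T] = -d log C/d theta and Var[T] = -d^2 log C/d theta^2
# for the Poisson family in natural form, where both should equal lambda.
import sympy as sp

theta = sp.symbols("theta")
logC = -sp.exp(theta)            # log C(theta) for the Poisson family

mean = -sp.diff(logC, theta)     # E[T]
var = -sp.diff(logC, theta, 2)   # Var[T]
print(mean, var)                 # both exp(theta), i.e. lambda
```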
Life testing. Let $X_1,\dots,X_n$ be independently distributed with exponential density $(2\theta)^{-1}e^{-x/2\theta}$ for $x\ge0$, and let the ordered X's be denoted by $Y_1\le Y_2\le\cdots\le Y_n$. It is assumed that $Y_1$ becomes available first, then $Y_2$, and so on, and that observation is continued until $Y_r$ has been observed. …
Let $X_i$ $(i=1,\dots,s)$ be independently distributed with Poisson distribution $P(\lambda_i)$, and let $T_0=\sum X_j$, $T_i=X_i$, $\lambda=\sum\lambda_j$. Then $T_0$ has the Poisson distribution $P(\lambda)$, and the conditional distribution of $T_1,\dots,T_{s-1}$ given $T_0=t_0$ is the multinomial distribution (2.34) with $n=t_0$ and $p_i=\lambda_i/\lambda$.
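A simulation sketch consistent with this claim, comparing the conditional means of the counts given $T_0=t_0$ with the multinomial means $t_0\lambda_i/\lambda$; the rates, $t_0$, and sample size are arbitrary choices.

```python
# Sketch: given T0 = sum(X_j) = t0, the counts should behave like a
# multinomial draw with probabilities lambda_i / sum(lambda).
import numpy as np

rng = np.random.default_rng(2)
lam = np.array([1.0, 2.0, 3.0])
X = rng.poisson(lam, size=(200000, 3))

t0 = 6
cond = X[X.sum(axis=1) == t0]                 # condition on T0 = t0
print("empirical E[X_i | T0=6]:", cond.mean(axis=0))
print("multinomial mean t0*p_i :", t0 * lam / lam.sum())  # [1, 2, 3]
```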
For a decision problem with a finite number of decisions, the class of procedures depending on a sufficient statistic T only is essentially complete. [For Euclidean sample spaces this follows from Theorem 2.5.1 without any restriction on the decision space. For the present case, let a decision
If a statistic T is sufficient for $\mathcal{P}$, then for every function f which is $(\mathcal{A},P_\theta)$-integrable for all $\theta\in\Omega$ there exists a determination of the conditional expectation function $E_\theta[f(X)\mid t]$ that is independent of $\theta$. [If $\mathcal{X}$ is Euclidean, this follows from Theorems 2.5.2 and 2.6.1. In general, if …
Pairwise sufficiency. A statistic T is pairwise sufficient for $\mathcal{P}$ if it is sufficient for every pair of distributions in $\mathcal{P}$. (i) If $\mathcal{P}$ is countable and T is pairwise sufficient for $\mathcal{P}$, then T is sufficient for $\mathcal{P}$. (ii) If $\mathcal{P}$ is a dominated family and T is pairwise sufficient for $\mathcal{P}$, then T is sufficient for $\mathcal{P}$. [(i): … that $\frac{dP_0}{d\lambda}=\frac{dP_0}{d\left(\sum_{j=0}^n c_jP_j\right)}\cdot\frac{d\left(\sum_{j=0}^n c_jP_j\right)}{d\lambda}$ is also $\mathcal{A}_0$-measurable. (ii): Let $\lambda=\sum_{j=1}^\infty c_jP_{\theta_j}$ be equivalent to $\mathcal{P}$. Then pairwise sufficiency of T implies for any $\theta_0$ that $dP_{\theta_0}/d(P_{\theta_0}+\lambda)$ and hence $dP_{\theta_0}/d\lambda$ is a measurable function of T.]
Sufficiency of likelihood ratios. Let $P_0$, $P_1$ be two distributions with densities $p_0$, $p_1$. Then $T(x)=p_1(x)/p_0(x)$ is sufficient for $\mathcal{P}=\{P_0,P_1\}$. [This follows from the factorization criterion by writing $p_1=T\cdot p_0$, $p_0=1\cdot p_0$.]
Symmetric distributions. (i) Let $\mathcal{P}$ be any family of distributions of $X=(X_1,\dots,X_n)$ which are symmetric in the sense that $P\{(X_{i_1},\dots,X_{i_n})\in A\}=P\{(X_1,\dots,X_n)\in A\}$ for all Borel sets A and all permutations $(i_1,\dots,i_n)$ of $(1,\dots,n)$. Then the statistic T of Example 2.4.1 is sufficient for $\mathcal{P}$, and … is sufficient. (iii) Let $X_1,\dots,X_n$ be identically and independently distributed according to a continuous distribution $P\in\mathcal{P}$, and suppose that the distributions of $\mathcal{P}$ are symmetric with respect to the origin. Let $V_i=|X_i|$ and $W_i=V_{(i)}$. Then $(W_1,\dots,W_n)$ is sufficient for $\mathcal{P}$.
Let $\mathcal{X}=\mathcal{Y}\times\mathcal{T}$, and suppose that $P_0$, $P_1$ are two probability distributions given by $dP_0(y,t)=f(y)g(t)\,d\mu(y)\,d\nu(t)$, $dP_1(y,t)=h(y,t)\,d\mu(y)\,d\nu(t)$, where $h(y,t)/f(y)g(t)<\infty$. Then under $P_1$ the probability density of Y with respect to $\mu$ is $p_1^Y(y)=f(y)\,E_0\!\left[\frac{h(y,T)}{f(y)g(T)}\right]$ …
(i) Let $\mathcal{P}$ be any family of distributions of $X=(X_1,\dots,X_n)$ such that $P\{(X_i,X_{i+1},\dots,X_n,X_1,\dots,X_{i-1})\in A\}=P\{(X_1,\dots,X_n)\in A\}$ for all Borel sets A and all $i=1,\dots,n$. For any sample point $(x_1,\dots,x_n)$ define $(y_1,\dots,y_n)=(x_i,x_{i+1},\dots,x_n,x_1,\dots,x_{i-1})$, where $x_i=x_{(1)}=\min(x_1,\dots,x_n)$ …
Let $(\mathcal{X},\mathcal{A})$ be a measurable space, and $\mathcal{A}_0$ a $\sigma$-field contained in $\mathcal{A}$. Suppose that for any function T, the $\sigma$-field $\mathcal{B}$ is taken as the totality of sets B such that $T^{-1}(B)\in\mathcal{A}$. Then it is not necessarily true that there exists a function T such that $T^{-1}(B)\in\mathcal{A}_0$. [An example is furnished by …
If $f(x)>0$ for all $x\in S$ and $\mu$ is $\sigma$-finite, then $\int_S f\,d\mu=0$ implies $\mu(S)=0$. [Let $S_n$ be the subset of S on which $f(x)\ge1/n$. Then $\mu(S)\le\lim_n\mu(S_n)$ and $\mu(S_n)\le n\int_{S_n}f\,d\mu\le n\int_S f\,d\mu=0$.]
Radon–Nikodym derivatives. (i) If $\lambda$ and $\mu$ are $\sigma$-finite measures over $(\mathcal{X},\mathcal{A})$ and $\mu$ is absolutely continuous with respect to $\lambda$, then $\int f\,d\mu=\int f\,\frac{d\mu}{d\lambda}\,d\lambda$ for any $\mu$-integrable function f. (ii) If $\lambda$, $\mu$, and $\nu$ are $\sigma$-finite measures over $(\mathcal{X},\mathcal{A})$ such that $\nu$ is absolutely continuous with …
Monotone class. A class $\mathcal{F}$ of subsets of a space is a field if it contains the whole space and is closed under complementation and under finite unions; a class $\mathcal{M}$ is monotone if the union and intersection of every increasing and decreasing sequence of sets of $\mathcal{M}$ is again in $\mathcal{M}$. The smallest monotone …
(i) Let $X_1,\dots,X_n$ be a sample from the uniform distribution $U(0,\theta)$, $0<\theta<\infty$, and let $T=\max(X_1,\dots,X_n)$. Show that T is sufficient, once by using the definition of sufficiency and once by using the factorization criterion and assuming the existence of statistics $Y_i$ satisfying …
In n independent trials with constant probability p of success, let $X_i=1$ or 0 as the ith trial is a success or not. Then $\sum_{i=1}^n X_i$ is minimal sufficient. [Let $T=\sum X_i$ and suppose that $U=f(T)$ is sufficient and that $f(k_1)=\cdots=f(k_r)=u$. Then $P\{T=t\mid U=u\}$ depends on p.]
(i) Let X take on the values $\theta-1$ and $\theta+1$ with probability $\frac12$ each. The problem of estimating $\theta$ with loss function $L(\theta,d)=\min(|\theta-d|,1)$ remains invariant under the transformations $gX=X+c$, $\bar g\theta=\theta+c$, $g^*d=d+c$. Among invariant estimates, those taking on the values $X-1$ … (ii) … need not hold when G is infinite follows by comparing the best invariant estimates of (i) with the estimate $\delta_1(x)$, which is $X+1$ when $X<0$ and $X-1$ when $X\ge0$.
Admissibility of invariant procedures. If a decision problem remains invariant under a finite group, and if there exists a procedure $\delta_0$ that uniformly minimizes the risk among all invariant procedures, then $\delta_0$ is admissible. [This follows from the identity $R(\theta,\delta)=R(\bar g\theta,g^*\delta g^{-1})$ and the …
Admissibility of unbiased procedures. (i) Under the assumptions of Problem 1.10, if among the unbiased procedures there exists one with uniformly minimum risk, it is admissible. (ii) That in general an unbiased procedure with uniformly minimum risk need not be admissible is seen by the following
(i) Let $X_1,\dots,X_n$ be a sample from $N(\xi,\sigma^2)$, and consider the problem of deciding between $\omega_0:\xi<0$ and $\omega_1:\xi\ge0$. If $\bar x=\sum x_i/n$ and $C=(a_1/a_0)^{2/n}$, the likelihood ratio procedure takes decision $d_0$ or $d_1$ as $\frac{\sqrt n\,\bar x}{\sqrt{\sum(x_i-\bar x)^2}}<k$ or $>k$, where $k=\sqrt{C-1}$ if $C>1$ and $k=\sqrt{(1-\dots)}$ …
(i) Let X have probability density $p_\theta(x)$ with $\theta$ one of the values $\theta_1,\dots,\theta_n$, and consider the problem of determining the correct value of $\theta$, so that the choice lies between the n decisions $d_1=\theta_1,\dots,d_n=\theta_n$ with gain $a(\theta_i)$ if $d_i=\theta_i$ and 0 otherwise. Then the Bayes solution (which …
Invariance and minimax. Let a problem remain invariant relative to the groups G, $\bar G$, and $G^*$ over the spaces $\mathcal{X}$, $\Omega$, and D, respectively. Then a randomized procedure $Y_x$ is defined to be invariant if for all x and g the conditional distribution of $Y_x$ given x is the same as that of $g^{*-1}Y_{gx}$. (i) …
Unbiasedness and minimax. Let $\Omega=\Omega_0\cup\Omega_1$ where $\Omega_0$, $\Omega_1$ are mutually exclusive, and consider a two-decision problem with loss function $L(\theta,d_i)=a_i$ for $\theta\in\Omega_j$ $(j\ne i)$ and $L(\theta,d_i)=0$ for $\theta\in\Omega_i$ $(i=0,1)$. (i) Any minimax procedure is unbiased. (ii) The converse of (i) holds provided $P_\theta(A)$ is a …
(i) As an example in which randomization reduces the maximum risk, suppose that a coin is known to be either standard (HT) or to have heads on both sides (HH). The nature of the coin is to be decided on the basis of a single toss, the loss being 1 for an incorrect decision and 0 for a correct one.
Structure of Bayes solutions. (i) Let $\Theta$ be an unobservable random quantity with probability density $\rho(\theta)$, and let the probability density of X be $p_\theta(x)$ when $\Theta=\theta$. Then $\delta$ is a Bayes solution of a given decision problem if for each x the decision $\delta(x)$ is chosen so as to minimize $\int L(\theta,$ …
Unbiasedness in interval estimation. Confidence intervals $I=(L,\bar L)$ are unbiased for estimating $\theta$ with loss function $L(\theta,I)=(\theta-L)^2+(\bar L-\theta)^2$ provided $E\left[\frac12(L+\bar L)\right]=\theta$ for all $\theta$, that is, provided the midpoint of I is an unbiased estimate of $\theta$ in the sense of (1.11).
Relation of unbiasedness and invariance. (i) If $\delta_0$ is the unique (up to sets of measure 0) unbiased procedure with uniformly minimum risk, it is almost invariant. (ii) If $\bar G$ is transitive and $G^*$ commutative, and if among all invariant (almost invariant) procedures there exists a procedure $\delta_0$ with uniformly minimum risk, then it is unbiased.
Let $\mathcal{C}$ be any class of procedures that is closed under the transformations of a group G in the sense that $\delta\in\mathcal{C}$ implies $g^*\delta g^{-1}\in\mathcal{C}$ for all $g\in G$. If there exists a unique procedure $\delta_0$ that uniformly minimizes the risk within the class $\mathcal{C}$, then $\delta_0$ is invariant. If $\delta_0$ is unique only up …