Statistical Decision Theory and Bayesian Analysis, 2nd Edition, by James O. Berger - Solutions
In Example 25, show that, for any constant $b$, the estimator $\delta(x) = x - 1$ if $x < b$, $\delta(x) = x + 1$ if $x \ge b$, is R-better than the best invariant estimator.
Assume that $X \in \mathbb{R}^1$ has density $f(x\mid\theta) = 2\theta/[\pi(e^{\theta x} + e^{-\theta x})]$, where $\theta > 0$. It is desired to estimate $\theta$ under a convex loss. Show that, to determine a minimax estimator, one need only consider estimators which are a function of $|x|$.
Assume that $X \sim \mathcal{B}(1,\theta)$ is observed, on the basis of which it is desired to estimate $\theta$ under loss $L(\theta,a) = |\theta - a|$. Find a minimax estimator of $\theta$.
Assume that $X_1,\ldots,X_n$ is a sample from the $\mathcal{U}(0,\theta)$ distribution. Find the $\pi'$-optimal $100(1-\alpha)\%$ invariant confidence rule, and show that it coincides with the $100(1-\alpha)\%$ HPD credible set with respect to the noninformative prior $\pi(\theta) = 1/\theta$.
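For orientation, here is a sketch of the credible-set half of the problem, under the stated prior $\pi(\theta)=1/\theta$: with $m = \max_i x_i$, the posterior is $\pi(\theta\mid x) \propto \theta^{-(n+1)}$ on $\theta > m$, which is decreasing, so the HPD set is an interval starting at $m$:
$$\int_m^c n\,m^n\,\theta^{-(n+1)}\,d\theta = 1 - (m/c)^n = 1-\alpha \quad\Longrightarrow\quad C(x) = [\,m,\ m\,\alpha^{-1/n}\,].$$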
For a one-dimensional location parameter problem, prove that the $\pi'$-optimal $100(1-\alpha)\%$ invariant confidence rule coincides with the $100(1-\alpha)\%$ HPD credible set with respect to the noninformative prior $\pi(\theta) = 1$.
Assume that $X$ has density (on $(0,\infty)$) $f(x\mid\beta) = 2\beta/[\pi(\beta^2 + x^2)]$, where $\beta > 0$. Find the $\pi'$-optimal 90% invariant confidence rule for $\beta$.
In the situation of Example 20, find the $\pi'$-optimal 90% invariant confidence rule for $\sigma^2$, and compare it to the 90% HPD credible set with respect to the generalized prior $\pi(\sigma^2) = 1/\sigma^2$.
Assume that $X_1,\ldots,X_n$ is a sample from the $\mathcal{U}(\theta - \sqrt{3}\sigma,\ \theta + \sqrt{3}\sigma)$ distribution, and that it is desired to estimate $\alpha\theta + \beta\sigma$ under the loss $L((\theta,\sigma),a) = (\alpha\theta + \beta\sigma - a)^2/\sigma^2$, where $\alpha$ and $\beta$ are given constants. Find the generalized Bayes estimator with respect to the generalized prior density $\pi(\theta,\sigma) = 1/\sigma$.
In the situation of Example 18, find the best invariant estimator for the loss (a) $L((\theta,\sigma),a) = (\alpha\theta + \beta\sigma - a)^2/\sigma^2$; (b) $L((\theta,\sigma),a) = (1 - a/\sigma)^2$; (c) $L((\theta,\sigma),a) = (\theta - a)^2/\sigma^2$.
Let $\mathscr{G}$ be the group of all $(p\times p)$ nonsingular lower triangular matrices (i.e., $g_{ij} = 0$ for $j > i$). Show that the left and right invariant Haar densities (w.r.t. $\prod_{j\le i} dg_{ij}$) are $h^l(g) = \prod_{i=1}^p |g_{ii}|^{-i}$ and $h^r(g) = \prod_{i=1}^p |g_{ii}|^{-(p+1-i)}$: (a) when $p = 2$; (b) for general $p$.
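As a sanity check of the $p = 2$ case (a sketch via the change-of-variables Jacobian): write $g = \begin{pmatrix} a & 0 \\ b & c \end{pmatrix}$. Left multiplication by $h$ with diagonal entries $\alpha, \gamma$ and off-diagonal entry $\beta$ sends $(a, b, c) \mapsto (\alpha a,\ \beta a + \gamma b,\ \gamma c)$, with Jacobian $|\alpha\gamma^2|$, so $h^l(g) = |a|^{-1}|c|^{-2}$ satisfies the left-invariance condition $h^l(hg)\,|\alpha\gamma^2| = h^l(g)$. Right multiplication has Jacobian $|\alpha^2\gamma|$, giving $h^r(g) = |a|^{-2}|c|^{-1}$.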
Let $\mathscr{G}$ be the group of all linear transformations of $\mathbb{R}^p$ or, equivalently, the group of all $(p\times p)$ nonsingular matrices. Show that the left and right invariant Haar densities (w.r.t. $\prod_i\prod_j dg_{ij}$) are $h^l(g) = h^r(g) = |\det g|^{-p}$: (a) when $p = 2$; (b) for general $p$.
If $\mathscr{G}$ is the group of nonsingular $(p\times p)$ diagonal matrices, show that the left and right invariant Haar densities (w.r.t. $\prod_i dg_{ii}$) are $h^l(g) = h^r(g) = 1/|\det g|$.
Let $\mathscr{G}$ be the $p$-dimensional location (or additive) group, and identify each $g_c$ with the point $c \in \mathbb{R}^p$. Show that the left and right invariant Haar densities for this group are $h^l(c) = h^r(c) = 1$.
(The following three problems deal with groups, $\mathscr{G}$, of matrix transformations of $\mathbb{R}^p$; each transformation is identified with its matrix.)
Let $\mathscr{G}$ be the group of one-dimensional scale transformations, and identify each $g_c$ with the point $c \in (0,\infty)$. Show that the left and right invariant Haar densities for this group are $h^l(c) = h^r(c) = 1/c$.
Prove Result 2.
Calculate J in Example 17.
Assume that $X_1$ and $X_2$ are independent observations from a common density $f$. It is desired to test $H_0$: $f$ is $\mathcal{N}(\theta,1)$ versus $H_1$: $f$ is $\mathcal{C}(\theta,1)$, under "0-1" loss. Show that the uniformly most powerful invariant tests reject $H_0$ if $|X_1 - X_2| > K$ and accept $H_0$ otherwise, where $K$ is a constant determined by the size of the test.
Assume that $\mathscr{X} = \mathbb{R}^n$, and let $\mathscr{G}$ be the group of orthogonal transformations of $\mathscr{X}$ (see Exercise 5). Show that $T(x) = \sum_{i=1}^n x_i^2$ is a maximal invariant.
Verify that T(x) in Example 14 is indeed a maximal invariant.
Verify that T(x) in Example 13 is indeed a maximal invariant.
Assume that $X_1,\ldots,X_n$ are positive random variables with a joint density of the form $\theta^{-n} f(x_1/\theta,\ldots,x_n/\theta)$, where $\theta > 0$ is an unknown scale parameter. It is desired to estimate $\theta$ under a loss of the form $L(\theta,a) = W(a/\theta)$.
Assume that $X_1,\ldots,X_n$ is a sample from the $\mathcal{N}(0,\sigma^2)$ distribution, where $\sigma > 0$. (a) Find the best invariant estimator of $\sigma^2$ for the loss $L(\sigma^2,a) = (1 - a/\sigma^2)^2$. (b) Find the best invariant estimator of $\sigma$ for the loss $L(\sigma,a) = (1 - a/\sigma)^2$.
Assume that $X_1,\ldots,X_n$ is a sample from the $\mathcal{U}(0,\theta)$ distribution, where $\theta > 0$. Find the best invariant estimator of $\theta$ for the loss (a) $L(\theta,a) = (1 - a/\theta)^2$; (b) $L(\theta,a) = |1 - a/\theta|$; (c) $L(\theta,a) = 0$ if $c^{-1} \le a/\theta \le c$, $L(\theta,a) = 1$ otherwise.
Assume that $X_1,\ldots,X_n$ is a sample from the $\mathcal{P}a(\theta, 1)$ distribution, where $\theta > 0$. Find the best invariant estimator of $\theta$ for the loss (a) $L(\theta,a) = (1 - a/\theta)^2$, when $n \ge 3$; (b) $L(\theta,a) = (\log a - \log\theta)^2$; (c) $L(\theta,a) = |1 - a/\theta|$, when $n \ge 2$.
Assume that $X_1,\ldots,X_n$ is a sample from the $\mathcal{G}(\alpha,\beta)$ distribution, with $\alpha$ known and $\beta$ unknown. Find the best invariant estimator of $\beta$ for the loss (a) $L(\beta,a) = (1 - a/\beta)^2$; (b) $L(\beta,a) = (a/\beta) - 1 - \log(a/\beta)$.
Assume that $X_1,\ldots,X_n$ is a sample from the half-normal distribution, which has density $f(x\mid\theta) = (2/\pi)^{1/2}\exp\{-(x-\theta)^2/2\}\,1_{(\theta,\infty)}(x)$. Show that the best invariant estimator of $\theta$ under squared-error loss is
$$\delta(x) = \bar{x} - \frac{\exp\{-n[(\min_i x_i) - \bar{x}]^2/2\}}{(2n\pi)^{1/2}\,P(Z \le \sqrt{n}\,[(\min_i x_i) - \bar{x}])},$$
where $Z \sim \mathcal{N}(0,1)$.
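One route to this formula (a sketch): since $\sum_i (x_i-\theta)^2 = \sum_i (x_i-\bar{x})^2 + n(\bar{x}-\theta)^2$, the likelihood is proportional to $\exp\{-n(\theta-\bar{x})^2/2\}\,1_{(-\infty,\,m)}(\theta)$ with $m = \min_i x_i$, so the best invariant (Pitman) estimator is the mean of a $\mathcal{N}(\bar{x}, 1/n)$ distribution truncated to $\theta < m$:
$$\delta(x) = \bar{x} - \frac{1}{\sqrt{n}}\,\frac{\varphi(\sqrt{n}(m-\bar{x}))}{\Phi(\sqrt{n}(m-\bar{x}))},$$
which is the displayed expression after writing out the standard normal density $\varphi$.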
Assume that $X_1,\ldots,X_n$ is a sample from the $\mathcal{U}(\theta - \frac{1}{2},\ \theta + \frac{1}{2})$ distribution. Find the best invariant estimator of $\theta$ for the loss (a) $L(\theta,a) = (\theta - a)^2$; (b) $L(\theta,a) = |\theta - a|$; (c) $L(\theta,a) = 0$ if $|\theta - a| \le c$, $L(\theta,a) = 1$ if $|\theta - a| > c$.
Assume that $X_1,\ldots,X_n$ is a sample from a $\mathcal{N}(\theta,1)$ distribution, and that it is desired to estimate $\theta$. Find the best invariant estimator of $\theta$ for the loss (a) $L(\theta,a) = |\theta - a|^r$, $r \ge 1$; (b) $L(\theta,a) = 0$ if $|\theta - a| \le c$, $L(\theta,a) = 1$ if $|\theta - a| > c$.
Assume that $X$ has a symmetric unimodal density of the form $f(x-\theta)$ on $\mathbb{R}^1$ (so that $f(z) = f(-z)$ and $f(z)$ is nonincreasing for $z \ge 0$), and that it is desired to estimate $\theta$ under a loss of the form $L(\theta,a) = W(\theta - a)$, where $W(z) = W(-z)$ is nondecreasing for $z \ge 0$. Prove that $\delta(x) = x$ is a best invariant estimator.
Assume that $X$ has a density of the form $f(x-\theta)$ on $\mathbb{R}^1$, and that it is desired to estimate $\theta$ under a strictly convex loss which depends only on $\theta - a$. Prove that $\delta(x) = x$ is a best invariant estimator.
Assume that $X_1,\ldots,X_n$ is a sample of size $n$ from the density $f(x\mid\theta) = \exp\{-(x-\theta)\}\,1_{(\theta,\infty)}(x)$. Find the best invariant estimator of $\theta$ for the loss function (a) $L(\theta,a) = (\theta - a)^2$; (b) $L(\theta,a) = |\theta - a|$; (c) $L(\theta,a) = 0$ if $|\theta - a| \le c$, $L(\theta,a) = 1$ if $|\theta - a| > c$.
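As a sketch of the computation in part (a): the likelihood is proportional to $e^{n\theta}\,1_{(-\infty,\,m)}(\theta)$ with $m = \min_i x_i$, so the Pitman estimator is
$$\delta(x) = \frac{\int_{-\infty}^{m} \theta\, e^{n\theta}\,d\theta}{\int_{-\infty}^{m} e^{n\theta}\,d\theta} = m - \frac{1}{n}.$$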
Assume that $X = (X_1, X_2)$ has a density (on $\mathbb{R}^2$) of the form $f(x_1 - \theta_1,\ x_2 - \theta_2)$, and that it is desired to estimate $(\theta_1 + \theta_2)/2$ under loss $L((\theta_1,\theta_2),a) = (\frac{1}{2}(\theta_1 + \theta_2) - a)^2$. Here $\mathscr{A} = \mathbb{R}^1$ and $\Theta = \mathbb{R}^2$. (a) Show that the problem is invariant under the group $\mathscr{G} = \{g_c: g_c(x) = (x_1 + c_1,\ x_2 + c_2)$, where $c = (c_1, c_2) \in \mathbb{R}^2\}$. …
Assume that $X \sim \mathcal{N}(\theta + 2,\ 1)$, and that it is desired to estimate $\theta$ under squared-error loss. Find the best invariant estimator of $\theta$.
Assume that $X \sim \mathcal{C}(\theta,\beta)$, where $\beta$ is known, and that it is desired to estimate $\theta$ under loss $L(\theta,a) = 0$ if $|\theta - a| \le c$, $L(\theta,a) = 1$ if $|\theta - a| > c$. Find the best invariant estimator of $\theta$.
Assume that $X \sim \mathcal{N}_p(\theta, I_p)$, where $\Theta = \mathbb{R}^p$. The loss function is of the form $L(\theta,a) = W(|\theta|, a)$, so that it depends on $\theta$ only through its length. Let $\mathscr{G}$ be the group of orthogonal transformations of $\mathbb{R}^p$. (Thus $\mathscr{G} = \{g_O: g_O(x) = Ox$, where $O$ is an orthogonal $(p\times p)$ matrix$\}$.) (a) Verify that $\mathscr{G}$ is indeed a group of transformations. …
Assume that $X \sim \mathcal{N}_n(\theta_1\mathbf{1},\ \theta_2^2 I)$, where $\mathbf{1} = (1,1,\ldots,1)'$. The parameter space is $\Theta = \{(\theta_1,\theta_2): \theta_1 \in \mathbb{R}^1$ and $\theta_2 > 0\}$. It is desired to test $H_0: \theta_1 \le 0$ versus $H_1: \theta_1 > 0$ under "0-1" loss. Let $\mathscr{G} = \{g_c: g_c(x) = cx$, where $c > 0\}$. (a) Show that the decision problem is invariant under $\mathscr{G}$, and find $\bar{\mathscr{G}}$ and $\tilde{\mathscr{G}}$. (b) Find the form of the invariant tests. …
Assume that $X \sim \mathcal{N}_n(\theta_1\mathbf{1},\ \theta_2^2 I)$, where $\mathbf{1} = (1,1,\ldots,1)'$. The parameter space is $\Theta = \{(\theta_1,\theta_2): \theta_1 \in \mathbb{R}^1$ and $\theta_2 > 0\}$. Let $\mathscr{A} = \mathbb{R}^1$ and $L(\theta,a) = (\theta_1 - a)^2/\theta_2^2$. Finally, let $\mathscr{G} = \{g_{b,c}: g_{b,c}(x) = bx + c\mathbf{1}$, where $c \in \mathbb{R}^1$ and $b \ne 0\}$. (a) Show that $\mathscr{G}$ is a group of transformations. (b) Show that the decision problem is invariant under $\mathscr{G}$. …
Verify that $\bar{\mathscr{G}}$ and $\tilde{\mathscr{G}}$ are groups of transformations of $\Theta$ and $\mathscr{A}$, respectively.
Verify that the permutation group, given in Example 6, is indeed a group of transformations.
Prove that, if $\Theta = \{\theta_1, \theta_2\}$ and the risk set $S$ is bounded from below and closed from below, then the minimax regret rule is an equalizer rule and is unique.
Assume that an $S$-game has risk set $S = \{x \in \mathbb{R}^2: (x_1 - 10)^2 + (x_2 - 1)^2 \le 4\}$. (a) Find the minimax strategy. (b) Convert $S$ into $S^*$, the corresponding risk set if regret loss is used, and find the minimax regret strategy.
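For orientation on (a), assuming the disk really is centered at $(10, 1)$ with radius 2 as read above: every point of $S$ has $x_2 \le 3 < 8 \le x_1$, so $\max(x_1, x_2) = x_1$ on $S$, and the minimax risk point is the leftmost point $(8, 1)$, with value 8.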
Discuss whether or not the minimax rule in the situation of Exercise 40 is reasonable from a Bayesian viewpoint.
Discuss whether or not the minimax rule in the situation of Exercise 38 is reasonable from a Bayesian viewpoint.
In Exercise 65 of Chapter 4, assume that no data, $x$, is available. (a) Using the given loss matrix, find the minimax rule for blood classification, and find the least favorable prior distribution. (Hint: Consider the subgame involving only the choices A, B, and O.) (b) Find the Bayes rule for this no-data problem. …
For the situation of Example 25, show that $\delta_0(x) = 0$ is the unique minimax rule (among the class $\mathscr{D}^*$ of all randomized rules).
Verify Corollary 1.
Suppose $p = 4$, $\Sigma = \mathrm{diag}\{16, 8, 4, 2\}$, $Q = I$, $\mu = 0$, $A = \mathrm{diag}\{1, 24, 1, 1\}$, and that $x = (3, -12, 1, -1)'$ is observed. (a) Calculate $\delta^M$. (Be careful about the order of the indices.) (b) Calculate the estimator $\delta$ of (5.32). (c) Discuss the shrinkage behavior of $\delta^M$ and $\delta$. (d) Suppose that $Q$ were $\Sigma^{-1}(\Sigma + A)\Sigma^{-1}$. Calculate $\delta^M$ and $\delta$ for this choice. …
Suppose that $X \sim \mathcal{N}_p(\theta, \sigma^2 I)$, $\sigma^2$ known and $p \ge 4$, and that $L(\theta,\delta) = |\theta - \delta|^2$ (i.e., $Q = I$). (a) Show that the empirical Bayes estimator defined in (4.33) through (4.35), namely $\delta(x) = x - \min\{1,\ (p-3)\sigma^2/\sum_i (x_i - \bar{x})^2\}\,(x - \bar{x}\mathbf{1})$, is minimax. (b) Show that the hierarchical Bayes estimator defined in (4.60) and (4.63) is also minimax.
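A quick numerical check of (a) is easy to run. This is a sketch, not a proof: it assumes $\sigma^2 = 1$ and the positive-part, shrink-toward-the-mean form reconstructed above, and the arbitrary $\theta$ used here is purely illustrative (the exact constants should be checked against (4.33)-(4.35) in the text).

```python
# Monte Carlo check (not a proof) that the shrink-toward-the-mean
# estimator has risk below p = E||X - theta||^2 under squared-error loss.
# Assumes sigma^2 = 1 and the positive-part form sketched above.
import numpy as np

rng = np.random.default_rng(0)
p, reps = 6, 100_000
theta = np.array([2.0, -1.0, 0.5, 0.0, 1.5, -0.5])   # arbitrary test point

x = theta + rng.standard_normal((reps, p))           # X ~ N_p(theta, I)
xbar = x.mean(axis=1, keepdims=True)
S = ((x - xbar) ** 2).sum(axis=1, keepdims=True)     # S = sum_i (x_i - xbar)^2
shrink = np.minimum(1.0, (p - 3) / S)                # positive-part factor
delta = x - shrink * (x - xbar)                      # shrink toward xbar

risk_mle = ((x - theta) ** 2).sum(axis=1).mean()     # should be close to p
risk_eb = ((delta - theta) ** 2).sum(axis=1).mean()
print(f"risk of X: {risk_mle:.3f}, risk of delta: {risk_eb:.3f}")
```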
Verify that the estimator in (5.32) is minimax.
Verify that (5.28) is an unbiased estimator of risk. (Hint: Follow the hint in Exercise 53, but replace the integration by parts with an appropriate summation by parts.)
Verify that (5.26) is an unbiased estimator of risk: (a) in the special case where $Q = \Sigma = I$; (b) in the general case. (Hint: First make a linear transformation, so that $\Sigma^* = I$. Then expand the risk as $R(\theta^*, \delta^*) = E_{\theta^*}\{(X^* - \theta^*)'Q^*(X^* - \theta^*) + 2\gamma^*(X^*)'Q^*(X^* - \theta^*) + \gamma^*(X^*)'Q^*\gamma^*(X^*)\}$. Finally, verify and use the fact that $E_{\theta^*}[\gamma^*(X^*)'Q^*(X^* - \theta^*)] = E_{\theta^*}[\sum_{i,j} Q^*_{ij}\,\partial\gamma_j^*(X^*)/\partial x_i^*]$.)
Suppose that $X \sim \mathcal{N}_p(\theta, \Sigma)$ and $L(\theta,\delta) = (\theta - \delta)'Q(\theta - \delta)$, where $\Sigma$ and $Q$ are known $(p\times p)$ positive definite matrices. Consider the linearly transformed problem defined by $X^* = BX$, $\theta^* = B\theta$, $\delta^* = B\delta$, and $L^*(\theta^*,\delta^*) = (\theta^* - \delta^*)'Q^*(\theta^* - \delta^*)$, where $B$ is a $(p\times p)$ nonsingular matrix and $Q^* = (B')^{-1}QB^{-1}$. Show that: (a) $R^*(\theta^*, \delta^*) = R(\theta, \delta)$, so that the transformed problem is equivalent to the original one. …
Assume that $X$ is an observation from the density $f(x\mid\theta) = e^{-(x-\theta)}I_{(\theta,\infty)}(x)$ and that the parameter space is $\Theta = \{1, 2, 3\}$. It is desired to classify $X$ as arising from $f(x\mid 1)$, $f(x\mid 2)$, or $f(x\mid 3)$, under a "0-1" loss (zero for the correct decision, one for an incorrect decision). (a) Find the form of the Bayes rules. …
Let $X \sim \mathcal{P}(\theta)$, where $\Theta = \{1, 2\}$, $\mathscr{A} = \{a_1, a_2, a_3\}$, and the loss matrix is

          a₁    a₂    a₃
  θ₁       0    20    10
  θ₂      50     0    20

(a) Show that the Bayes rules are of the following form: decide $a_1$ if $x < k - (\log 3)/(\log 2)$, decide $a_3$ if $k - (\log 3)/(\log 2) < x < k$, and decide $a_2$ if $x > k$. …
Let $X \sim \mathcal{P}(\theta)$, and assume that it is desired to test $H_0: \theta = 1$ versus $H_1: \theta = 2$ under "0-1" loss. Find the form of the Bayes tests for this problem. Using these Bayes tests, determine an adequate number of points in $\lambda(S)$ (the lower boundary of the risk set) and sketch this lower boundary. Find the minimax test and the least favorable prior distribution.
In Exercise 47, assume that we are allowed a third possible action, namely deciding that the experiment is inconclusive. If the loss for this decision is $l$ (where $l < 1$), …
We are given two coins and are told that one of them is fair but that the other is biased, having a known probability $p > \frac{1}{2}$ of falling heads when flipped. The problem is to decide which coin is biased, on the basis of $n$ tosses of each coin. Assuming "0-1" loss, determine the minimax procedure.
Let $(X_1,\ldots,X_{100})$ be a sample from a $\mathcal{N}(\theta, 25)$ distribution. It is desired to test $H_0: \theta = 0$ versus $H_1: \theta = 2$ under "$0$-$K_i$" loss, where $K_0 = 10$ and $K_1 = 25$. Obtain the minimax procedure and compute the least favorable prior distribution.
Assume $X$ has density $f(x\mid\theta) = 2^{-(x+\theta)}$, for $x = 1-\theta,\ 2-\theta,\ \ldots$. It is desired to test $H_0: \theta = 0$ versus $H_1: \theta = 1$ under "0-1" loss. (a) Sketch the risk set $S$. (b) Find a minimax decision rule. (c) Find a least favorable prior distribution. (d) Find a nonrandomized minimax decision rule.
Assume $X \sim \mathcal{B}(10, \theta)$ and that it is desired to test $H_0: \theta = 0.4$ versus $H_1: \theta = 0.6$ under "0-1" loss. Obtain the minimax procedure and compute the least favorable prior distribution.
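Numerically, the minimax test here is the equalizer test: reject for large $x$, randomizing at the boundary point so that the two error probabilities match. A minimal sketch of that search (a numerical check, not the intended analytic derivation):

```python
# Sketch: equalizer (minimax) test for H0: theta = 0.4 vs H1: theta = 0.6
# with X ~ Binomial(10, theta) under 0-1 loss. Rejects for x > k, with
# randomization probability g at x = k chosen to equate the two risks.
from scipy.stats import binom

n, t0, t1 = 10, 0.4, 0.6
for k in range(n + 1):
    # g solves: P0(X > k) + g*P0(X = k) = P1(X < k) + (1 - g)*P1(X = k)
    num = binom.cdf(k - 1, n, t1) + binom.pmf(k, n, t1) - binom.sf(k, n, t0)
    den = binom.pmf(k, n, t0) + binom.pmf(k, n, t1)
    g = num / den
    if 0.0 <= g <= 1.0:
        risk = binom.sf(k, n, t0) + g * binom.pmf(k, n, t0)
        print(f"reject if x > {k}, with prob {g:.3f} if x = {k}; risk {risk:.4f}")
        break
```

By the symmetry of the two hypotheses here, the search lands on the cutoff $k = 5$ with randomization probability $\frac{1}{2}$.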
For the following situations involving the testing of simple hypotheses, sketch the risk set $S$, and find the minimax rule and the least favorable prior distribution. Assume the loss is "0-1" loss. (a) $H_0: X \sim \mathcal{U}(0,1)$ versus $H_1: X \sim \mathcal{U}(\frac{1}{2}, \frac{3}{2})$. (b) $H_0: X \sim \mathcal{B}(2, \theta_0)$ versus $H_1: X \sim \mathcal{B}(2, \theta_1)$. (c) $H_0: X \sim \mathcal{G}(1,1)$ versus …
Give an example, for finite $\Theta$, in which the set of loss points $W$ is closed but not bounded, and in which the risk set $S$ is not closed. (Hint: Find a set $W$ which is closed but unbounded, for which the convex hull of $W$ is not closed.)
(Blackwell and Girshick (1954).) Let $X_1,\ldots,X_n$ be a sample from a $\mathcal{N}(\theta, \sigma^2)$ distribution. It is desired to test $H_0: \sigma^2 = \sigma_0^2$, $-\infty < \theta < \infty$, versus $H_1: \sigma^2 = \sigma_1^2$, $-\infty < \theta < \infty$. …
Assume $X \sim \mathcal{B}(1,\theta)$, and that it is desired to estimate $\theta$ under loss $L(\theta,a) = |\theta - a|$. Find the minimax rule, the least favorable prior distribution, and the value of the game.
(Ferguson (1967).) Let $\Theta$ be the set of all distributions over $[0,1]$, let $\mathscr{A} = [0,1]$, and let $L(\theta,a) = (\mu_\theta - a)^2$, where $\mu_\theta$ is the mean of the distribution $\theta$. (Note that $\theta$ refers to the entire distribution, not just a parameter value.) Let $X_1,\ldots,X_n$ be a sample of size $n$ from the distribution $\theta$, and let $\bar{X} = \frac{1}{n}\sum_{i=1}^n X_i$. …
(Ferguson (1967).) Let $\Theta = [0,1)$, $\mathscr{A} = [0,1]$, $X \sim \mathcal{G}e(1-\theta)$, and $L(\theta,a) = (\theta - a)^2/(1-\theta)$. (a) Write the risk function, $R(\theta,\delta)$, of a nonrandomized estimator $\delta$ as a power series in $\theta$. (b) Show that the only nonrandomized equalizer rule is $\delta_0(i) = \frac{1}{2}$ if $i = 0$, $\delta_0(i) = 1$ if $i \ge 1$. (c) Show that a nonrandomized rule is Bayes …
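A sketch of (a) and (b), assuming $P_\theta(X = i) = (1-\theta)\theta^i$ for $i = 0, 1, 2, \ldots$: the $(1-\theta)$ factors cancel, giving
$$R(\theta,\delta) = \sum_{i\ge 0} (\theta - \delta(i))^2\,\theta^i = \delta(0)^2 + \bigl(\delta(1)^2 - 2\delta(0)\bigr)\theta + \sum_{k\ge 2}\bigl(1 - 2\delta(k-1) + \delta(k)^2\bigr)\theta^k.$$
An equalizer rule must kill every non-constant coefficient, so $\delta(1)^2 = 2\delta(0)$ and $\delta(k)^2 = 2\delta(k-1) - 1$ for $k \ge 2$; since each $\delta(k) \in [0,1]$, the recursion can only survive at the fixed point $\delta(k) = 1$, forcing $\delta(1) = 1$ and $\delta(0) = \frac{1}{2}$, with constant risk $\frac{1}{4}$.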
Let $\Theta = (0,1)$, $\mathscr{A} = [0,1]$, $X \sim \mathcal{B}(n,\theta)$, and $L(\theta,a) = (\theta - a)^2/[\theta(1-\theta)]$. Show that $\delta(x) = x/n$ is a minimax estimator of $\theta$, and find the least favorable prior distribution.
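The constant-risk computation behind this claim is one line:
$$R(\theta,\ x/n) = \frac{E_\theta(\theta - X/n)^2}{\theta(1-\theta)} = \frac{\theta(1-\theta)/n}{\theta(1-\theta)} = \frac{1}{n};$$
the identical calculation in the next exercise shows that $\delta_0(x) = x$ has constant risk 1 in the weighted Poisson problem.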
Let $\Theta = (0,\infty)$, $\mathscr{A} = [0,\infty)$, $X \sim \mathcal{P}(\theta)$, and $L(\theta,a) = (\theta - a)^2/\theta$. (a) Show that $\delta_0(x) = x$ is an equalizer rule. (b) Show that $\delta_0$ is generalized Bayes with respect to $\pi(\theta) = 1$ on $\Theta$. (c) Show that $\delta_0$ is minimax. (d) Verify the minimaxity of $\delta_0(x) = x$ in Example 21.
Assume $X \sim \mathcal{N}_p(\theta, \Sigma)$, $\Sigma$ known, and that it is desired to estimate $\theta$ under a quadratic loss. Prove that $\delta(x) = x$ is minimax, and that it is an equalizer rule.
Assume that the waiting time, $X$, for a bus has a $\mathcal{U}(0,\theta)$ distribution. It is desired to test $H_0: \theta \le 10$ versus $H_1: \theta > 10$. The loss in incorrectly deciding that $\theta \le 10$ is $(\theta - 10)^2$, while the loss in incorrectly concluding that $\theta > 10$ is 10. The loss of a correct decision is zero. (a) If $n$ independent …
An IQ test score $X \sim \mathcal{N}(\theta, 100)$ is to be observed, on the basis of which it is desired to test $H_0: \theta \le 100$ versus $H_1: \theta > 100$. The loss in incorrectly concluding that $\theta \le 100$ is 3, while the loss in incorrectly deciding that $\theta > 100$ is 1. A correct decision loses zero. What is the minimax decision if $x = \ldots$ is observed?
Suppose that $L(\theta,a)$ is strictly convex in $a$ for each $\theta$. If $\delta_0$ is an equalizer rule which is also admissible, prove that $\delta_0$ is the unique minimax rule.
Prove Theorem 18.
Prove Theorem 17.
For the situation of Exercise 10, find $\varepsilon$-minimax strategies for II.
Subsection 5.3.2
(a) Prove that if $S_1$ and $S_2$ are closed, disjoint, convex subsets of $\mathbb{R}^m$, and at least one of them is bounded, then there exists a vector $\xi \in \mathbb{R}^m$ such that $\sup_{s \in S_1} \xi's < \inf_{s' \in S_2} \xi's'$. (b) Find a counterexample to the above result if both sets are unbounded.
Prove Theorem 10 of Subsection 5.2.4.
Give an example of a finite game in which a nonrandomized strategy $a_0$ is a minimax strategy and is Bayes with respect to $\pi$, yet $\pi$ is not maximin.
(a) Prove that, if $S$ is closed and bounded, a Bayes risk point always exists. (b) Give an example in which $S$ is closed and bounded from below, and yet a Bayes risk point does not exist for at least one $\pi$.
Let $K = \{x \in \mathbb{R}^2: (x_1 - 8)^2 + (x_2 - 8)^2 \le 100\}$, $Q = \{x \in \mathbb{R}^2: x_1 \ge 0$ and $x_2 \ge 0\}$, and $P = \{(0, 2)'\}$. Consider the $S$-game in which $S = (K \cap Q) - P$. (a) Graph the set $S$, and describe the set (i) of admissible risk points; (ii) of Bayes risk points; (iii) of Bayes risk points against priors $\pi$ which have no zero coordinates. (b) Show …
For the following $S$-games find: (i) the value; (ii) the minimax risk point; (iii) the maximin strategy and the tangent line to $S$ at the minimax point; and (iv) the Bayes risk point with respect to $\pi = (\frac{1}{3}, \frac{2}{3})'$. (a) $S = \{x \in \mathbb{R}^2: (x_1 - 8)^2 + (x_2 - 3)^2 \le 9\}$. (b) $S = \{x \in \mathbb{R}^2: (x_1 - 10)^2 + (x_2 - 10)^2 \le 400\}$. (c) $S = \ldots$
Consider the finite game with loss matrix

          a₁    a₂    a₃    a₄    a₅
  θ₁       0     4     5     8     2
  θ₂       6     1     8     5     6

(a) Graph the risk set $S$. (b) Find the minimax strategy and the value of the game. (c) Find the maximin strategy, and determine the tangent line to $S$ at the minimax point.
Prove Theorem 9.
Prove Theorem 7.
Prove that, in a finite game, the sets of maximin and minimax strategies are bounded and convex.
There is an interesting card game called liar's poker (also known by a number of less refined names). Consider the following very simplified version of the game. A large deck of cards contains only 2s, 3s, and 4s, in equal proportions. (The deck is so large that, at all stages of the game, the …
Tom and Burgess together inherit an antique pipe, valued at $400. They agree to decide ownership by the method of sealed bids. They each write down a bid and put it in an envelope. They open the envelopes together, and the higher bidder receives the pipe, while paying the amount of his bid to the other. …
The good guys (army G) are engaged in a war with the bad guys (army B). B (player I) must go through mountain pass 1 or pass 2. G (player II) has a choice of three strategies to defend the passes: $a_1$, use all available men to defend pass 1; $a_2$, use all available men to defend pass 2; $a_3$, use half of the men to defend each pass. …
(Blackwell and Girshick (1954).) Solve the following game of hide and seek. II can hide at location A, B, C, or D. Hiding at A is free, hiding at B or C costs 1 unit, and hiding at D costs 2 units. (The cost is to be paid to I at the end of the game.) I can choose to look for II in only one of A, B, C, or D. …
Let $\Theta = \mathscr{A} = \{1, 2, \ldots, m\}$. Assume the loss matrix is given by $L(i,j) = 1$ if $|i - j| = 0$ or 1, and $L(i,j) = 0$ otherwise. Solve the game.
Solve the game with loss matrix

          a₁    a₂    a₃
  θ₁       4     0     2
  θ₂       0     4    12
  θ₃       0     0     0
Solve the game of scissors-paper-stone, in which each player can choose between the strategies scissors (s), paper (p), and stone (r), and the loss matrix is …
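The matrix itself did not survive above, but under the standard convention (paper beats stone, stone beats scissors, scissors beats paper, with loss +1 to the loser, -1 to the winner, 0 for a tie) the game can be solved mechanically as a linear program. A sketch:

```python
# Sketch: solve scissors-paper-stone as a matrix game via linear programming.
# Assumes the standard +1 / -1 / 0 loss convention, not the book's exact matrix.
import numpy as np
from scipy.optimize import linprog

# Loss to player I; rows = I's pure strategies (s, p, r), cols = II's.
A = np.array([[0, -1, 1],
              [1, 0, -1],
              [-1, 1, 0]], dtype=float)

# I seeks a mixed strategy x minimizing v subject to (A'x)_j <= v for all j.
# Decision variables: (x1, x2, x3, v); objective: minimize v.
c = np.array([0, 0, 0, 1.0])
A_ub = np.hstack([A.T, -np.ones((3, 1))])   # A'x - v <= 0, one row per column
b_ub = np.zeros(3)
A_eq = np.array([[1.0, 1.0, 1.0, 0.0]])     # probabilities sum to 1
b_eq = np.array([1.0])
bounds = [(0, None)] * 3 + [(None, None)]   # x >= 0, v free

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("minimax strategy:", res.x[:3], "value:", res.x[3])
# Expected result: the uniform strategy (1/3, 1/3, 1/3) with value 0.
```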
(Blackwell and Girshick (1954).) Consider the following game: I chooses a number $\theta$, where $0 \le \theta \le 1$. II consists of two partners, II$_1$ and II$_2$. II$_1$ observes $\theta$, then chooses a number $z$, where $0 \le z \le 1$. The number $z$ is then told to II$_2$, who proceeds to choose a number $a$, $0 \le a \le 1$, and pays to I the amount $z + |\theta - a|$. II$_1$ and II$_2$ can agree, before the game, on the manner in which II$_1$ will choose $z$.
Assume $\Theta = \mathscr{A} = [0, 1]$. Find minimax and maximin strategies and the value of the game for each of the following losses: (a) $L(\theta,a) = \theta^2 - 2\theta a + a^2/2$; (b) $L(\theta,a) = (\theta - a)^2$; (c) $L(\theta,a) = \theta^2 - \theta + a - a^2$; (d) $L(\theta,a) = a^2 - 2\theta a + 1$; (e) $L(\theta,a) = |\theta - a| - (\theta - a)^2$.
Prove that, if an equalizer strategy is admissible, then it is minimax (or maximin).
In Example 2, find the minimax and maximin strategies and the value of the game.
Prove Theorem 3.
Prove Theorem 1.
For a two-person zero-sum game in which $\mathscr{A}$ is a closed, bounded, convex subset of $\mathbb{R}^m$ and $L(\theta,a)$ is convex in $a$ for each $\theta \in \Theta$, prove that there exists an $a^* \in \mathscr{A}$ such that $\sup_{\theta} L(\theta, a^*) = \overline{V}$ (the upper value of the game).