Statistical Decision Theory and Bayesian Analysis, 2nd Edition, James O. Berger - Solutions
Prove that a unique minimax strategy is admissible.
Prove Lemma 2.
Prove Lemma 1.
In Example 55, verify that any randomized action can be dominated by an action in the interval [0.75, 0.79].
In Example 53, suppose the expert would tend to report p with a standard deviation of ... if θ₁ were true, and would tend to report p with a standard deviation of ... if θ₂ were true. (The expert is mixed up in his reports.) (a) Determine the analog of (4.154) for this situation. (b) Which "expert" would ...
Experts 1, ..., m report estimates X₁, ..., Xₘ for θ. Suppose X = (X₁, ..., Xₘ)′ is N_m(θ1, Σ), where 1 = (1, ..., 1)′ and Σ has diagonal elements 1 and known off-diagonal elements ρ > 0. (Thus ρ reflects the fact that there is a dependence among experts.) Suppose θ is given the noninformative prior π(θ) ...
In Example 50, suppose the four classifications under consideration are (corn, forest, grass, dirt). The likelihood, for the observed x, is the vector l₁ = (f(x|θ₁), f(x|θ₂), f(x|θ₃), f(x|θ₄)) = (0.3, 0.4, 0.6, 0). The topographical information yields (in similar vector form) l₂ = (0, 0.2, 0.6, 0.2). The ...
For the situation of Figure 4.4, suppose that it is known only that 1 ≤ τ ≤ 2 and 0.3 ≤ π₀ ≤ 0.5. Find the range of possible π(θ₀|x).
Utilize Figure 4.4 to sketch a graph of π(θ₀|x) versus ..., when π₀ is fixed at ...
Repeat Exercise 40, assuming that the observations are N(θ, σ²), σ² unknown (instead of σ² = 1), and that g(θ) is C(4.01, 0.34) and σ² has (independently) the noninformative prior π**(σ²) = σ⁻²: (a) Use the (4.150) approximation. (b) Use the approximation to m(x|g₁, π**) in Exercise 144.
If g in (4.145) is a C(μ, τ) density, show, via a Taylor series argument, that m(x|g, π**) ≈ ... (This is very accurate for moderate or large n, (x − μ)², or τ.)
Develop versions of formulas (4.145) through (4.150) when π** is an IG(α, β) distribution.
Using Result 9, approximate the posterior mean and variance in the situation of Exercise 139, when x = 1. (Calculate the posterior mean by defining g(θ) = θ + 2, so that E(θ|x) = E^π(θ|x)[g(θ)] − 2; this ensures that g is effectively positive for the calculation.)
Using Result 9, approximate the posterior mean and variance in the situation of Exercise 109(c).
Suppose π(θ) is a hierarchical prior of the form π(θ) = ∫ π₁(θ|λ)π₂(λ) dλ. Imagine that it is possible to easily generate random variables having densities π₂ and π₁ (for each λ). Give a Monte Carlo approximation to E^π(θ|x)[g(θ)] which does not involve evaluation of π(θ).
Suppose that X ~ N(θ, 0.2) is observed, and that the prior for θ is C(0, 1). (a) If it is possible to generate either normal or Cauchy random variables, describe two methods for calculating the posterior mean by Monte Carlo integration. (b) Which method in (a) is likely to give an accurate answer ...
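As a rough illustration of the two Monte Carlo routes the exercise alludes to, here is a minimal Python sketch: draw θ from the Cauchy prior and weight by the normal likelihood, or draw θ from the likelihood viewed as a density in θ and weight by the prior. The observed x is hypothetical, and 0.2 is treated as the variance.

```python
import numpy as np

rng = np.random.default_rng(0)
x, var = 1.5, 0.2      # x is a hypothetical observation; 0.2 is read here as the variance
n = 100_000

def normal_pdf(t, mean, v):
    return np.exp(-(t - mean) ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)

def cauchy_pdf(t):
    return 1.0 / (np.pi * (1.0 + t ** 2))

# Method 1: draw theta from the C(0, 1) prior, weight each draw by the likelihood f(x|theta).
theta1 = rng.standard_cauchy(n)
w1 = normal_pdf(x, theta1, var)
mean1 = np.sum(w1 * theta1) / np.sum(w1)

# Method 2: draw theta from N(x, var) (the likelihood viewed as a density in theta),
# weight each draw by the Cauchy prior density.
theta2 = rng.normal(x, np.sqrt(var), n)
w2 = cauchy_pdf(theta2)
mean2 = np.sum(w2 * theta2) / np.sum(w2)

print(mean1, mean2)    # both are Monte Carlo estimates of the posterior mean E[theta | x]
```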
For the situation of Example 47, show that the given {C(x), α(x)} is an inadmissible procedure under loss (4.136).
Suppose X ~ N(θ, 1), Θ = R¹, C(x) = (−∞, x), α(x) = ..., and {C(x), α(x)} is the system of confidence statements that is to be used. Show that this choice is weakly incoherent.
In Example 47, verify that α(x) = ... is weakly incoherent. (The loss for a randomized betting strategy is expected loss, as usual.)
In the situation of Example 45, show that δ can be beaten in long-run use, in the sense of (4.135), for any sequence β = (β⁽¹⁾, β⁽²⁾, ...) that is bounded away from zero.
Prove part (b) of Theorem 10.
Assume X ~ G(α, β) (α known) is observed, and that it is desired to estimate β under loss L(β, a) = (1 − a/β)². It is decided to use the improper prior density π(β) = β⁻². (a) Show that the generalized Bayes estimator of β is δ⁰(x) = x/(α + 2). (b) Show that δ⁰ is inadmissible. (c) Show that δ⁰ can be beaten in ...
In Example 46, for estimating η = |θ|², (a) verify (4.134); (b) show that, on any bounded set of x, there exists an admissible estimator which is arbitrarily close to δ^π(x) = |x|² + p.
If X ~ B(n, θ) and it is desired to estimate θ under squared-error loss, show that δ(x) = x/n is admissible. (Hint: Consider the improper prior density π(θ) = θ⁻¹(1 − θ)⁻¹. You may assume that all risk functions R(θ, δ) are continuous.)
Prove Theorem 9.
Prove Theorem 8.
In Example 42, find the marginal posterior distribution of θ, given (x₁, ..., xₙ, y₁, ..., yₘ), for the noninformative prior π(θ, σ², ...) = ...
In Example 41, suppose that π₀ is chosen to be the noninformative prior π₀(θ, σ²) = σ⁻². Seven observations are taken, the result being x = (2.2, 3.1, 0.1, 2.7, 3.5, 4.5, 3.8). Graph m(x|π₀, a) as a function of a, and comment on whether or not it seems reasonable, here, to assume that f₀ is normal.
Repeat Exercise 122, assuming that the prior is π(θ, σ²) = π*(θ)σ⁻², where π*(θ) is C(1.3, 0.14). (The quartiles of π* match up with the quartiles of a N(1.3, 0.04) prior.)
Find the posterior variance for the posterior in (4.130), expressing the answer solely in terms of the numerator and denominator of (4.131).
Verify (4.127).
After one month of the season, eleven baseball players have batting averages of 0.240, 0.310, 0.290, 0.180, 0.285, 0.240, 0.370, 0.255, 0.290, 0.260, and 0.210. It is desired to estimate the year-end batting average, θᵢ, of each player; the one-month batting averages, Xᵢ, can be assumed to be N(θᵢ, ...
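The sampling variance of the monthly averages is not visible in this prompt, so the Python sketch below uses a purely hypothetical value for it; it illustrates one standard moment-based empirical Bayes shrinkage of the eleven averages toward their grand mean, not necessarily the exact estimator the exercise intends.

```python
import numpy as np

# One-month averages quoted in the exercise.
x = np.array([0.240, 0.310, 0.290, 0.180, 0.285, 0.240,
              0.370, 0.255, 0.290, 0.260, 0.210])

# The exercise's sampling variance is not shown here, so sigma2 below is a
# purely hypothetical stand-in.
sigma2 = 0.002

xbar = x.mean()
s2 = x.var(ddof=1)                       # between-player variability of the averages
tau2 = max(s2 - sigma2, 0.0)             # moment estimate of the prior variance
shrink = tau2 / (tau2 + sigma2)          # estimated shrinkage toward the grand mean

theta_hat = xbar + shrink * (x - xbar)   # empirical Bayes estimates of the theta_i
print(np.round(theta_hat, 3))
```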
In the situation of Exercise 28, suppose that past records show that neutron stars average about 1.3 in mass, with a standard deviation of 0.2. This prior information is rather vague, so the robust Bayesian analysis of Subsection 4.7.10 (for unknown σ²) is to be used. Calculate the robust ...
The sufficient statistic from a regression study in econometrics is the least squares estimator x = (1.1, 0.3, −0.4, 2.2)′, which can be considered to be an observation from a N₄(θ, 0.3I₄) distribution. Economic theory suggests that θ₁ − θ₂ = 1 and θ₃ = −1. These linear restrictions are felt to hold ...
In the situation of Exercise 106: (a) Find the posterior mean and covariance matrix for θ, using the relevant robust prior from Subsection 4.7.10. (b) Determine an approximate 95% credible ellipsoid for θ.
Conduct a Bayesian test in Exercise 40, with the N(4.01, 1) choice for g₁ replaced by the "robust" prior from Subsection 4.7.10, with μ = 4.01 and A = 1.
In the situation of Example 1, where X ~ N(θ, 100), suppose the robust prior (in Subsection 4.7.10) is to be used, with μ = 100 and A = 225, instead of the conjugate N(100, 225) prior. (a) If x = 115, calculate the robust posterior mean and posterior variance, and find an approximate 95% credible set ...
Show that (4.113) is necessary and sufficient for ... to be a proper prior.
Using Theorem 5, find the ML-II prior for the indicated Γ when f(x|θ) = ..., σ > 0 known.
Suppose X ~ N(θ, σ²) and π₀ is N(μ, τ²), where σ², μ, and τ² are known. Let Γ be the ε-contamination class of priors with q of the form (4.110) (where θ₀ = μ). Show that, as |x − μ| → ∞, the ML-II prior yields a posterior, π̂(θ|x), which converges (in probability, say) to the N(x, σ²) ...
In Example 1, where X ~ N(θ, 100) and θ has the N(100, 225) prior density π₀, suppose it is desired to "robustify" π₀ by using the ML-II prior from the ε-contamination class, with q of the form (4.110) (where θ₀ = 100) and ε = 0.1. (a) If x = 115 is observed, determine the ML-II prior and the ...
In Example 36, suppose that X ~ N(θ, σ²) and π₀ is N(μ, τ²), where σ², μ, and τ² are all assumed to be known. Verify that, as |x − μ| → ∞, it will happen that π(θ|x) converges to ... (in probability, say; to be more precise, show that the posterior distribution of (θ − x) converges to a point mass at ...
Repeat Exercise 16, using the "alternative Jeffreys noninformative prior" (see Subsection 3.3.3) π(θ, σ²) = σ⁻³ I_(0,∞)(σ²). (This corresponds to π(θ, σ) = σ⁻² I_(0,∞)(σ) for σ.) Do the two noninformative priors give similar answers?
Suppose that X₁, X₂, X₃, and X₄ are i.i.d. C(θ, 1) random variables. The noninformative prior π(θ) = 1 is to be used. If the data is x = (1.0, 6.7, 0.5, 7.1), show that none of the approximations to the posterior in Result 8 will be reasonable.
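A quick numerical look (Python sketch) at why normal-style approximations struggle here: for these four observations the likelihood, and hence the posterior under π(θ) = 1, has two well-separated modes, one near each data cluster.

```python
import numpy as np

x = np.array([1.0, 6.7, 0.5, 7.1])
theta = np.linspace(-5.0, 12.0, 2001)

# Unnormalized posterior under pi(theta) = 1: product of C(theta, 1) likelihood terms.
post = np.prod(1.0 / (np.pi * (1.0 + (x[:, None] - theta) ** 2)), axis=0)

# Grid points that are local maxima of the posterior.
is_mode = (post[1:-1] > post[:-2]) & (post[1:-1] > post[2:])
print(theta[1:-1][is_mode])   # two well-separated modes, near the two data clusters
```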
Suppose that X = (X₁, ..., Xₖ)′ ~ M(n, θ), and that θ has a D(α) (Dirichlet) prior distribution. Determine the four approximations to the posterior distribution of θ given in parts (i) through (iv) of Result 8. (Note that X is the sufficient statistic from n i.i.d. M(1, θ) random variables.)
Suppose X₁, ..., Xₙ is an i.i.d. sample from the exponential E(θ) density. The prior density for θ is IG(10, 0.02). (a) Determine the exact posterior density for θ, given x₁, ..., xₙ. (b) Determine the approximations to this posterior density from parts (i) through (iv) of Result 8. (c) Suppose n = 5 and Σxᵢ = 25. ...
In Example 1, suppose it is desired to estimate θ under squared-error loss, but that an approximate Γ-minimax estimator is desired for robustness reasons. Suppose Γ is the ε-contamination class, with Q = {all distributions}, ε = 0.1, and π₀ being the N(100, 225) distribution. Find the appropriate ...
In Example 30 (continued), show that (4.105) holds for δ^ΓM, if Q contains all U(−k, k) distributions for k ≥ k₀, k₀ an arbitrary constant.
Archaeological digs discovered hominid remains at three different sites. The remains were clearly of the same species. The geological structures within which they were found were similar, coming from about the same period. Geologists estimate this period to be about 8 million years ago, plus or ...
In Example 1, suppose it is desired to estimate θ under squared-error loss, and that, for robustness reasons, it is deemed desirable to use a limited translation estimator instead of the posterior mean μ(x) = (400 + 9x)/13. Suppose that an increase of 20% over the (minimax) risk of σ² = 100 is ...
Assume X ~ U(0, θ) is observed, and that it is desired to estimate θ under squared-error loss. Let Γ = {π₁, π₂}, where π₁ is a Pa(2, β) prior density and π₂ is the prior density π₂(θ) = ... The median of the prior is felt to be 6. (a) Determine a and β. (b) Calculate the Bayes estimators ...
Assume X ~ f(x|θ) is observed, where Θ = {θ₁, θ₂}. It is desired to test H₀: θ = θ₁ versus H₁: θ = θ₂ under "0-1" loss. It is known that a ≤ π₀ ≤ b, where π₀ is the prior probability of H₀. You may assume that it is only necessary to consider the most powerful tests, which have rejection regions of the form ...
Assume X ~ B(1, θ) is observed, and that it is desired to estimate θ under squared-error loss. It is felt that the prior mean is μ = ... Letting Γ be the class of prior distributions satisfying this constraint, find the Γ-minimax estimator of θ. (Show first that only nonrandomized rules need be ...
Assume X ~ B(n, θ) is observed, and that it is desired to estimate θ under squared-error loss. Let Γ be the class of all symmetric proper prior distributions. Find a Γ-minimax estimator. (Hint: Use Exercise 99, considering Bayes rules for conjugate priors.)
Prove that, if a decision rule δ* has constant risk R(θ, δ*) and is Bayes with respect to some proper prior in Γ, then δ* is Γ-minimax.
Assume that R(θ, δ*) is continuous in θ. (a) Prove that, if Γ = {all prior distributions}, then sup_{π∈Γ} r(π, δ*) = sup_{θ∈Θ} R(θ, δ*). (b) Prove that, if Θ = (−∞, ∞) and Γ = {π: π is a N(μ, τ²) density, with −∞ ...
Let X ~ N(θ, 1) be observed, and assume it is desired to estimate θ under squared-error loss. The class of possible priors is considered to be Γ = {π: π is a N(0, τ²) density, with 0 ...
Using the ε-contamination class described in Example 26, with ε = 0.1, repeat the analysis, and compare the resulting indicated robustness with the answer for the ε-contamination class with Q = {all distributions}, in (a) Exercise 92; (b) Exercise 93; (c) Exercise 94.
In the situation of Exercise 26, find the range of the posterior probability of the 95% HPD credible set, as π ranges over the ε-contamination class with ε = 0.05, π₀ being the Be(1, 9) prior, and Q = {all distributions}.
It is desired to investigate the robustness of the analysis in Exercise 40 with respect to the ε-contamination class of priors, with π₀ being the prior described in Exercise 40, ε = 0.1, and Q = {all distributions}. Find the range of the posterior probability of H₀: θ = 4.01.
In the situation of Exercise 92, suppose that it is desired to test H₀: θ ≤ 100 versus H₁: θ > 100. With the same Γ and x = 115, find the range of the posterior probability of H₀.
In Example 1 (continued) in Subsection 4.3.2, the 95% HPD credible set, for x = 115, was found to be C = (94.08, 126.70). Let Γ be the ε-contamination class of priors with π₀ equal to the N(100, 225) distribution, ε = 0.1, and Q = {all distributions}. Find the range of the posterior probability of C.
Derive formulas analogous to (4.85) and (4.86) when π(θ) = Σᵢ εᵢπᵢ(θ), where εᵢ ≥ 0, Σᵢ εᵢ = 1, and the πᵢ are densities.
Suppose X ~ N(θ, 1) and π = (0.9)π₀ + (0.1)q, where π₀ is N(0, 2) and q is N(0, 10). (a) If x = 1 is observed, find π(θ|x) and the posterior mean and variance. (b) If x = 7 is observed, find π(θ|x) and the posterior mean and variance.
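A short Python check of the mixture-prior updating this exercise asks for: each component yields a conjugate normal posterior, and the component weights are re-weighted by the marginal densities mᵢ(x). The printed values for x = 1 and x = 7 can be compared with a hand calculation.

```python
import numpy as np
from scipy.stats import norm

def mixture_posterior(x, weights, means, variances, like_var=1.0):
    """Posterior weights, mean, variance for X ~ N(theta, like_var) with a normal mixture prior."""
    w = np.asarray(weights, float)
    m = np.asarray(means, float)
    v = np.asarray(variances, float)

    # Marginal density of x under each component: N(m_i, like_var + v_i).
    post_w = w * norm.pdf(x, loc=m, scale=np.sqrt(like_var + v))
    post_w /= post_w.sum()

    # Conjugate normal posterior within each component.
    comp_var = 1.0 / (1.0 / like_var + 1.0 / v)
    comp_mean = comp_var * (x / like_var + m / v)

    mean = np.sum(post_w * comp_mean)
    var = np.sum(post_w * (comp_var + comp_mean ** 2)) - mean ** 2
    return post_w, mean, var

for x_obs in (1.0, 7.0):
    pw, mean, var = mixture_posterior(x_obs, [0.9, 0.1], [0.0, 0.0], [2.0, 10.0])
    print(x_obs, pw.round(3), round(mean, 3), round(var, 3))
```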
Prove Lemma 2.
Consider the ε-contamination class in (4.75), and suppose that π₀ is a N(0, 2.19) density. How large must ε be for the C(0, 1) density to also be in this class?
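One way to see the answer numerically (Python sketch): the C(0, 1) density c can be written as (1 − ε)π₀ + εq for some density q exactly when c(θ) − (1 − ε)π₀(θ) ≥ 0 for all θ, so the smallest workable ε is 1 − inf_θ c(θ)/π₀(θ).

```python
import numpy as np
from scipy.stats import cauchy, norm

theta = np.linspace(-20, 20, 200_001)   # the ratio blows up in the tails, so the infimum is interior
ratio = cauchy.pdf(theta) / norm.pdf(theta, scale=np.sqrt(2.19))

# c = (1 - eps) * pi0 + eps * q for some density q  iff  (1 - eps) <= inf_theta c/pi0.
eps_min = 1.0 - ratio.min()
print(round(eps_min, 3))
```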
In the situation of Exercise 86, suppose it is desired to test H₀: θ ≤ 0 versus H₁: θ > 0 under "0-1" loss. (a) Find the range of the posterior probability of H₀, as π ranges over Γ, when x = 0 is observed. (b) Find the Γ-posterior expected loss of the Bayes action with respect to the N(2, 3) prior, when ...
Assume X ~ N(θ, 1) is observed, and that it is desired to estimate θ under squared-error loss. It is felt that θ has a N(2, 3) prior distribution, but the estimated mean and variance could each be in error by 1 unit. Hence the class of plausible priors is Γ = {π: π is a N(μ, τ²) density with 1 ≤ μ ≤ 3 ...
Consider the situation of Exercise 68. One prior that is entertained for the i.i.d. θᵢ is that they are N(3.30, σπ²), σπ² unknown. A second prior entertained is that they are i.i.d. N(μπ, σπ²), with μπ having a second-stage N(3.30, (0.03)²) distribution. In both cases, assume that nothing is known about σπ², and that it ...
Suppose X ~ N_p(θ, Σ) and θ ~ N_p(μ, A), where μ, Σ, and A are known. Develop analogs of (4.78) and (4.79) as measures of "surprising" data. What will tend to happen to these measures for large p? Are these measures likely to be useful for large p?
Repeat Exercise 74 from the hierarchical Bayes perspective, assuming a constant second-stage prior for (β₁, β₂, β₃, σπ). Compare the answers so obtained with those from Exercise 74.
Repeat Exercise 73 from the hierarchical Bayes perspective: (a) With a constant second-stage prior for (μ, σ). (b) With a N(3.30, (0.03)²) second-stage prior for μ, and an (independent) constant second-stage prior for σ. (c) Compare these answers with those from Exercise 73.
Repeat Exercise 79(a) and (b), supposing that the measurements in Exercise 68 had common unknown variance σ². Available to estimate σ² was an (independent) random variable S², with S²/σ² having a chi-squared distribution with 10 degrees of freedom. Observed was s² = 0.0036, and σ² is to be given the ...
In the situation of Example 14 (continued) in Subsection 4.6.2, find the volume of the approximate 100(1 − α)% credible ellipsoid for θ, and compare it with the volume of the classical confidence ellipsoid C₀(x) = {θ: |θ − x|²/100 ≤ χ²₃₀(1 − α)}.
Repeat Exercise 68 from the hierarchical Bayes perspective: (a) With a constant second-stage prior for (μ, σ). (b) With a N(3.30, (0.03)²) second-stage prior for μ, and an (independent) constant second-stage prior for σ. (c) Compare the answers obtained with each other and with those from Exercise 68, and ...
Prove Result 7.
For i = 1, ..., p, suppose that the Xᵢ are independent N(θᵢ, 1) random variables, and that the θᵢ are i.i.d. from a common prior π₀. Define m₀ by m₀(y) = ∫ f(y − θ)π₀(θ) dθ, where f is the N(0, 1) density. (a) Show that the posterior mean for θᵢ, with given π₀, can be written μ₀(xᵢ) = xᵢ + ...
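Part (a) presumably ends with the familiar relation μ₀(xᵢ) = xᵢ + m₀′(xᵢ)/m₀(xᵢ). A short Python sketch checks that relation numerically in a case where the posterior mean is available in closed form, namely π₀ = N(0, τ²) (an assumed choice, for illustration only).

```python
import numpy as np
from scipy.stats import norm

# Numerical check of mu0(x) = x + m0'(x)/m0(x), using pi0 = N(0, tau2) so that the
# posterior mean is known exactly from the conjugate normal-normal analysis.
tau2, x = 4.0, 1.7

def m0(y):
    # Marginal density of X_i = theta_i + N(0, 1) noise when pi0 = N(0, tau2).
    return norm.pdf(y, scale=np.sqrt(1.0 + tau2))

h = 1e-5
relation = x + (m0(x + h) - m0(x - h)) / (2 * h) / m0(x)   # finite-difference version
exact = tau2 / (1.0 + tau2) * x                            # conjugate posterior mean
print(relation, exact)                                     # the two agree
```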
In the two situations of Exercise 28 of Chapter 3, determine the empirical Bayes estimates of θ₁ and θ₂ that result from use of the nonparametric ML-II prior.
In Example 15, graph the likelihood function for σπ² (i.e., (4.39) for the given data), choosing β to be β̂ in (4.40) (replacing σ̂π² by σπ²). Does this strongly indicate that σπ² is very close to zero?
In Exercise 68, suppose that the θᵢ are thought to possibly have a quadratic relationship of the form θᵢ = β₁ + β₂i + β₃i² + εᵢ, where the βⱼ are unknown and the εᵢ are independently N(0, σπ²), σπ² also unknown. (The current θ, being the sixth measurement, has index i = 6.) Find the Morris empirical Bayes ...
Repeat Exercise 68, supposing that the first three past measurements were N(θᵢ, (0.03)²) (less accurate measurements were taken), and that the subsequent measurements (including the current one) were N(θᵢ, (0.02)²).
Suppose, in Exercise 68, that the measurements had common unknown variance σ², and that (0.02)² was the usual unbiased estimate of σ² based on 10 degrees of freedom. Find a reasonable empirical Bayes estimate and 95% credible set for θ.
Show that V^EB in (4.34) can be larger than σ². Is this reasonable or unreasonable?
In the situation of Exercise 4, the journal editors decide to investigate the problem. They survey all of the recent contributors to the journal, and determine the total number of experiments, N, that were conducted by the contributors. Let x denote the number of such experiments that were reported ...
A steel mill casts p large steel beams. It is necessary to determine, for each beam, the average number of defects or impurities per cubic foot. (For the ith beam, denote this quantity θᵢ.) On each beam, n sample cubic-foot regions are examined, and the number of defects in each region is ...
At a certain stage of an industrial process, the concentration, θ, of a chemical must be estimated. The loss in estimating θ is reasonably approximated by squared-error loss. The measurement, X, of θ has a N(θ, (0.02)²) distribution. Five measurements are available from determinations of the chemical ...
Solve the decision problem of choosing a credible set, C(x), for θ and an associated accuracy measure, α(x), when X ~ N(θ, 1) is to be observed, θ is given the noninformative prior π(θ) = 1, and the loss is given in (4.136) with c₁ = 1, c₂ = 5, c₃ = 1, and μ(C(x)) being the length of C(x).
Verify the statement in Subsection 4.4.4 that the optimal Bayes choice of α(x) is the posterior probability of C(x).
A device has been created which can supposedly classify blood as type A, B, AB, or O. The device measures a quantity X, which has density f(x|θ) = e^{−(x−θ)} I_(θ,∞)(x). If 0 < θ ...
A missile can travel at either a high or a low trajectory. The missile's effectiveness decreases linearly with the distance by which it misses its target, up to a distance of 2 miles, at which it is totally ineffective. If a low trajectory is used, the missile is safe from antimissile fire. ...
A company periodically samples products coming off a production line, in order to make sure the production process is running smoothly. They choose a sample of size 5 and observe the number of defectives. Past records show that the proportion of defectives, θ, varies according to a Be(1, 9) ...
In the situation of Exercise 26, let a₁ denote the action "decide 0 ≤ θ ≤ 0.15," and a₂ denote the action "decide θ > 0.15." Conduct the Bayes test under the loss (a) "0-1" loss; (b) L(θ, a₁) = 0 if θ ≤ 0.15, ... if θ > 0.15; L(θ, a₂) = 2 if θ ≤ 0.15, 0 if θ > 0.15.
In the situation of Exercise 9, find the Bayes estimator of θ under loss L(θ, a) = Σᵢ₌₁ᵏ (θᵢ − aᵢ)². Show that the Bayes risk of the estimator is (α₀ − Σᵢ αᵢ²/α₀) / [(α₀ + 1)(α₀ + n)], where α₀ = Σᵢ αᵢ.
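Assuming (hypothetically) that Exercise 9 is the multinomial-sampling, Dirichlet(α₁, ..., αₖ) prior setup, the stated Bayes risk can be checked by simulation; a short Python sketch:

```python
import numpy as np

# Monte Carlo sanity check of the Bayes-risk expression above, assuming (hypothetically)
# that Exercise 9 is the multinomial sampling / Dirichlet(alpha_1, ..., alpha_k) prior setup.
rng = np.random.default_rng(1)
alpha = np.array([1.0, 2.0, 3.0])
n = 10
a0 = alpha.sum()

sq_errors = []
for _ in range(100_000):
    theta = rng.dirichlet(alpha)
    counts = rng.multinomial(n, theta)
    est = (alpha + counts) / (a0 + n)          # posterior mean = Bayes estimator
    sq_errors.append(np.sum((theta - est) ** 2))

print(np.mean(sq_errors))                                      # simulated Bayes risk
print((a0 - np.sum(alpha ** 2) / a0) / ((a0 + 1) * (a0 + n)))  # closed-form expression
```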
In the IQ example, where X ~ N(θ, 100) and θ ~ N(100, 225), assume it is important to detect particularly high or low IQs. Indeed, the weighted loss L(θ, a) = (θ − a)² e^{(θ−100)²/900} is deemed appropriate. (Note that this means that detecting an IQ of 145 (or 55) is about nine times as important as detecting ...
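Under a weighted squared-error loss w(θ)(θ − a)², the Bayes action is E[w(θ)θ|x]/E[w(θ)|x]. The Python sketch below evaluates this on a grid for a hypothetical x = 115, using the conjugate posterior N((400 + 9x)/13, 900/13) implied by this likelihood and prior, and compares it with the unweighted posterior mean.

```python
import numpy as np

# Bayes estimate under the weighted loss w(theta) * (theta - a)^2, which is
# E[w(theta) * theta | x] / E[w(theta) | x]; here w(theta) = exp((theta - 100)^2 / 900).
def weighted_bayes_estimate(x):
    mu, v = (400 + 9 * x) / 13, 900 / 13                 # conjugate normal posterior for this setup
    t = np.linspace(mu - 80, mu + 80, 20001)
    post = np.exp(-(t - mu) ** 2 / (2 * v))              # unnormalized posterior density
    w = np.exp((t - 100) ** 2 / 900)                     # loss weight (integrable against the posterior)
    return np.sum(t * w * post) / np.sum(w * post)       # grid spacing cancels in the ratio

x = 115    # hypothetical observation, for illustration only
print(weighted_bayes_estimate(x), (400 + 9 * x) / 13)    # weighted estimate vs. plain posterior mean
```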
Suppose X ~ N₂(θ, I₂), L(θ, a) = (θ′a − 1)², A = {(a₁, a₂)′: a₁ ≥ 0, a₂ ≥ 0, and a₁ + a₂ = 1}, and θ ~ N₂(μ, B), μ and B known. Find the Bayes estimator of θ.
In the situation of Exercise 26, find the Bayes estimate of θ under loss (a) L(θ, a) = (θ − a)²; (b) L(θ, a) = |θ − a|; (c) L(θ, a) = (θ − a)²/[θ(1 − θ)]; (d) L(θ, a) = (θ − a) if θ > a, L(θ, a) = 2(a − θ) if θ ≤ a.
Assume θ, x, and a are real, π(θ|x) is symmetric and unimodal, and L is an increasing function of |θ − a|. Show that the Bayes rule is then the mode of π(θ|x). (You may assume that all risk integrals exist.)
If X ~ G(n/2, 2θ) and θ ~ IG(α, β), find the Bayes estimator of θ under loss L(θ, a) = (a/θ − 1)² = (a − θ)²/θ².
If X ~ B(n, θ) and θ ~ Be(α, β), find the Bayes estimator of θ under loss L(θ, a) = (θ − a)²/[θ(1 − θ)]. (Be careful about the treatment of x = 0 and x = n.)
Prove Result 6 of Subsection 4.4.2. (You may assume that π(θ|x) exists and that there is an action with finite posterior expected loss.)