Statistical Decision Theory and Bayesian Analysis, 2nd Edition, James O. Berger - Solutions
Show that, in estimation of a vector θ = (θ₁, θ₂, …, θₚ)′ by a = (a₁, …, aₚ)′ under a quadratic loss L(θ, a) = (θ − a)′Q(θ − a), where Q is a (p × p) positive definite matrix, the Bayes estimator of θ is δ^π(x) = E^{π(θ|x)}[θ]. (You may assume that π(θ|x) and all integrals involved exist.)
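A sketch of the standard argument, writing μ(x) for the posterior mean (a completion-of-the-square step, not the book's own derivation):

```latex
% Let mu(x) = E[theta | x]. For any action a, expanding about mu(x):
\begin{align*}
E^{\theta\mid x}\!\left[(\theta-a)'Q(\theta-a)\right]
  &= E^{\theta\mid x}\!\left[(\theta-\mu(x))'Q(\theta-\mu(x))\right] \\
  &\quad + (\mu(x)-a)'Q(\mu(x)-a),
\end{align*}
% since the cross term E[(theta - mu(x))'Q(mu(x)-a)] vanishes
% (E[theta - mu(x) | x] = 0). Because Q is positive definite, the
% second term is nonnegative and is zero iff a = mu(x), so the
% posterior expected loss is minimized at delta(x) = E[theta | x].
```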
Prove Result 4 of Subsection 4.4.2. (You may assume that all integrals exist.)
Prove Result 2 of Subsection 4.4.1 for randomized rules. (You may assume that all interchanges in orders of integration are legal.)
In the situation of Example 12, find a number zo such that Z has probability 0.1 of being less than zo.
Verify, in Example 11, that the likelihood function varies by no more than 5% on (θ₀ − b, θ₀ + b) under the given condition.
Suppose Θ₀ = (θ₀ − b, θ₀ + b) and Θ₁ = Θ₀ᶜ. It is desired to test H₀: θ ∈ Θ₀ versus H₁: θ ∈ Θ₁, with a prior as in (4.13). Suppose the likelihood function, f(x|θ), satisfies |f(x|θ) − f(x|θ₀)| ≤ ε on Θ₀ (for the observed x, of course). Letting m₁(x) = ∫ f(x|θ) dF^{g₁}(θ), defining α₀ to be the posterior probability of Θ₀, and letting …
Suppose X ~ C(θ, 1) (Cauchy), and that it is desired to test H₀: θ = 0 versus H₁: θ ≠ 0. For any prior giving positive probability, π₀, to θ = 0, show that α₀, the posterior probability of H₀, converges to π₀ as |x| → ∞.
Consider G_N = {g₁ : g₁ is N(θ₀, τ²), τ² ≥ 0}. (a) Show, in the situation of Example 11, that lower bounds on α₀ and B for this class of g₁ are given, respectively, by π₀ and 1 when z ≤ 1, and when z > 1 by α₀ ≥ [1 + ((1 − π₀)/π₀) · exp{z²/2}/(z√e)]⁻¹ and B ≥ √e · z · exp{−z²/2}. (b) Compare these bounds with those in …
Prove Theorem 1.
Find the value of z (z > 1) for which the lower bound on B in (4.24) and the P-value are equal.
Suppose that X ~ B(n, θ), and that it is desired to test H₀: θ = θ₀ versus H₁: θ ≠ θ₀. (a) Find lower bounds on the posterior probability of H₀ and on the Bayes factor for H₀ versus H₁, bounds which are valid for any g₁ (using the notation of (4.14)). (b) If n = 20, θ₀ = 0.5, and x = 15 is observed, …
In the situation of Exercise 40: (a) Calculate the P-value against H₀: θ = 4.01. (b) Calculate the lower bound on the posterior probability of H₀ for any g₁, and find the corresponding bound on the Bayes factor. (c) Calculate the lower bound on the posterior probability of H₀ for any g₁ in (4.22), and …
In the situation of Exercise 28, it is desired to test H₀: θ ≤ 1 versus H₁: θ > 1. Find the posterior probabilities of the two hypotheses, the posterior odds ratio, and the Bayes factor.
(DeGroot (1970)) Consider two boxes A and B, each of which contains both red balls and green balls. It is …
In the situation of Exercise 26, it is desired to test H₀: θ ≤ 0.1 versus H₁: θ > 0.1. Find the posterior probabilities of the two hypotheses, the posterior odds ratio, and the Bayes factor.
The waiting time for a bus at a given corner at a certain time of day is known to have a U(0, θ) distribution. It is desired to test H₀: 0 ≤ θ ≤ 15 versus H₁: θ > 15. From other similar routes, it is known that θ has a Pa(5, 3) distribution. If waiting times of 10, 3, 2, 5, and 14 are observed at the …
Show, in the continuous case, that the S-optimal 100(1 − α)% credible set is given by (4.12), provided that {θ : π(θ|x) = k·s(θ)} has measure (or size) zero.
In the situation of Exercise 33, suppose that the size of a credible set, C, for σ is measured by S(C) = ∫_C σ⁻¹ dσ. (a) If n = 2, find the S-optimal 90% credible set for σ. (b) If n = 10 and s² = 2, find the S-optimal 90% credible set for σ. (c) Show that the interval C = (a·s, b·s) has the same size under …
Suppose S²/σ² is χ²(n) and that σ is given the noninformative prior π(σ) = σ⁻¹. Prove that the corresponding noninformative prior for σ² is π*(σ²) = σ⁻², and verify, when n = 2 and s² = 2 is observed, that the 95% HPD credible sets for σ and for σ² are not consistent with each other.
Find the approximate 90% HPD credible set, using the normal approximation to the posterior, in the situations of (a) Exercise 23; (b) Exercise 27; (c) Exercise 9.
Suppose X ~ N(θ, 1), and that a 95% HPD credible set for θ is desired. The prior information is that θ has a symmetric unimodal density with median 0 and quartiles ±1. The observation is x = 6. (a) If the prior information is modelled as a N(0, 2.19) prior, find the 90% HPD credible set. (b) If the prior …
For two independent C(θ, 1) observations, X₁ and X₂, give the 95% HPD credible set with respect to the noninformative prior π(θ) = 1. (Note that it will sometimes be an interval, and sometimes the union of two intervals; numerical work is required here.)
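The numerical work can be sketched as a grid search: keep the highest-density grid cells until they hold the required posterior mass, then read off the resulting interval(s). The observation pairs below are invented for illustration, not taken from the text.

```python
# 100*cred% HPD sketch for the posterior from two independent C(theta, 1)
# observations under the flat prior pi(theta) = 1:
#   pi(theta | x1, x2)  is proportional to  1/((1+(x1-theta)^2)(1+(x2-theta)^2)).
def hpd_intervals(x1, x2, cred=0.95, lo=-20.0, hi=20.0, n=40001):
    h = (hi - lo) / (n - 1)
    grid = [lo + i * h for i in range(n)]
    dens = [1.0 / ((1 + (x1 - t) ** 2) * (1 + (x2 - t) ** 2)) for t in grid]
    total = sum(dens)
    # Add cells in order of decreasing density until `cred` of the mass is held.
    order = sorted(range(n), key=lambda i: -dens[i])
    mass, keep = 0.0, [False] * n
    for i in order:
        if mass >= cred * total:
            break
        keep[i] = True
        mass += dens[i]
    # Collect the kept cells into disjoint intervals.
    intervals, start = [], None
    for i in range(n):
        if keep[i] and start is None:
            start = grid[i]
        if not keep[i] and start is not None:
            intervals.append((start, grid[i - 1]))
            start = None
    if start is not None:
        intervals.append((start, grid[-1]))
    return intervals

print(hpd_intervals(0.0, 1.0))            # overlapping data: a single interval
print(hpd_intervals(0.0, 6.0, cred=0.5))  # separated data, tight level: two intervals
```

Whether the set splits into two pieces depends on both the separation of the observations and the credibility level, which is why the exercise warns that both cases occur.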
Suppose X₁, …, Xₙ are an i.i.d. sample from the U(θ − ½, θ + ½) density. (a) For the noninformative prior π(θ) = 1, show that the posterior probability, given x = (x₁, …, xₙ), that θ is in a set C is given by [1 + x_min − x_max]⁻¹ ∫_C I_A(θ) dθ, where x_min = min{xᵢ}, x_max = max{xᵢ}, and A = (x_max − ½, x_min + ½).
From path perturbations of a nearby sun, the mass of a neutron star is to be determined. Five observations 1.2, 1.6, 1.3, 1.4, and 1.4 are obtained. Each observation is (independently) normally distributed with mean θ and unknown variance σ². A priori nothing is known about θ and σ², so the …
The weekly number of fires, X, in a town has a P(θ) distribution. It is desired to find a 90% HPD credible set for θ. Nothing is known a priori about θ, so the noninformative prior π(θ) = θ⁻¹ I₍₀,∞₎(θ) is deemed appropriate. The number of fires observed for five weekly periods was 0, 1, 1, 0, 0. What …
A large shipment of parts is received, out of which five are tested for defects. The number of defective parts, X, is assumed to have a B(5, θ) distribution. From past shipments, it is known that θ has a Be(1, 9) prior distribution. Find the 95% HPD credible set for θ, if x = 0 is observed.
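A quick way to see the shape of the answer: by beta-binomial conjugacy the posterior here is Be(1 + 0, 9 + 5 − 0) = Be(1, 14), whose density 14(1 − θ)¹³ is strictly decreasing on (0, 1), so the HPD set is an interval [0, q] with the cdf F(q) = 1 − (1 − q)¹⁴ equal to 0.95. A closed-form sketch:

```python
# Posterior for theta given x = 0: Be(1, 14), density strictly decreasing,
# so the 95% HPD credible set is [0, q] with 1 - (1 - q)^14 = 0.95.
alpha, beta_, n, x = 1, 9, 5, 0
a_post, b_post = alpha + x, beta_ + n - x        # Be(1, 14)
q = 1 - 0.05 ** (1 / b_post)
print(f"95% HPD credible set: [0, {q:.4f}]")     # roughly [0, 0.193]
assert abs((1 - (1 - q) ** b_post) - 0.95) < 1e-12   # cdf check
```

The closed form is available only because the posterior density is monotone; for an interior-mode beta posterior the HPD endpoints must be found numerically.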
Find the 100(1 − α)% HPD credible set in Exercise (a) 7, (b) 11, (c) 12(c), (d) 14 (when Σᵢ xᵢ = 1), (e) 15(a) (when n = 3, x̄ = 1), (f) 16(b) (when n = 2), (g) 16(c).
Electronic components I and II have lifetimes X₁ and X₂, which have E(θ₁) and E(θ₂) (exponential) densities, respectively. It is desired to estimate the mean lifetimes θ₁ and θ₂. Component I contains component II as a subcomponent, and will fail if the subcomponent does (or if something else goes wrong). Hence θ₁ …
A production lot of five electronic components is to be tested to determine θ, the mean lifetime. A sample of five components is drawn, and the lifetimes X₁, …, X₅ are observed. It is known that Xᵢ ~ E(θ). From past records it is known that, among production lots, θ is distributed according to an IG(10, 0.01) …
In Example 7 (continued), show that V^π(x) is increasing in x, with V^π(0) = … and V^π(∞) = ….
Find the median of the posterior distribution and the posterior variance of the median in Exercise (a) 5 (when α = β = n = x = 1), (b) 6 (when α = n = 1), (c) 7, (d) 12(b) (when α = 1, n = 2), (e) 12(c), (f) 14, (g) 16(b) (when n = 2), (h) 16(c).
Find the generalized maximum likelihood estimate of θ and the posterior variance or covariance matrix of the estimate in Exercise (a) 5, (b) 6, (c) 7, (d) 8, (e) 9, (f) 10, (g) 11, (h) 12(c), (i) 14, (j) 15, (k) 16(c), and (l) in Example 5.
Find the posterior mean and posterior variance or covariance matrix in Exercise (a) 5, (b) 6, (c) 7, (d) 8, (e) 9, (f) 10, (g) 11, (h) 12(b), (i) 12(c), (j) 14, (k) 15, (l) 16(b), (m) 16(c), and (n) in Example 5.
In the situation of Subsection 4.2.3, show that, if πₙ is a N(0, n) prior density for θ, then πₙ(θ|x): (a) converges pointwise to π(θ|x); (b) converges in probability to π(θ|x).
Show that mixtures of natural conjugate priors, as defined in (4.1), form a conjugate class of priors.
Assume X = (X₁, …, Xₙ) is a sample from a N(θ, σ²) distribution, where θ and σ² are unknown. Let θ and σ² have the joint improper noninformative prior density π(θ, σ²) = σ⁻² I₍₀,∞₎(σ²). (In Subsection 3.3.3 it was stated that a reasonable noninformative prior for (θ, σ) is π(θ, σ) = σ⁻¹ I₍₀,∞₎(σ). This …
Assume X is B(n, θ). (a) If the improper prior density π(θ) = [θ(1 − θ)]⁻¹ I₍₀,₁₎(θ) is used, find the (formal) posterior density of θ given x, for 1 ≤ x ≤ n − 1. (b) Find the posterior density of θ given x, when π(θ) = I₍₀,₁₎(θ).
Assume X = (X₁, …, Xₙ) is a sample from a P(θ) distribution. The improper noninformative prior π(θ) = θ⁻¹ I₍₀,∞₎(θ) is to be used. Find the (formal) posterior density of θ given x, for x ≠ (0, 0, …, 0). (If x = (0, …, 0), the (formal) posterior does not exist.)
General Motors wants to forecast new car sales for the next year. The number of cars sold in a year is known to be a random variable with a N((10^…)θ, (10^…)²) distribution, where θ is the unemployment rate during the year. The prior density for θ next year is thought to be (approximately) N(0.06, …
Suppose that X = (X₁, …, Xₙ) is a sample from a N(θ, σ²) distribution, where both θ and σ² are unknown. The prior density of θ and σ² is π(θ, σ²) = π₁(θ|σ²)π₂(σ²), where π₁(θ|σ²) is a N(μ, τσ²) density and π₂(σ²) is an IG(α, β) density. (a) Show that the joint posterior density of …
Suppose that X = (X₁, …, Xₚ)′ ~ Nₚ(θ, Σ) and that θ has a Nₚ(μ, A) prior distribution. (Here θ and μ are p-vectors, while Σ and A are (p × p) positive definite matrices.) Also, Σ, μ, and A are assumed known. Show that the posterior distribution of θ given x is a p-variate normal distribution with mean …
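The posterior here is the standard normal-normal result, which can be written in two algebraically equivalent forms: precision-weighted, V = (Σ⁻¹ + A⁻¹)⁻¹ with mean V(Σ⁻¹x + A⁻¹μ), or in shrinkage form, mean x − Σ(Σ + A)⁻¹(x − μ) and covariance Σ − Σ(Σ + A)⁻¹Σ. A numeric sketch with made-up 2×2 inputs checks that the two forms agree:

```python
import numpy as np

# X | theta ~ N_p(theta, Sigma), theta ~ N_p(mu, A); all values invented.
Sigma = np.array([[2.0, 0.3], [0.3, 1.0]])
A     = np.array([[1.0, 0.2], [0.2, 1.5]])
mu    = np.array([0.0, 1.0])
x     = np.array([2.0, -1.0])

Si, Ai = np.linalg.inv(Sigma), np.linalg.inv(A)
V = np.linalg.inv(Si + Ai)                       # posterior covariance
mean_precision_form = V @ (Si @ x + Ai @ mu)     # precision-weighted mean
mean_shrinkage_form = x - Sigma @ np.linalg.inv(Sigma + A) @ (x - mu)
V_shrinkage_form    = Sigma - Sigma @ np.linalg.inv(Sigma + A) @ Sigma

assert np.allclose(mean_precision_form, mean_shrinkage_form)
assert np.allclose(V, V_shrinkage_form)
print("posterior mean:", mean_precision_form)
print("posterior cov:\n", V)
```

The shrinkage form makes visible that the posterior mean pulls the observation x toward the prior mean μ by the matrix factor Σ(Σ + A)⁻¹.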
Suppose that X = (X₁, …, Xₙ) is a sample from a NB(m, θ) distribution, and that θ has a Be(α, β) prior distribution. Show that the posterior distribution of θ given x is Be(α + mn, (Σᵢ xᵢ) + β).
Suppose that X = (X₁, …, X_k)′ ~ M(n, θ), and that θ = (θ₁, …, θ_k)′ has a D(α) prior distribution (α = (α₁, …, α_k)′). Show that the posterior distribution of θ given x is D(α + x).
Suppose that X is G(n/2, 2θ) (so that X/θ is χ²ₙ), while θ has an IG(α, β) distribution. Show that the posterior distribution of θ given x is IG(n/2 + α, [x/2 + β⁻¹]⁻¹).
Suppose that X = (X₁, …, Xₙ) is a random sample from a U(0, θ) distribution. Let θ have a Pa(θ₀, α) distribution. Show that the posterior distribution of θ given x is Pa(max{θ₀, x₁, …, xₙ}, α + n).
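As a numerical sanity check on this conjugacy (the data values below are invented), one can verify that prior × likelihood is proportional to the claimed Pareto density wherever both are positive:

```python
# Check that Pa(theta0, alpha) prior x prod U(0, theta) likelihood
# is proportional to a Pa(max{theta0, x_max}, alpha + n) density.
theta0, alpha = 2.0, 3.0
xs = [1.2, 2.6, 0.7, 2.1]           # illustrative U(0, theta) sample
n, x_max = len(xs), max(xs)

def prior(t):                       # Pa(theta0, alpha) density
    return alpha * theta0**alpha / t**(alpha + 1) if t >= theta0 else 0.0

def likelihood(t):                  # product of U(0, t) densities
    return t**(-n) if t >= x_max else 0.0

t_star, a_star = max(theta0, x_max), alpha + n

def pareto_post(t):                 # claimed posterior Pa(t_star, a_star)
    return a_star * t_star**a_star / t**(a_star + 1) if t >= t_star else 0.0

# The ratio (prior x likelihood) / pareto_post is constant in theta.
ratios = [prior(t) * likelihood(t) / pareto_post(t)
          for t in (2.61, 3.0, 4.5, 10.0)]
assert all(abs(r - ratios[0]) < 1e-9 * ratios[0] for r in ratios)
print("constant ratio:", ratios[0])
```

The constant ratio is exactly the statement that the two sides differ only by a normalizing constant, which is all a conjugacy claim asserts.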
Suppose that X = (X₁, …, Xₙ) is a random sample from an exponential distribution. Thus Xᵢ ~ E(θ) (independently). Suppose also that the prior distribution of θ is IG(α, β). Show that the posterior distribution of θ given x is IG(n + α, [(Σᵢ xᵢ) + β⁻¹]⁻¹).
Suppose that X is B(n, θ). Suppose also that θ has a Be(α, β) prior distribution. Show that the posterior distribution of θ given x is Be(α + x, β + n − x). What is the natural conjugate family for the binomial distribution?
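The same kernel-ratio check works for this beta-binomial conjugacy (the parameter values below are arbitrary): the prior-times-likelihood, divided by the Be(α + x, β + n − x) kernel, should not depend on θ.

```python
import math

# Be(a, b) prior x B(n, theta) likelihood vs the Be(a + x, b + n - x) kernel;
# the ratio should be constant in theta (made-up values of a, b, n, x).
a, b, n, x = 2.0, 5.0, 10, 3

def post_unnorm(t):
    return (t**(a - 1) * (1 - t)**(b - 1)
            * math.comb(n, x) * t**x * (1 - t)**(n - x))

def beta_kernel(t, p, q):
    return t**(p - 1) * (1 - t)**(q - 1)

ratios = [post_unnorm(t) / beta_kernel(t, a + x, b + n - x)
          for t in (0.1, 0.3, 0.5, 0.9)]
assert all(abs(r - ratios[0]) < 1e-9 * ratios[0] for r in ratios)
print("posterior is Be(%.0f, %.0f)" % (a + x, b + n - x))
```

Since the product of a beta kernel and a binomial kernel is again a beta kernel, the beta family is its own conjugate family here, which answers the final question of the exercise.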
A scientific journal, in an attempt to maintain experimental standards, insists that all reported statistical results have (classical) error probability of α₀ (or better). To consider a very simple model of this situation, assume that all statistical tests conducted are of the form H₀: θ = θ₀ …
(DeGroot (1970)) Suppose that, with probability π₀, a signal is present in a certain system at any given time, and that, with probability 1 − π₀, no signal is present. A measurement made on the system when a signal is present is normally distributed with mean 50 and variance 1, and a measurement made on …
There are three coins in a box. One is a two-headed coin, another is a two-tailed coin, and the third is a fair coin. When one of the three coins is selected at random and flipped, it shows heads. What is the probability that it is the two-headed coin?
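This is a direct Bayes' theorem computation: with equal priors of 1/3 and heads probabilities 1, 0, and 1/2 for the three coins, the posterior for the two-headed coin is (1/3 · 1)/(1/3 · 1 + 1/3 · 0 + 1/3 · 1/2) = 2/3. In code:

```python
# Bayes' theorem for the three-coins question.
priors  = {"two-headed": 1/3, "two-tailed": 1/3, "fair": 1/3}
p_heads = {"two-headed": 1.0, "two-tailed": 0.0, "fair": 0.5}

marginal = sum(priors[c] * p_heads[c] for c in priors)   # P(heads) = 1/2
posterior = priors["two-headed"] * p_heads["two-headed"] / marginal
print(posterior)   # 2/3
assert abs(posterior - 2/3) < 1e-12
```

The common wrong answer, 1/2, comes from conditioning on "not the two-tailed coin" rather than on the observed heads.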
Prove Lemma 1.
We will observe, for i = 1, …, p, independent Xᵢ ~ N(θᵢ, 900), where θᵢ is the unknown mean yield per acre of corn hybrid i. It is felt that the θᵢ are similar, to be modelled as being i.i.d. observations from a common population. The common mean of the θᵢ is believed to be about 100, the standard …
Suppose that X₁, X₂, … is a sequence of random variables which assume only the values 0 and 1. Let m((x₁, …, xₙ)) be the joint distribution of the first n random variables. (a) If m((x₁, …, xₙ)) = 2⁻ⁿ for all n and xⁿ = (x₁, …, xₙ), find a representation of the form (3.28). (b) Show that there exists an m for …
A test of H₀: θ = 1 versus H₁: θ = 2 is to be conducted, where θ is the parameter of a U(0, θ) distribution. (Assume θ can only be 1 or 2.) It is desired to estimate the prior probability that θ = 1, i.e., π(1) = 1 − π(2). There …
In terms of the distance measure d(f, g) given in Subsection 3.5.6, which of the two densities (on R¹), g₁(x) = (6π)^(−1/2) exp{−x²/6} or g₂(x) = 0.5 exp{−|x|}, is closer to f(x) = (2π)^(−1/2) exp{−x²/2}?
Using the moment approach, find estimates of the hyperparameters in the situation of (a) Exercise 25(a), (b) Exercise 25(b).
Using the moment approach in the situation of Exercise 23, show that estimates of the hyperparameters, α and β, are (when 0 …
Let X₁ ~ N(θ₁, 1) and X₂ ~ N(θ₂, 1) be independent. Suppose θ₁ and θ₂ are i.i.d. from the prior π₀. Find the ML-II prior, over the class of all priors, (a) when x₁ = 0 and x₂ = 1; (b) when x₁ = 0 and x₂ = 4. (It is a fact, which you may assume to be true, that π̂₀ gives probability to at most two points, and that π̂₀ …
Suppose that X ~ N(θ, 1), π₀ is a N(0, 2.19) prior, and Γ is as in (3.17) with ε = 0.2. (a) If Q is the class of all distributions, find the ML-II prior for any given x. (b) If Q = {q : q is N(0, τ²), τ² ≥ 1}, find the ML-II prior for any given x.
In Exercise 23, suppose that p = 3, x₁ = 3, x₂ = 0, and x₃ = 5. Find the ML-II prior.
Suppose, for i = 1, …, p, that Xᵢ ~ N(θᵢ, σᵢ²), and that the Xᵢ are independent. (a) Find the ML-II prior in the situation of Exercise 24(a), for any given x. (b) Find the ML-II prior in the situation of Exercise 24(b), for any given x.
Suppose, for i = 1, …, p, that θᵢ = μᵢ + εᵢ, where the εᵢ are i.i.d. N(0, σ²), σ² unknown. (a) If μᵢ is …, where … is in (0, 1) but is otherwise unknown, describe the implied class, Γ, of priors for θ = (θ₁, …, θₚ)′. (b) If μᵢ = μ for all i, and μ is known to have a N(1, 1) distribution (independent of the εᵢ), show that the …
Suppose that X₁, …, Xₚ are independent, and that Xᵢ ~ P(θᵢ), i = 1, …, p. If the θᵢ are i.i.d. G(α, β), find the marginal density, m, for X = (X₁, …, Xₚ)′.
Suppose X, the failure time of an electronic component, has density (on (0, ∞)) f(x|θ) = θ⁻¹ exp{−x/θ}. The unknown θ has an IG(1, 0.01) prior distribution. Calculate the (marginal) probability that the component fails before time 200.
Suppose X ~ B(n, θ) and that θ has a Be(α, β) prior distribution. (a) Find m(x). (b) Show that, if m(x) is constant, then it must be the case that α = β = 1. (Stigler (1982) reports that this was the motivation for the use of a uniform prior in Bayes (1763).)
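The marginal in (a) is the beta-binomial pmf m(x) = C(n, x) B(α + x, β + n − x)/B(α, β), and the point of (b) can be seen numerically: with α = β = 1 every value of x gets mass 1/(n + 1), while any other choice breaks the constancy. A sketch (n is an arbitrary illustrative value):

```python
import math

def beta_fn(p, q):
    # Beta function via gammas: B(p, q) = G(p) G(q) / G(p + q)
    return math.gamma(p) * math.gamma(q) / math.gamma(p + q)

def marginal(x, n, a, b):
    # Beta-binomial pmf: m(x) = C(n, x) B(a + x, b + n - x) / B(a, b)
    return math.comb(n, x) * beta_fn(a + x, b + n - x) / beta_fn(a, b)

n = 7
uniform_case = [marginal(x, n, 1, 1) for x in range(n + 1)]
print(uniform_case)          # every value equals 1/(n + 1) = 0.125
assert all(abs(m - 1 / (n + 1)) < 1e-12 for m in uniform_case)

# For a non-uniform prior, m(x) is not constant:
skewed_case = [marginal(x, n, 2, 5) for x in range(n + 1)]
assert max(skewed_case) - min(skewed_case) > 1e-3
```

This illustrates, not proves, part (b); the exercise asks for the converse direction, that constancy of m(x) forces α = β = 1.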
Assume θ is from a location density. It is believed that −K < θ < K, and that θ has prior mean μ. Show that the prior distribution which maximizes entropy, subject to these constraints, is given by π(θ) = [z exp{zθ/K} / (2K sinh(z))] I₍₋K,K₎(θ), where z is the solution to the equation (K⁻¹μz + 1)tanh(z) − z = 0. (Sinh and tanh stand for the hyperbolic sine and hyperbolic tangent.)
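The transcendental equation for z has no closed form, but the claimed density is easy to check numerically: solve for z by bisection, then verify that the density integrates to one and has the required mean. The values of K and μ below are made up for illustration:

```python
import math

# Max-entropy prior on (-K, K) with mean mu (illustrative K, mu):
#   pi(theta) = z exp(z theta / K) / (2K sinh z),
# where z solves (mu z / K + 1) tanh(z) - z = 0.
K, mu = 1.0, 0.3

def g(z):
    return (mu * z / K + 1) * math.tanh(z) - z

# Bisection for the positive root (g > 0 near 0+, g < 0 for large z).
lo, hi = 1e-8, 10.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid
z = 0.5 * (lo + hi)

def pi(theta):
    return z * math.exp(z * theta / K) / (2 * K * math.sinh(z))

# Midpoint-rule checks: pi integrates to 1 and has mean mu.
n, h = 20000, 2 * K / 20000
grid = [-K + (i + 0.5) * h for i in range(n)]
total = sum(pi(t) for t in grid) * h
mean = sum(t * pi(t) for t in grid) * h
assert abs(total - 1.0) < 1e-6
assert abs(mean - mu) < 1e-6
print(f"z = {z:.4f}, mean = {mean:.4f}")
```

Note that μ = 0 forces z → 0 and the density degenerates to the uniform on (−K, K), as entropy maximization without an effective mean constraint should.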
Assume a scale parameter θ is to be estimated (so that the natural noninformative prior is θ⁻¹). It is believed that a …
Assume X ~ N(θ, 1) is to be observed, but it is known that θ > 0. It is further believed that θ has a prior distribution with mean μ. Show that the prior density of θ which maximizes entropy, subject to these constraints, is the E(μ) (exponential) density.
Consider the situation of Example 15 in Subsection 1.6.4. Assume that use of different priors, with the likelihood θ^…(1 − θ)^…, would result in different answers. Show that use of the Jeffreys noninformative prior can violate the Likelihood Principle (see Exercise 12(b), (c)).
Suppose X ~ N₂(θ, I) and that it is known that θ₁ … θ₂. Find a reasonable noninformative prior for θ.
In Example 7, verify that π(θ, σ) = σ⁻¹ is also the noninformative prior that would result from an invariance-under-transformation argument which considered the transformed problem defined by Y = cX + b, η = cθ + b, and ξ = cσ (b ∈ R¹ and c > 0).
Suppose that, for i = 1, …, p, Xᵢ ~ fᵢ(xᵢ|θᵢ) and πᵢ(θᵢ) is the Jeffreys noninformative prior for θᵢ. If the Xᵢ are independent, show that the Jeffreys noninformative prior for θ = (θ₁, …, θₚ)′ is π(θ) = Π_{i=1}^p πᵢ(θᵢ).
Determine the Jeffreys noninformative prior for the unknown vector of parameters in each of the following distributions: (a) M(n, p) (n given); (b) G(α, β) (both α and β unknown).
Determine the Jeffreys noninformative prior for the unknown parameter in each of the following distributions: (a) P(θ); …
In the situation of Example 6, characterize the relatively scale invariant priors. (Mimic the reasoning in the discussion of relatively location invariant priors.)
In the "table entry" problem of Example 6, verify the following statement: As a → 0 or b → ∞ or both, the pᵢ will converge to the values log(1 + 1/i)/log 10.
By an "invariance under reformulation" argument, show that a reasonable noninformative prior for a Poisson parameter θ is π(θ) = θ⁻¹. (Hint: A Poisson random variable X usually arises as the number of occurrences of a (rare) event in a time interval T. The parameter θ is the average number of occurrences …
For each of the following densities, state whether the density is a location, scale, or location-scale density, and give the natural noninformative prior for the unknown parameters: (a) U(θ − 1, θ + 1), (b) C(θ, β), (c) T(α, μ, σ²) (α fixed), (d) Pa(x₀, α) (α fixed).
For each planet in our solar system, determine (i) the first and third quartiles of your prior distribution for the (average) distance of the planet from the sun (i.e., specify a "central" interval which you think has a 50% chance of containing the distance); (ii) specify a "central" 90% interval …
Let θ denote the unemployment rate next year. Determine your subjective probability density for θ. Can it be matched with a Be(α, β) density?
Repeat Exercise 4(b) and (c), but with "normal distribution" replaced by "Cauchy distribution." Note: If X ~ C(θ, β), then P(θ …
Consider the situation of Exercise 2. (a) Determine the ¼- and ¾-fractiles of your prior density for θ. (b) Find the normal density matching these fractiles. (c) Find, subjectively, the …- and …-fractiles of your prior distribution for θ. (Do not use the normal distribution from (b) to obtain these.) Are …
Using the relative likelihood approach, determine your prior density for θ in the situation of Exercise 2.
Let θ denote the highest temperature that will occur outdoors tomorrow, near your place of residence. Using the histogram approach, find your subjective prior density for θ.
Automobiles are classified as economy size (small), midsize (medium), or full size (large). Decide subjectively what proportion of each type of car occurs in your area.
Consider the regression setup where Y = b′θ + ε, b = (b₁, …, bₚ)′ being a vector of regressor variables, θ = (θ₁, …, θₚ)′ being a vector of unknown regression coefficients, and ε being a N(0, σ²) random error (σ² known, for simplicity). Some data, X, is available to estimate θ; let δ(x) denote the estimator. The …
In Example 4 it is a fact (see Chapter 4) that, after observing x₁ and x₂, the believed distribution, π*, of θ to a Bayesian (who initially gave each θ probability density π(θ) > 0) will be of the following form: If x₁ ≠ x₂, then π* gives probability one to the point ½(x₁ + x₂); if x₁ = x₂, then π* gives probabilities …
In Example 4, verify that R(δ₁, a) = … and R(δ₂, a) = ….
Suppose we restrict consideration to the class of unbiased estimators for θ (i.e., estimators for which E_θ[δ(X)] = θ for all θ). Give a decision-theoretic formulation of the "minimum variance unbiased estimator" criterion.
A chemical in an industrial process must be at least 99% pure, or the process will produce a faulty product. Letting θ denote the percent purity of the chemical, the loss in running the process with θ …
Let θ be the (unknown) total number of cars that will be sold in a given year, and let a be the total number that will be produced. Each car that is sold results in a profit of $500, while each car that is not sold results in a loss of $1000. Assuming a linear utility, show that the regret loss for …
Suppose θ = (θ₁, …, θₚ)′ is unknown, and a = (a₁, …, aₚ)′ has utility g(θ − a, Y), where all second partial derivatives of g exist and Y is an unknown random variable. For a close to θ, heuristically show that the implied loss for decision making can be considered to be a shifted quadratic loss of the …
(a) Prove that a decision rule which is admissible under squared-error loss is also admissible under a weighted squared-error loss, where the weight, w(θ), is greater than zero for all θ. (b) Show that the Bayes action can be different for a weighted squared-error loss than for squared-error …
An automobile company is about to introduce a new type of car into the market. It must decide how many of these new cars to produce. Let a denote the number of cars decided upon. A market survey will be conducted, with information …
For which n in Example 3 is the expected utility positive?
Consider the gambling game described in the St. Petersburg paradox, and assume that the utility function, U(x), for a change in fortune, x, is bounded, monotonically increasing on R¹, and satisfies U(0) = 0. Show that the utility of playing the game is negative for a large enough cost c.
An investor has $1000 to invest in speculative stocks. He is considering investing m dollars in stock A and (1000 − m) dollars in stock B. An investment in stock A has a 0.6 chance of doubling in value, and a 0.4 chance of being lost. An investment in stock B has a 0.7 chance of doubling in value, …
A person is given a stake of m > 0 dollars, which he can allocate between an event A of fixed probability α (0 < α < 1) …
Mr. Rubin has determined that his utility function for a change in fortune on the interval −100 ≤ r ≤ 500 is U(r) = (0.62) log[(0.004)r + 1]. (a) He is offered a choice between $100 and the chance to participate in a gamble wherein he wins $0 with probability ½ and $500 with probability ½. Which should he …
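Part (a) of such problems is a direct expected-utility comparison. In the sketch below the ½-½ win probabilities and the use of the natural log are assumptions where this copy of the text is garbled; conveniently, the comparison comes out the same way under log base 10 as well.

```python
import math

# Compare the sure $100 with a 50-50 gamble on $0 or $500 under
# U(r) = 0.62 log(0.004 r + 1). The 1/2-1/2 probabilities and the
# natural log are assumptions (the source text is garbled here).
def U(r):
    return 0.62 * math.log(0.004 * r + 1)

u_sure = U(100)                        # 0.62 log 1.4
eu_gamble = 0.5 * U(0) + 0.5 * U(500)  # 0.31 log 3, since U(0) = 0
print(f"U($100) = {u_sure:.4f}, E[U(gamble)] = {eu_gamble:.4f}")
# Under these assumptions the gamble has the higher expected utility.
assert eu_gamble > u_sure
```

Note the decision is driven by the curvature of U: the utility function is concave but not concave enough here for the certain $100 to beat half the utility of $500.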
Showing 300 - 400 of 884