Statistical Inference 2nd edition George Casella, Roger L. Berger - Solutions
(a) For the hierarchy in Example 4.4.6, show that the variance of X can be written Var X = nEP(1 − EP) + n(n − 1) Var P. (The first term reflects binomial variation with success probability EP, and the second term is often called "extra-binomial" variation, showing how the hierarchical model has a …
A generalization of the hierarchy in Exercise 4.34 is described by D. G. Morrison (1978), who gives a model for forced binary choices. A forced binary choice occurs when a person is forced to choose between two alternatives, as in a taste test. It may be that a person cannot actually discriminate
(The gamma as a mixture of exponentials) Gleser (1989) shows that, in certain cases, the gamma distribution can be written as a scale mixture of exponentials, an identity suggested by different analyses of the same data. Let f(x) be a gamma(r, λ) pdf. (a) Show that if r … where … (Make a …
Let (X1,..., Xn) have a multinomial distribution with m trials and cell probabilities p1,... ,pn (see Definition 4.6.2). Show that, for every i and j,
A pdf is defined by … (a) Find the value of C. (b) Find the marginal distribution of X. (c) Find the joint cdf of X and Y. (d) Find the pdf of the random variable Z = 9/(X + 1)².
Let X and Y be independent random variables with means μX, μY and variances σ²X, σ²Y. Find an expression for the correlation of XY and Y in terms of these means and variances.
Let X1, X2, and X3 be uncorrelated random variables, each with mean μ and variance σ2. Find, in terms of μ and σ2, Cov(X1 + X2, X2 + X3) and Cov(X1 + X2, X1 - X2).
Prove the following generalization of Theorem 4.5.6: For any random vector (X1, ..., Xn), …
Show that if (X, Y) ~ bivariate normal(μX, μY, σ²X, σ²Y, ρ), then the following are true. (a) The marginal distribution of X is n(μX, σ²X) and the marginal distribution of Y is n(μY, σ²Y). (b) The …
Let Z1 and Z2 be independent n(0, 1) random variables, and define new random variables X and Y by … where aX, bX, cX, aY, bY, and cY are constants. (a) Show that … (b) If we define the constants aX, bX, cX, aY, bY, and cY by … where μX, μY, σ²X, σ²Y, and ρ …
Let X and Y be independent n(0,1) random variables, and define a new random variable Z by(a) Show that Z has a normal distribution. (b) Show that the joint distribution of Z and Y is not bivariate normal. (Show that Z and Y always have the same sign.)
where 0 ≤ a ≤ 1. (a) Show that the marginal distributions are given by fX(x) = af1(x) + (1 − a)f2(x) and fY(y) = ag1(y) + (1 − a)g2(y). (b) Show that X and Y are independent if and only if [f1(x) − f2(x)][g1(y) − g2(y)] = 0.
Let X, Y, and Z be independent uniform(0,1) random variables. (a) Find P(X/Y < t) and P(XY < t). (Pictures will help.) (b) Find P(XY/Z < t).
Let A, B, and C be independent random variables, uniformly distributed on (0, 1). What is the probability that Ax² + Bx + C has real roots? (If X ~ uniform(0, 1), then −log X ~ exponential. The sum of two independent exponentials is gamma.)
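A quick Monte Carlo sketch (not the book's intended solution, which uses the exponential/gamma hint) can confirm the answer. The quadratic has real roots exactly when the discriminant B² − 4AC is nonnegative, and the closed-form probability works out to 5/36 + (ln 2)/6 ≈ 0.2544.

```python
import math
import random

# Estimate P(B^2 >= 4AC) for A, B, C ~ uniform(0,1) by simulation.
# Seed and trial count are arbitrary choices for reproducibility.
def real_root_probability(n_trials=200_000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        a, b, c = rng.random(), rng.random(), rng.random()
        if b * b >= 4 * a * c:  # discriminant condition for real roots
            hits += 1
    return hits / n_trials

exact = 5 / 36 + math.log(2) / 6   # closed-form value, ~0.2544
estimate = real_root_probability()
```

With 200,000 trials the Monte Carlo standard error is about 0.001, so the estimate should land very close to the exact value.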
Find the pdf of ∏ⁿᵢ₌₁ Xᵢ, where the Xᵢ's are independent uniform(0, 1) random variables. (Try to calculate the cdf, and remember the relationship between uniforms and exponentials.)
A parallel system is one that functions as long as at least one component of it functions. A particular parallel system is composed of three independent components, each of which has a lifelength with an exponential(λ) distribution. The lifetime of the system is the maximum of the individual
Refer to Miscellanea 4.9.2. (a) Show that A1 is the arithmetic mean, A−1 is the harmonic mean, and A0 = lim_{r→0} Ar is the geometric mean. (b) The arithmetic-geometric-harmonic mean inequality will follow if it can be established that Ar is a nondecreasing function of r over the range …
For any three random variables X, Y, and Z with finite variances, prove (in the spirit of Theorem 4.4.7) the covariance identity Cov(X, Y) = E(Cov(X, Y|Z)) + Cov(E(X|Z), E(Y|Z)), where Cov(X, Y|Z) is the covariance of X and Y under the pdf f(x, y|z).
A and B agree to meet at a certain place between 1 PM and 2 PM. Suppose they arrive at the meeting place independently and randomly during the hour. Find the distribution of the length of time that A waits for B. (If B arrives before A, define A's waiting time as 0.)
DeGroot (1986) gives the following example of the Borel Paradox (Miscellanea 4.9.3): Suppose that X1 and X2 are iid exponential(1) random variables, and define Z = (X2 − 1)/X1. The probability-zero sets {Z = 0} and {X2 = 1} seem to be giving us the same information but lead to different conditional …
A random variable X is defined by Z = log X, where EZ = 0. Is EX greater than, less than, or equal to 1?
This exercise involves a well-known inequality known as the triangle inequality (a special case of Minkowski's Inequality). (a) Prove (without using Minkowski's Inequality) that for any numbers a and b, |a + b| ≤ |a| + |b|. (b) Use part (a) to establish that for any random variables X and Y with …
Prove the Covariance Inequality by generalizing the argument given in the text immediately preceding the inequality.
A woman leaves for work between 8 AM and 8:30 AM and takes between 40 and 50 minutes to get there. Let the random variable X denote her time of departure, and the random variable Y the travel time. Assuming that these variables are independent and uniformly distributed, find the probability that
Prove that if the joint cdf of X and Y satisfies FX,Y(x, y) = FX(x)FY(y), then for any pair of intervals (a, b) and (c, d), P(a < X < b, c < Y < d) = P(a < X < b)P(c < Y < d).
Color blindness appears in 1% of the people in a certain population. How large must a sample be if the probability of its containing a color-blind person is to be .95 or more? (Assume that the population is large enough to be considered infinite, so that sampling can be considered to be with
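The required sample size follows from solving 1 − 0.99ⁿ ≥ 0.95 for n; a small numeric check (a sketch, not the book's worked solution) confirms the arithmetic:

```python
import math

# P(color-blind) = 0.01 per person, draws treated as independent.
# We need P(at least one color-blind person) = 1 - 0.99**n >= 0.95.
p, target = 0.01, 0.95
n = math.ceil(math.log(1 - target) / math.log(1 - p))

# Verify n is the smallest sample size that works.
assert 1 - (1 - p) ** n >= target
assert 1 - (1 - p) ** (n - 1) < target
# n comes out to 299
```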
Let X1, ..., Xn be a random sample from a n(μ, σ²) population. (a) Find expressions for θ1, ..., θ4, as defined in Exercise 5.8, in terms of μ and σ². (b) Use the results of Exercise 5.8, together with the results of part (a), to calculate Var S². (c) Calculate Var S² a completely different …
Suppose X- and S2 are calculated from a random sample X1,...,Xn drawn from a population with finite variance σ2. We know that ES2 = σ2. Prove that ES ≤ σ, and if σ2 > 0, then ES < σ.
Let X1,... ,Xn be iid n(μ,σ2). Find a function of S2, the sample variance, say g(S2), that satisfies Eg(S2) = σ. (Hint: Try g(S2) = c√S2, where c is a constant.)
Establish the following recursion relations for means and variances. Let X̄n and S²n be the mean and variance, respectively, of X1, ..., Xn. Then suppose another observation, Xn+1, becomes available. Show that (a) X̄n+1 = (Xn+1 + nX̄n)/(n + 1) and (b) nS²n+1 = (n − 1)S²n + (n/(n + 1))(Xn+1 − X̄n)².
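A numerical sanity check of these recursions (using the standard mean update X̄n+1 = (Xn+1 + nX̄n)/(n + 1), which the truncated display in part (a) appears to state) can be done by comparing the recursive updates against direct recomputation:

```python
import random

def mean_var(xs):
    """Sample mean and sample variance (n-1 denominator), computed directly."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / (n - 1)
    return m, s2

rng = random.Random(42)
xs = [rng.gauss(0, 1) for _ in range(10)]
x_new = rng.gauss(0, 1)

m_n, s2_n = mean_var(xs)
m_direct, s2_direct = mean_var(xs + [x_new])

# Recursive updates from the exercise.
n = len(xs)
m_rec = (x_new + n * m_n) / (n + 1)
s2_rec = ((n - 1) * s2_n + (n / (n + 1)) * (x_new - m_n) ** 2) / n

assert abs(m_rec - m_direct) < 1e-9
assert abs(s2_rec - s2_direct) < 1e-9
```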
Let Xi, i = 1, 2, 3, be independent with n(i, i²) distributions. For each of the following situations, use the Xi's to construct a statistic with the indicated distribution. (a) Chi squared with 3 degrees of freedom (b) t distribution with 2 degrees of freedom (c) F distribution with 1 and 2 degrees …
Let X be a random variable with an Fp,q distribution. (a) Derive the pdf of X. (b) Derive the mean and variance of X. (c) Show that 1/X has an Fq,p distribution. (d) Show that (p/q)X/[1 + (p/q)X] has a beta distribution with parameters p/2 and q/2.
Let X be a random variable with a Student's t distribution with p degrees of freedom. (a) Derive the mean and variance of X. (b) Show that X² has an F distribution with 1 and p degrees of freedom. (c) Let f(x|p) denote the pdf of X. Show that, at each value of x, −∞ < x < ∞, f(x|p) converges to the n(0, 1) pdf as p → ∞, so that X converges in distribution to a …
(a) Prove that the χ² distribution is stochastically increasing in its degrees of freedom; that is, if p > q, then for any a, P(χ²p > a) ≥ P(χ²q > a), with strict inequality for some a.
a. We can see that the t distribution is a mixture of normals using the following argument: … where Tν is a t random variable with ν degrees of freedom. Using the Fundamental Theorem of Calculus and interpreting P(χ²ν = x) as a pdf, we obtain … a scale mixture of normals. Verify this formula by direct …
What is the probability that the larger of two continuous iid random variables will exceed the population median? Generalize this result to samples of size n.
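The general answer follows directly from independence (a worked derivation, not the book's solution; m denotes the population median, so P(Xᵢ ≤ m) = 1/2 for a continuous distribution):

```latex
P\bigl(X_{(n)} > m\bigr)
  = 1 - P(X_1 \le m, \ldots, X_n \le m)
  = 1 - \prod_{i=1}^{n} P(X_i \le m)
  = 1 - \left(\tfrac{1}{2}\right)^{n}
```

For a sample of size n = 2 this gives 1 − 1/4 = 3/4.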
Let X and Y be iid n(0, 1) random variables, and define Z = min(X, Y). Prove that Z² ~ χ²1.
Let Ui, i = 1, 2, ..., be independent uniform(0, 1) random variables, and let X have distribution P(X = x) = c/x!, x = 1, 2, 3, ..., where c = 1/(e − 1). Find the distribution of Z = min{U1, ..., UX}. (Note that the distribution of Z|X = x is that of the first-order statistic from a sample of size x.)
Let X1, ..., Xn be a random sample from a population with pdf … Let X(1) …
As a generalization of the previous exercise, let X1, ..., Xn be iid with pdf … Let X(1) …
Let X1, ..., Xn be iid with pdf fX(x) and cdf FX(x), and let X(1) < · · · < X(n) be the order statistics. (a) Find an expression for the conditional pdf of X(i) given X(j) in terms of fX and FX. (b) Find the pdf of V|R = r, where V and R are defined in Example 5.4.7.
A manufacturer of booklets packages them in boxes of 100. It is known that, on the average, the booklets weigh 1 ounce, with a standard deviation of .05 ounce. The manufacturer is interested in calculating P(100 booklets weigh more than 100.4 ounces), a number that would help detect whether too
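The CLT approximation here is a short calculation (a sketch, not the book's full solution): the total weight of 100 booklets has approximate mean 100 ounces and standard deviation 0.05·√100 = 0.5 ounces.

```python
import math

# Normal approximation: total weight ~ approximately n(100, 0.5^2).
mu_total = 100 * 1.0
sd_total = 0.05 * math.sqrt(100)

z = (100.4 - mu_total) / sd_total                 # standardized value, = 0.8
p = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))    # P(Z > 0.8) for Z ~ n(0,1)
# p is approximately 0.2119
```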
Let X1, ..., Xn be iid random variables with continuous cdf FX, and suppose EXi = μ. Define the random variables Y1, ..., Yn by … Find the distribution of ∑ⁿᵢ₌₁ Yᵢ.
If X̄1 and X̄2 are the means of two independent samples of size n from a population with variance σ², find a value for n so that P(|X̄1 − X̄2| < σ/5) ≈ .99. Justify your calculations.
Suppose X̄ is the mean of 100 observations from a population with mean μ and variance σ² = 9. Find limits between which X̄ − μ will lie with probability at least .90. Use both Chebychev's Inequality and the Central Limit Theorem, and comment on each.
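Both limits reduce to one-line computations (a sketch under the stated values σ = 3, n = 100, so the standard error is 0.3):

```python
import math
from statistics import NormalDist

se = 3 / math.sqrt(100)   # standard error of the mean, = 0.3

# Chebychev: P(|Xbar - mu| < k*se) >= 1 - 1/k^2 = 0.90  =>  k = sqrt(10)
cheb_limit = math.sqrt(10) * se                 # ~0.949

# CLT: P(|Z| < z) = 0.90  =>  z = inverse normal cdf at 0.95
clt_limit = NormalDist().inv_cdf(0.95) * se     # ~0.493
```

The CLT interval is roughly half the width; Chebychev's is looser but holds for any distribution with finite variance.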
Let X1, X2, ... be a sequence of random variables that converges in probability to a constant a. Assume that P(Xi > 0) = 1 for all i. a. Verify that the sequences defined by Yi = √Xi and Y′i = a/Xi converge in probability. b. Use the results in part (a) to prove the fact used in Example 5.5.18, …
Let Xn be a sequence of random variables that converges in distribution to a random variable X. Let Yn be a sequence of random variables with the property that for any finite number c, … Show that for any finite number c, … (This is the type of result used in the discussion of the power properties of …
Let X1, ..., Xn be a random sample from a population with mean μ and variance σ². Show that … Thus, the normalization of X̄n in the Central Limit Theorem gives random variables that have the same mean and variance as the limiting n(0, 1) distribution.
Stirling's Formula (derived in Exercise 1.28), which gives an approximation for factorials, can be easily derived using the CLT. (a) Argue that, if Xi ~ exponential(1), i = 1, 2, ..., all independent, then for every x, … where Z is a standard normal random variable. (b) Show that differentiating both sides …
In Example 5.5.16, a normal approximation to the negative binomial distribution was given. Just as with the normal approximation to the binomial distribution given in Example 3.3.2, the approximation might be improved with a "continuity correction." For Xi's defined as in Example 5.5.16, let Vn = …
This exercise, and the two following, will look at some of the mathematical details of convergence. (a) Prove Theorem 5.5.4. (Hint: Since h is continuous, given ε > 0 we can find a δ such that |h(xn) − h(x)| < ε whenever |xn − x| < δ. Translate this into probability statements.) (b) In …
Prove Theorem 5.5.13; that is, show that … a. Set ε = |x − μ| and show that if x > μ, then P(Xn ≤ x) ≥ P(|Xn − μ| ≤ ε). Deduce the ⇒ implication. b. Use the fact that {x : |x − μ| > ε} = {x : x − μ < −ε} ∪ {x : x − μ > ε} to deduce the ⇐ implication. (See Billingsley 1995, Section 25, for a …
Fill in the details in the proof of Theorem 5.5.24.(a) Show that if √n(Yn - μ) → n(0, σ2) in distribution, then Yn → μ in probability.(b) Give the details for the application of Slutsky's Theorem (Theorem 5.5.17).
For the situation of Example 5.6.1, calculate the probability that at least 75% of the components last 150 hours when (a) c = 300, X ~ gamma(a, b), a = 4,b = 5. (b) c = 100, X ~ gamma(a, b), a = 20, b = 5. (c) c = 100, X ~ gamma(a,b), a = 20.7, b = 5. (In parts (a) and (b) it is possible to
Verify the distributions of the random variables in (5.6.5).
Let U ~ uniform(0, 1). (a) Show that both −log U and −log(1 − U) are exponential random variables. (b) Show that X = log(U/(1 − U)) is a logistic(0, 1) random variable. (c) Show how to generate a logistic(μ, β) random variable.
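Part (c) is an inverse-transform construction; a minimal sketch (function name and parameters are illustrative, not from the book): the logistic(0, 1) cdf F(x) = 1/(1 + e^(−x)) has inverse F⁻¹(u) = log(u/(1 − u)), so location-scale shifting gives a logistic(μ, β) variable.

```python
import math
import random

def logistic_rv(mu, beta, rng):
    """Generate one logistic(mu, beta) variate by inverse transform."""
    u = rng.random()
    return mu + beta * math.log(u / (1 - u))

# Deterministic check of the inverse relationship F(F^{-1}(u)) = u:
for u in (0.1, 0.5, 0.9):
    x = math.log(u / (1 - u))
    assert abs(1 / (1 + math.exp(-x)) - u) < 1e-9

sample = logistic_rv(0.0, 1.0, random.Random(0))
```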
Let X1, ..., Xn be iid with pdf fX(x), and let X̄ denote the sample mean. Show that fX̄(x) = n fX1+···+Xn(nx), …
One of the earlier methods (not one of the better ones) of generating pseudo-random standard normal random variables from uniform random variables is to take X = (∑¹²ᵢ₌₁ Ui) − 6, where the Ui's are iid uniform(0, 1). (a) Justify the fact that X is approximately n(0, 1). (b) Can you think of any obvious …
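The justification in (a) is the CLT: each uniform(0, 1) has mean 1/2 and variance 1/12, so the sum of 12 has mean 6 and variance 1, and subtracting 6 centers it. A small simulation sketch (seed and sample size are arbitrary):

```python
import random

def crude_normal(rng):
    """Classical sum-of-12-uniforms generator: mean 0, variance 1 by CLT."""
    return sum(rng.random() for _ in range(12)) - 6

rng = random.Random(123)
xs = [crude_normal(rng) for _ in range(100_000)]
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)
# One obvious defect for part (b): the output is bounded, |X| <= 6,
# whereas a true normal has unbounded support.
```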
For each of the distributions in the previous exercise: (a) Generate 1,000 variables from the indicated distribution. (b) Compare the mean, variance, and histogram of the generated random variables with the theoretical values.
Park et al. (1996) describe a method for generating correlated binary variables based on the following scheme. Let X1, X2, X3 be independent Poisson random variables with means λ1, λ2, λ3, respectively, and create the random variables Y1 = X1 + X3 and Y2 = X2 + X3. (a) Show that Cov(Y1, Y2) = …
Prove that the algorithm of Example 5.6.7 generates a beta (a,b) random variable.
If X has pdf fx(x) and Y, independent of X, has pdf fy(y), establish formulas, similar to (5.2.3), for the random variable Z in each of the following situations. (a) Z = X -Y (b) Z = XY (c) Z = X/Y
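For reference, the formulas to be established are the standard transformation results for independent X and Y (stated here as a guide, paralleling the convolution formula for Z = X + Y):

```latex
f_Z(z) = \int_{-\infty}^{\infty} f_X(z + y)\, f_Y(y)\, dy
  \qquad (Z = X - Y),
\qquad
f_Z(z) = \int_{-\infty}^{\infty} f_X(z/y)\, f_Y(y)\, \frac{1}{|y|}\, dy
  \qquad (Z = XY),
\qquad
f_Z(z) = \int_{-\infty}^{\infty} f_X(zy)\, f_Y(y)\, |y|\, dy
  \qquad (Z = X/Y).
```

Each follows by differentiating the cdf of Z after conditioning on Y = y.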
(a) Suppose it is desired to generate Y ~ beta(a, b), where a and b are not integers. Show that using V ~ beta([a], [b]) will result in a finite value of M = sup_y fY(y)/fV(y). (b) Suppose it is desired to generate Y ~ gamma(a, b), where a and b are not integers. Show that using V ~ gamma([a], …
For generating Y ~ n(0, 1) using an Accept/Reject Algorithm, we could generate U ~ uniform, V ~ exponential(λ), and attach a random sign to V (± each with equal probability). What value of λ will optimize this algorithm?
A variation of the importance sampling algorithm of Exercise 5.64 can actually produce an approximate sample from f. Again let X ~ f and generate Y1, Y2, ..., Ym, iid from g. Calculate qi = [f(Yi)/g(Yi)]/[∑ᵐⱼ₌₁ f(Yj)/g(Yj)]. Then generate random variables X* from the discrete distribution on Y1, …
In many instances the Metropolis Algorithm is the algorithm of choice because either (i) there are no obvious candidate densities that satisfy the Accept/Reject supremum condition, or (ii) the supremum condition is difficult to verify, or (iii) laziness leads us to substitute computing power for
Show that the pdf fY(y) is a stable point of the Metropolis Algorithm. That is, if Zi ~ fY(y), then Zi+1 ~ fY(y).
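A minimal random-walk Metropolis sketch illustrates the algorithm these two exercises discuss (the n(0, 1) target, uniform proposal width, seed, and chain length are all illustrative choices, not from the book). Stability means that once the chain's marginal distribution is the target, it stays there.

```python
import math
import random

def metropolis_normal(n_steps, seed=7):
    """Random-walk Metropolis chain targeting n(0,1) with uniform proposals."""
    rng = random.Random(seed)
    x, chain = 0.0, []
    for _ in range(n_steps):
        y = x + rng.uniform(-1.0, 1.0)          # symmetric proposal
        # Accept with probability min(1, f(y)/f(x)) for f proportional
        # to exp(-x^2/2); compare on the log scale for stability.
        if math.log(rng.random()) < (x * x - y * y) / 2:
            x = y
        chain.append(x)
    return chain

chain = metropolis_normal(50_000)
mean = sum(chain) / len(chain)
var = sum((c - mean) ** 2 for c in chain) / len(chain)
```

The empirical mean and variance should be close to the target's 0 and 1, up to Monte Carlo error from the correlated draws.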
In Example 5.2.10, a partial fraction decomposition is needed to derive the distribution of the sum of two independent Cauchy random variables. This exercise provides the details that are skipped in that example.(a) Find the constants A, B, C, and D that satisfywhere A, B, C, and D may depend on z
Let X1, ..., Xn be a random sample, where X̄ and S² are calculated in the usual way. (a) Assume now that the Xi's have a finite fourth moment, and denote θ1 = EXi, θj = E(Xi − θ1)^j, j = 2, 3, 4. (b) Show that Var S² = … (c) Find Cov(X̄, S²) in terms of θ1, ..., θ4. Under what conditions is Cov(X̄, S²) = 0?
Establish the Lagrange Identity, that for any numbers a1, a2,..., an and b1,b2,..., bn,Use the identity to show that the correlation coefficient is equal to 1 if and only if all of the sample points lie on a straight line (Wright 1992). (Hint: Establish the identity for n = 2; then induct.)
Let X be one observation from a n(0, σ2) population. Is |X| a sufficient statistic?
Show that the minimal sufficient statistic for the uniform(θ, θ + 1), found in Example 6.2.15, is not complete.
Refer to the pdfs given in Exercise 6.9. For each, let X(1) < · · · < X(n) be the ordered sample, and define Yi = X(n) − X(i), i = 1, ..., n − 1. a. For each of the pdfs in Exercise 6.9, verify that the set (Y1, ..., Yn−1) is ancillary for θ. Try to prove a general theorem, like Example 6.2.18, …
A natural ancillary statistic in most problems is the sample size. For example, let N be a random variable taking values 1, 2,... with known probabilities p1, p2, . . ., where ∑pi = 1. Having observed N = n, perform n Bernoulli trials with success probability θ, getting X successes. a. Prove
Suppose X1 and X2 are iid observations from the pdf f(x|α) = αx^(α−1)e^(−x^α), x > 0, α > 0. Show that (log X1)/(log X2) is an ancillary statistic.
Let X1, ..., Xn be a random sample from a location family. Show that M − X̄ is an ancillary statistic, where M is the sample median.
Let X1, ..., Xn be iid n(θ, aθ²), where a is a known constant and θ > 0. a. Show that the parameter space does not contain a two-dimensional open set. b. Show that the statistic T = (X̄, S²) is a sufficient statistic for θ, but the family of distributions is not complete.
Let X1,..., Xn be iid with geometric distribution Pθ(X = x) = θ(1 - θ)x-1, x = 1, 2,..., 0 < θ < 1. Show that ∑Xi is sufficient for θ, and find the family of distributions of ∑Xi. Is the family complete?
Let X1,..., Xn be iid Poisson(λ). Show that the family of distributions of ∑Xi is complete. Prove completeness without using Theorem 6.2.25.
The random variable X takes the values 0, 1, 2 according to one of the following distributions: … In each case determine whether the family of distributions of X is complete.
Let X1, ..., Xn be independent random variables with densities … Prove that T = minᵢ(Xi/i) is a sufficient statistic for θ.
For each of the following pdfs let X1, ..., Xn be iid observations. Find a complete sufficient statistic, or show that one does not exist. a. … b. … c. … d. … e. …
Let X be one observation from the pdf … a. Is X a complete sufficient statistic? b. Is |X| a complete sufficient statistic? c. Does f(x|θ) belong to the exponential class?
Let X1,..., Xn be a random sample from a population with pdf f(x|θ) = θxθ-1 , 0 < x < 1, θ > 0. a. Is ∑Xi sufficient for θ? b. Find a complete sufficient statistic for θ.
Let X1,..., Xn be a random sample from a uniform distribution on the interval (θ, 2θ), θ > 0. Find a minimal sufficient statistic for θ. Is the statistic complete?
Consider the following family of distributions: P = {Pλ(X = x): Pλ(X = x) = λxe-λ / x!; x = 0, 1, 2,...; λ = 0 or 1}. This is a Poisson family with λ restricted to be 0 or 1. Show that the family P is not complete, demonstrating that completeness can be dependent on the range of the parameter.
We have seen a number of theorems concerning sufficiency and related concepts for exponential families. Theorem 5.2.11 gave the distribution of a statistic whose sufficiency is characterized in Theorem 6.2.10 and completeness in Theorem 6.2.25. But if the family is curved, the open set condition of
Let X1, ..., Xn be a random sample from the inverse Gaussian distribution with pdf … a. Show that the statistics … are sufficient and complete. b. For n = 2, show that … has an inverse Gaussian distribution, nλ/T has a χ²n−1 distribution, and they are independent. The inverse Gaussian …
The concept of minimal sufficiency can be extended beyond parametric families of distributions. Show that if X1,..., Xn are a random sample from a density f that is unknown, then the order statistics are minimal sufficient.
Let X1, ..., Xn be a random sample from the pdf … Find a two-dimensional sufficient statistic for (μ, σ).
Let X1,..., Xn be a random sample from the pdf f(x|μ) = e-(x-μ) where - ∞ < μ < x < ∞. a. Show that X(1) = mini Xi is a complete sufficient statistic. b. Use Basu's Theorem to show that X(1) and S2 are independent.
Boos and Hughes-Oliver (1998) detail a number of instances where application of Basu's Theorem can simplify calculations. Here are a few. a. Let X1, ..., Xn be iid n(μ, σ²), where σ² is known. (i) Show that X̄ is complete sufficient for μ, and S² is …
Prove the Likelihood Principle Corollary. That is, assuming both the Formal Sufficiency Principle and the Conditionality Principle, prove that if E = (X, θ, {f(x|θ)}) is an experiment, then Ev(E, x) should depend on E and x only through L(θ|x).
Fill in the gaps in the proof of Theorem 6.3.6, Birnbaum's Theorem. a. Define g(t|θ) = g((j, xj)|θ) = f*((j, xj)|θ) and … Show that T(j, xj) is a sufficient statistic in the E* experiment by verifying that g(T(j, xj)|θ)h(j, xj) = g((j, …
A risky experimental treatment is to be given to at most three patients. The treatment will be given to one patient. If it is a success, then it will be given to a second. If it is a success, it will be given to a third patient. Model the outcomes for the patients as independent Bernoulli (p)
Joshi and Nabar (1989) examine properties of linear estimators for the parameter in the so-called "Problem of the Nile," where (X, Y) has the joint density f(x, y|θ) = exp{−(θx + y/θ)}, x > 0, y > 0. a. For an iid sample of size n, show that the Fisher …
Measurement equivariance requires the same inference for two equivalent data points: x, measurements expressed in one scale, and y, exactly the same measurements expressed in a different scale. Formal invariance, in the end, leads to a relationship between the inferences at two different data
Prove Theorem 6.2.10. Let X1, ..., Xn be iid observations from a pdf or pmf f(x|θ) that belongs to an exponential family given by … where θ = (θ1, θ2, ..., θd), d ≤ k. Then T(X) = (∑ⁿⱼ₌₁ t1(Xj), ..., ∑ⁿⱼ₌₁ tk(Xj)) is a sufficient statistic for θ.
Let X1, ..., Xn be iid observations from a location-scale family. Let T1(X1, ..., Xn) and T2(X1, ..., Xn) be two statistics that both satisfy Ti(ax1 + b, ..., axn + b) = aTi(x1, ..., xn) for all values of x1, ..., xn and b and for any a > 0. a. Show that T1/T2 is an ancillary statistic. b. Let R be …
Suppose that for the model in Example 6.4.6, the inference to be made is an estimate of the mean μ. Let T(x) be the estimate used if X = x is observed. If ga(X) = Y = y is observed, then let T*(y) be the estimate of μ + a, the mean of each Yi. If μ + a is estimated by T*(y), then μ would be