Statistical Inference, 2nd edition, by George Casella and Roger L. Berger - Solutions
Suppose that we have two independent random samples: X1,..., Xn are exponential(θ), and Y1,..., Ym are exponential(μ). (a) Find the LRT of H0: θ = μ versus H1: θ ≠ μ. (b) Show that the test in part (a) can be based on the
We have already seen the usefulness of the LRT in dealing with problems with nuisance parameters. We now look at some other nuisance parameter problems. (a) Find the LRT of H0: θ ≤ 0 versus H1: θ > 0 based on a sample X1,..., Xn from a population with probability
A special case of a normal family is one in which the mean and the variance are related, the n(θ, aθ) family. If we are interested in testing this relationship, regardless of the value of θ, we are again faced with a nuisance parameter problem. (a) Find the LRT of H0: a = 1 versus H1: a ≠ 1
Stefanski (1996) establishes the arithmetic-geometric-harmonic mean inequality (see Example 4.7.8 and Miscellanea 4.9.2) using a proof based on likelihood ratio tests. Suppose that Y1,..., Yn are independent with pdfs λi e^(-λi yi), and we want to test H0: λ1 = ... = λn vs. H1: the λi are not all
If L(x) and U(x) satisfy Pθ(L(X) ≤ θ) = 1 - α1 and Pθ(U(X) ≥ θ) = 1 - α2, and L(x) ≤ U(x) for all x, show that Pθ(L(X) ≤ θ ≤ U(X)) = 1 - α1 - α2.
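A one-line sketch of the argument for this exercise, the key fact being that the events {L(X) > θ} and {U(X) < θ} are disjoint whenever L(x) ≤ U(x):

```latex
P_\theta\bigl(L(X) \le \theta \le U(X)\bigr)
  = 1 - P_\theta\bigl(L(X) > \theta\bigr) - P_\theta\bigl(U(X) < \theta\bigr)
  = 1 - \alpha_1 - \alpha_2,
```

since L(X) > θ and U(X) < θ together would force U(X) < θ < L(X), contradicting L ≤ U.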
If T is a continuous random variable with cdf FT(t|θ) and α1 + α2 = α, show that an α level acceptance region of the hypothesis H0: θ = θ0 is {t: α1 ≤ FT(t|θ0) ≤ 1 - α2}, with associated confidence 1 - α set {θ: α1 ≤ FT(t|θ) ≤ 1 - α2}.
Find a pivotal quantity based on a random sample of size n from a n(θ, θ) population, where θ > 0. Use the pivotal quantity to set up a 1 - α confidence interval for θ.
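One natural candidate pivot for this exercise (an assumption on my part, not quoted from the text) is Q = ∑(Xi - θ)²/θ, which is χ²_n because each (Xi - θ)/√θ is standard normal. A quick simulation sketch checks that Q's distribution does not depend on θ by comparing its sample mean (which should be near n) at two very different θ values:

```python
import random

# Check that Q = sum((X_i - theta)^2) / theta behaves like chi^2_n
# (mean n) regardless of theta, for X_i ~ n(theta, theta).
random.seed(2)

def mean_Q(theta, n=8, reps=20000):
    total = 0.0
    for _ in range(reps):
        q = sum((random.gauss(theta, theta ** 0.5) - theta) ** 2
                for _ in range(n)) / theta
        total += q
    return total / reps  # should be close to n for every theta

m_small = mean_Q(theta=0.5)
m_large = mean_Q(theta=9.0)
```

Both averages land near 8, consistent with the pivotal (θ-free) χ²_8 distribution; the interval for θ then comes from inverting the χ² cutoffs.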
Let X be a single observation from the beta(θ, 1) pdf. a. Let Y = -(log X)^(-1). Evaluate the confidence coefficient of the set [y/2, y]. b. Find a pivotal quantity and use it to set up a confidence interval having the same confidence coefficient as the interval in part (a). c. Compare the two
Let X1,..., Xn be iid n(μ, σ2), where both parameters are unknown. Simultaneous inference on both μ and σ can be made using the Bonferroni Inequality in a number of ways. a. Using the Bonferroni Inequality, combine the two confidence sets into one
Solve for the roots of the quadratic equation that defines Fieller's confidence set for the ratio of normal means (see Miscellanea 9.5.3). Find conditions on the random variables for which a. The parabola opens upward (the confidence set is an interval). b. The parabola opens downward (the
Let X1,..., Xn be iid n(θ, σ2), where σ2 is known. For each of the following hypotheses, write out the acceptance region of a level α test and the 1 - α confidence interval that results from inverting the test. a. H0: θ = θ0 versus H1: θ ≠ θ0 b. H0: θ ≥ θ0 versus H1: θ < θ0 c. H0:
Find a 1 - α confidence interval for θ, given X1,..., Xn iid with pdf a. f(x|θ) = 1, θ - 1/2 < x < θ + 1/2. b. f(x|θ) = 2x/θ^2, 0 < x < θ, θ > 0.
In this exercise we will investigate some more properties of binomial confidence sets and the Sterne (1954) construction in particular. As in Example 9.2.11, we will again consider the binomial(3,p) distribution.a. Draw, as a function of p, a graph of the four probability functions Pp(X
Prove part (b) of Theorem 9.2.12. Theorem 9.2.12. If FT(t|θ) is an increasing function of θ for each t, define θL(t) and θU(t) by FT(t|θU(t)) = 1 - α2, FT(t|θL(t)) = α1. Then the random interval [θL (T), θU(T)] is a 1 - α confidence interval for θ.
In Example 9.2.15 it was shown that a confidence interval for a Poisson parameter can be expressed in terms of chi squared cutoff points. Use a similar technique to show that if X ~ binomial(n, p), then a 1 - α confidence interval for p is ... where Fv1,v2,α is the upper α
a. Let X1,..., Xn be a random sample from a Poisson population with parameter λ and define Y = ∑Xi. In Example 9.2.15 a confidence interval for λ was found using the method of Section 9.2.3. Construct another interval for λ by inverting an LRT, and compare the
If X1,..., Xn are iid with pdf f(x|μ) = e^(-(x-μ)) I[μ,∞)(x), then Y = min{X1,..., Xn} is sufficient for μ with pdf fY(y|μ) = n e^(-n(y-μ)) I[μ,∞)(y). In Example 9.2.13 a 1 - α confidence interval for μ was found using the method of Section 9.2.3. Compare that interval to 1 - α intervals
a. Let X1,..., Xn be iid observations from an exponential(λ) pdf, where λ has the conjugate IG(a, b) prior, an inverted gamma with pdf ... Show how to find a 1 - α Bayes HPD credible set for λ. b. Find a 1 - α Bayes HPD credible set for
Let X1,..., Xn be a sequence of n Bernoulli(p) trials. a. Calculate a 1 - α credible set for p using the conjugate beta(a, b) prior. b. Using the relationship between the beta and F distributions, write the credible set in a form that is comparable to the form of the intervals in Exercise 9.21.
The independent random variables X1,..., Xn have the common distribution ... a. In Exercise 7.10 the MLEs of α and β were found. If α is a known constant, α0, find an upper confidence limit for β with confidence coefficient .95. b. Use the data of Exercise 7.10 to construct an interval estimate for
Complete the coverage probability calculation needed in Example 9.2.17. a. If χ²_2Y is a chi squared random variable with Y ~ Poisson(λ), show that E(χ²_2Y) = 2λ, Var(χ²_2Y) = 8λ, the mgf of χ²_2Y is given by exp(-λ + λ/(1 - 2t)), and ... as λ
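The mgf claim in this exercise follows from a one-line conditioning computation (conditionally on Y, χ²_2Y has the χ² mgf (1 - 2t)^(-Y)):

```latex
E\!\left[e^{t\chi^2_{2Y}}\right]
  = E\!\left[(1-2t)^{-Y}\right]
  = \sum_{y=0}^{\infty} (1-2t)^{-y}\,\frac{e^{-\lambda}\lambda^{y}}{y!}
  = \exp\!\left(-\lambda + \frac{\lambda}{1-2t}\right).
```

The mean and variance follow the same way: E(χ²_2Y) = E(2Y) = 2λ, and Var(χ²_2Y) = E(4Y) + Var(2Y) = 4λ + 4λ = 8λ.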
Let X ~ n(μ, 1) and consider the confidence interval Ca(x) = {μ: min{0, x - a} ≤ μ ≤ max{0, x + a}}. a. For a = 1.645, prove that the coverage probability of Ca(x) is exactly .95 for all μ, with the exception of μ = 0, where the coverage probability is 1. b. Now consider the so-called
Suppose that X1,..., Xn is a random sample from a n(μ, σ2) population. a. If σ2 is known, find a minimum value for n to guarantee that a .95 confidence interval for μ will have length no more than σ/4. b. If σ2 is unknown, find a minimum value for n to guarantee, with probability .90, that a
Let X1,..., Xn be a random sample from a n(μ, σ2) population. Compare expected lengths of 1 - α confidence intervals for μ that are computed assuming a. σ2 is known. b. σ2 is unknown.
Let X1,..., Xn be independent with pdfs fXi(x|θ) = e^(iθ-x) I(iθ,∞)(x). Prove that T = mini(Xi/i) is a sufficient statistic for θ. Based on T, find the 1 - α confidence interval for θ of the form [T + a, T + b] which is of minimum length.
Let X1,..., Xn be iid uniform(0, θ). Let Y be the largest order statistic. Prove that Y/θ is a pivotal quantity and show that the interval ... is the shortest 1 - α pivotal interval.
Prove a special case of Theorem 9.3.2. Let X ~ f(x), where f is a symmetric unimodal pdf. For a fixed value of 1 - α, of all intervals [a, b] that satisfy ∫_a^b f(x) dx = 1 - α, the shortest is obtained by choosing a and b so that ∫_-∞^a f(x) dx = α/2 and ∫_b^∞ f(x) dx = α/2.
Let X1,..., Xn be a random sample from a n(0, σ2X) population, and let Y1,..., Ym be a random sample from a n(0, σ2Y) population, independent of the Xs. Define λ = σ2Y/σ2X. a. Find the level α LRT of H0: λ = λ0 versus H1: λ ≠ λ0. b. Express the rejection region of the LRT of part (a) in terms of an F random
a. Prove the following, which is related to Theorem 9.3.2. Let X ~ f(x), where f is a strictly decreasing pdf on [0, ∞). For a fixed value of 1 - α, of all intervals [a, b] that satisfy ∫_a^b f(x) dx = 1 - α, the shortest is obtained by choosing a = 0 and b so that ∫_0^b f(x) dx = 1 - α. b.
Juola (1993) makes the following observation. If we have a pivot Q(X, θ), a 1 - α confidence interval involves finding a and b so that P(a ≤ Q ≤ b) = 1 - α or, more generally, ... a. Prove that the solution is C = {t: g(t) ...}. b. Apply the result in part (a) to get the shortest intervals in Exercises 9.37 and
Let X1,..., Xn be iid exponential(λ). a. Find a UMP size α hypothesis test of H0: λ = λ0 versus H1: λ ... b. Find a UMA 1 - α confidence interval based on inverting the test in part (a). Show that the interval can be expressed as ... c. Find the expected
Show that if A(θ0) is an unbiased level α acceptance region of a test of H0: θ = θ0 versus H1: θ ≠ θ0 and C(x) is the 1 - α confidence set formed by inverting the acceptance regions, then C(x) is an unbiased 1 - α confidence set.
Let X1,..., Xn be a random sample from a n(θ, σ2) population, where σ2 is known. Show that the usual one-sided 1 - α upper confidence bound {θ: θ < x̄ + zα σ/√n} is unbiased, and so is the corresponding lower confidence bound.
Let X1,..., Xn be a random sample from a n(θ, σ2) population, where σ2 is unknown. a. Show that the interval θ < x̄ + tn-1,α s/√n can be derived by inverting the acceptance region of an LRT. b. Show that the corresponding two-sided interval in (9.2.14) can also be derived by inverting the acceptance
We are to test H0: θ = θ0 versus H1: θ > θ0, where θ is the mean of one of two normal distributions and θ0 is a fixed but arbitrary value of θ. We observe the random variable X with distribution ... a. Show that the
In Example 9.2.5 a lower confidence bound was put on p, the success probability from a sequence of Bernoulli trials. This exercise will derive an upper confidence bound. That is, observing X1,..., Xn, where Xi ~ Bernoulli(p), we want an interval of the form [0, U(x1,..., xn)), where Pp[p ∈
If X1,..., Xn are iid from a location pdf f(x - θ), show that the confidence set C(x1,..., xn) = {θ: x̄ - k1 ≤ θ ≤ x̄ + k2}, where k1 and k2 are constants, has constant coverage probability.
Let X1,..., Xn be a random sample from a n(μ, σ2) population, where both μ and σ2 are unknown. Each of the following methods of finding confidence intervals for σ2 results in intervals of the form ... but in each case a and b will satisfy
Let X ~ n(μ, σ2), σ2 known. For each c ≥ 0, define an interval estimator for μ by C(x) = [x - cσ, x + cσ] and consider the loss in (9.3.4). a. Show that the risk function, R(μ, C), is given
The decision theoretic approach to set estimation can be quite useful (see Exercise 9.56) but it can also give some unsettling results, showing the need for thoughtful implementation. Consider again the case of X ~ n(μ, σ2), σ2 unknown, and suppose that we have
Let X1,..., Xn be iid n(μ, σ2), where σ2 is known. We know that a 1 - α confidence interval for μ is x̄ ± zα/2 σ/√n. a. Show that a 1 - α prediction interval for Xn+1 is x̄ ± zα/2 σ √(1 + 1/n). b. Show that a 1 - α tolerance interval for 100p% of the underlying population is given by
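The prediction interval in part (a) is easy to sanity-check by Monte Carlo (a sketch with made-up μ, σ, and n; the √(1 + 1/n) factor accounts for both the variability of x̄ and of the new observation):

```python
import math
import random

# Monte Carlo check: the interval xbar +/- z * sigma * sqrt(1 + 1/n)
# should contain a fresh draw X_{n+1} about 95% of the time for alpha = .05.
random.seed(1)
n, mu, sigma, z = 10, 5.0, 2.0, 1.96
trials, hits = 20000, 0
for _ in range(trials):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = sum(xs) / n
    half = z * sigma * math.sqrt(1 + 1 / n)
    x_new = random.gauss(mu, sigma)          # the future observation
    hits += (xbar - half <= x_new <= xbar + half)
coverage = hits / trials
```

The empirical coverage comes out near .95; dropping the 1/n term (i.e., using the plain confidence interval) would undercover.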
a. Derive a confidence interval for a binomial p by inverting the LRT of H0: p = p0 versus H1: p ≠ p0. b. Show that the interval is a highest density region from p^y (1 - p)^(n-y) and is not equal to the interval in (10.4.4).
a. Find the 1 - α confidence set for a that is obtained by inverting the LRT of H0: a = a0 versus H1: a ≠ a0 based on a sample X1,..., Xn from a n(θ, aθ) family, where θ is unknown. b. A similar question can be asked about the related family, the n(θ, aθ2) family. If X1,..., Xn are iid
Show that each of the three quantities listed in Example 9.2.7 is a pivot.
A random sample X1,..., Xn is drawn from a population with pdf ... Find a consistent estimator of θ and show that it is consistent.
This problem will look at some details and extensions of the calculation in Example 10.1.18. (a) Reproduce Figure 10.1.1, calculating the ARE for known β. (You can follow the calculations in Example A.0.7, or do your own programming.) (b) Verify that the ARE(·, ·) comparison is the same
Refer to Example 10.1.19. (a) Verify that the bootstrap mean and variance of the sample 2, 4, 9, 12 are 6.75 and 3.94, respectively. (b) Verify that 6.75 is the mean of the original sample. (c) Verify that, when we divide by n instead of n - 1, the bootstrap variance of the mean, and the usual
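For a sample this small the bootstrap can be done exhaustively rather than by random resampling: there are only 4^4 = 256 equally likely ordered resamples. A small sketch (using divisor B - 1 for the variance, which appears to be what reproduces the 3.94 figure):

```python
from itertools import product
from statistics import mean

# Exhaustive bootstrap for the sample mean of {2, 4, 9, 12}:
# enumerate all 4^4 = 256 equally likely resamples.
sample = [2, 4, 9, 12]
boot_means = [mean(r) for r in product(sample, repeat=len(sample))]

boot_mean = mean(boot_means)      # equals the original sample mean, 6.75
B = len(boot_means)               # 256
boot_var = sum((m - boot_mean) ** 2 for m in boot_means) / (B - 1)
```

`boot_mean` is exactly 6.75 and `boot_var` rounds to 3.94; using divisor B instead gives 3.92, which is the distinction part (c) is probing.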
Use the Law of Large Numbers to show that Var*B(θ̂) of (10.1.11) converges to Var*(θ̂) of (10.1.10) as B → ∞.
Efron (1982) analyzes data on law school admission, with the object being to examine the correlation between the LSAT (Law School Admission Test) score and the first-year GPA (grade point average). For each of 15 law schools, we have the pair of data points (average LSAT, average GPA). (a) Calculate
Another way in which underlying assumptions can be violated is if there is correlation in the sampling, which can seriously affect the properties of the sample mean. Suppose we introduce correlation in the case discussed in Exercise 10.2.1; that is, we observe X1,..., Xn, where Xi ~
The breakdown performance of the mean and the median continues with their scale estimate counterparts. For a sample X1,..., Xn: (a) Show that the breakdown value of the sample variance S2 = ∑(Xi - x̄)2/(n - 1) is 0. (b) A robust alternative is the median absolute deviation, or MAD, the median of
In this exercise we will further explore the ARE of the median to the mean, ARE(Mn, x̄n). (a) Verify the three AREs given in Example 10.2.4. (b) Show that ARE(Mn, x̄n) is unaffected by scale changes. That is, it doesn't matter whether the underlying pdf is f(x) or (1/σ)f(x/σ). (c)
If f(x) is a pdf symmetric around 0 and ρ is a symmetric function, show that ∫
Consider the situation of Example 10.6.2. (a) Verify that IF(x̄, x) = x - μ. (b) For the median we have T(F) = m if P(X ≤ m) = 1/2 or m = F^(-1)(1/2). If X ~ Fδ, show that ... and thus ... (c) Show that ... and complete the argument to calculate IF(M, x).
From (10.2.9) we know that an M-estimator can never be more efficient than a maximum likelihood estimator. However, we also know when it can be as efficient. (a) Show that (10.2.9) is an equality if we choose
A random sample X1,..., Xn is drawn from a population that is n(θ, θ), where θ > 0. (a) Show that the MLE of θ, θ̂, is a root of the quadratic equation θ̂2 + θ̂ - W = 0, where
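A quick numerical sketch of part (a), assuming (as in the standard statement of this exercise) that W = (1/n)∑Xi^2; the data below are made up. The positive root of the quadratic is θ̂ = (-1 + √(1 + 4W))/2:

```python
import math

# Hypothetical positive data from a n(theta, theta)-type sample
xs = [1.2, 0.7, 2.1, 1.5, 0.9]
W = sum(x * x for x in xs) / len(xs)          # W = (1/n) * sum(x_i^2), assumed

# Positive root of  theta^2 + theta - W = 0
theta_hat = (-1 + math.sqrt(1 + 4 * W)) / 2

# theta_hat satisfies the quadratic exactly (up to rounding) and is positive
residual = theta_hat ** 2 + theta_hat - W
```

Since W > 0, the discriminant 1 + 4W exceeds 1, so the quadratic always has exactly one positive root, which is why this root is taken as the MLE.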
Binomial data gathered from more than one population are often presented in a contingency table. For the case of two populations, the table might look like this: ... where Population 1 is binomial(n1, p1), with S1 successes and F1 failures, and Population 2 is binomial(n2, p2), with S2 successes and F2
(a) Let (X1,..., Xn) ~ multinomial(m, p1,..., pn). Consider testing H0: p1 = p2 versus H1: p1 ≠ p2. A test that is often used, called McNemar's Test, rejects H0 if ... Show that this test statistic has the form (as in Exercise 10.31) where the Xis are the observed cell frequencies and the
Fill in the gap in Theorem 10.3.1. Use Theorem 10.1.12 and Slutsky's Theorem (Theorem 5.5.17) to show that (θ̂ - θ)√(-l″(θ̂|x)) → n(0, 1), and therefore -2 log λ(X) → χ²1.
Let X1,..., Xn be a random sample from a n(μ, σ2) population. (a) If μ is unknown and σ2 is known, show that Z = √n(x̄ - μ0)/σ is a Wald statistic for testing H0: μ = μ0. (b) If σ2 is unknown and μ is known, find a Wald statistic for testing H0: σ = σ0.
Let X1,..., Xn be a random sample from a n(μ, σ2) population. (a) If μ is unknown and σ2 is known, show that Z = √n(x̄ - μ0)/σ is a score statistic for testing H0: μ = μ0. (b) If σ2 is unknown and μ is known, find a score statistic for testing H0: σ = σ0.
Expand the comparisons made in Example 10.3.7. (a) Another test based on Huber's M-estimator would be one that used a variance estimate, based on (10.3.6). Examine the performance of such a test statistic, and comment on its desirability (or lack of) as an alternative to either (10.3.8) or
A variation of the model in Exercise 7.19 is to let the random variables Y1,..., Yn satisfy Yi = βXi + εi, i = 1,... , n, where X1,... , Xn are independent n(μ, τ2) random variables, εi,..., εn are iid n(0, σ2), and the Xs and εs are independent. Exact variance calculations become quite
Let X1,..., Xn be iid negative binomial(r, p). We want to construct some approximate confidence intervals for the negative binomial parameters. (a) Calculate Wilks' approximation (10.4.3) and show how to form confidence intervals with this expression. (b) Find an approximate 1 - α confidence
In Example 10.4.7, two modifications were made to the Wald interval. (a) At y = 0 the upper interval endpoint was changed to 1 - (α/2)1/n, and at y = n the lower interval endpoint was changed to (α/2)1/n. Justify the choice of these endpoints. (b) The second modification was to truncate all
Solve for the endpoints of the approximate binomial confidence interval, with continuity correction, given in Example 10.4.6. Show that this interval is wider than the corresponding interval without continuity correction, and that the continuity corrected interval has a uniformly higher coverage
Let X1,..., Xn be iid negative binomial(r, p). (a) Complete the details of Example 10.4.9; that is, show that for small p, the interval ... is an approximate 1 - α confidence interval. (b) Show how to choose the endpoints in order to obtain a minimum length 1 - α interval.
For the situation of Example 10.1.8 show that for Tn = √n/n: (a) Var (Tn) = ∞. (b) If μ ≠ 0 and we delete the interval (-δ, δ) from the sample space, then Var (Tn) < ∞. (c) If μ ≠ 0, the probability content of the interval (-δ, δ) approaches 0.
Suppose that X1,..., Xn are iid Poisson(λ). Find the best unbiased estimator of (a) e-λ, the probability that X = 0. (b) λe-λ, the probability that X = 1. (c) For the best unbiased estimators of parts (a) and (b), calculate the asymptotic relative efficiency with respect to the MLE. Which
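For part (a), the best unbiased estimator is known to be T = ((n-1)/n)^Y with Y = ∑Xi (a standard result for this exercise, stated here as an assumption since the excerpt does not derive it). Its unbiasedness can be checked exactly by summing against the Poisson(nλ) pmf of Y:

```python
import math

# Exact unbiasedness check: E[((n-1)/n)^Y] with Y ~ Poisson(n*lambda)
# should equal e^{-lambda} = P(X = 0).
lam, n = 1.3, 5
mu = n * lam                      # mean of Y = sum of the X_i
t = (n - 1) / n
expectation = sum(
    t ** k * math.exp(-mu) * mu ** k / math.factorial(k)
    for k in range(80)            # tail beyond k = 80 is negligible for mu = 6.5
)
```

The sum agrees with e^(-λ) to machine precision, matching the pgf identity E[t^Y] = exp(nλ(t - 1)) evaluated at t = (n-1)/n.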
An ANOVA variance-stabilizing transformation stabilizes variances in the following approximate way. Let Y have mean θ and variance v(θ). (a) Use arguments as in Section 10.1.3 to show that a one-term Taylor series approximation of the variance of g(Y) is given by Var(g(Y)) ≈
Suppose we have a oneway ANOVA with five treatments. Denote the treatment means by θ1,..., θ5, where θ1 is a control and θ2, ..., θ5 are alternative new treatments, and assume that an equal number of observations per treatment is
Suppose that we have a oneway ANOVA with equal numbers of observations on each treatment, that is, ni = n, i = 1,..., k. In this case the F test can be considered an average t test. (a) Show that a t test of H0: θi = θi′ versus H1: θi
Under the oneway ANOVA assumptions, show that the likelihood ratio test of H0: θ1 = θ2 = ... = θk is given by the F test of (11.2.14).
The Scheffé simultaneous interval procedure actually works for all linear combinations, not just contrasts. Show that under the oneway ANOVA assumptions, if M = kFk,N-k,α (note the change in the numerator degrees of freedom), then the probability is 1 - α
(a) Show that for the t and F distributions, for any v, α, and k, tv,α/2 ≤ √((k - 1)Fk-1,v,α). (Recall the relationship between the t and the F. This inequality is a consequence of the fact that the distributions kFk,v are stochastically increasing in k for fixed v but is actually a weaker
In Theorem 11.2.5 we saw that the ANOVA null is equivalent to all contrasts being 0. We can also write the ANOVA null as the intersection over another set of hypotheses. (a) Show that the hypotheses H0: θ1 = θ2 = ... = θk versus H1: θi ≠ θj for some i, j and the hypotheses H0: θi - θj = 0
A multiple comparison procedure called the Protected LSD (Protected Least Significant Difference) is performed as follows. If the ANOVA F test rejects H0 at level α, then for each pair of means θi and θi′, declare the means different if ... Note that
Demonstrate that "data snooping," that is, testing hypotheses that are suggested by the data, is generally not a good practice. (a) Show that, for any random variable Y and constants a and b with a > b and P(Y > b) < 1, P(Y > a | Y > b) > P(Y > a). (b) Apply the inequality in part (a) to the size
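Part (a) of this exercise is a short conditional probability computation; since a > b makes {Y > a} a subset of {Y > b}:

```latex
P(Y > a \mid Y > b)
  = \frac{P(Y > a,\, Y > b)}{P(Y > b)}
  = \frac{P(Y > a)}{P(Y > b)}
  > P(Y > a),
```

where the inequality is strict because P(Y > b) < 1. This is the formal sense in which a test whose cutoff was suggested by the data rejects more often than its nominal level claims.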
Let Xi ~ gamma(λi, 1) independently for i = 1,..., n. Define Yi = ..., i = 1,..., n - 1, and ... (a) Find the joint and marginal distributions of Yi, i = 1,..., n. (b) Connect your results to any distributions that are commonly employed in the ANOVA.
Verify that the following transformations are approximately variance-stabilizing in the sense of Exercise 11.1. (a) Y ~ Poisson, g*(y) = √y (b) Y ~ binomial(n, p), g*(y) = sin^(-1)(√(y/n)) (c) Y has variance v(θ) = Kθ2 for some constant K, g*(y) = log(y).
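Part (a) can be checked numerically: the delta method predicts Var(√Y) ≈ 1/4 for every Poisson mean λ. A sketch that computes Var(√Y) exactly by summing the pmf (in log space to avoid overflow; the λ values are arbitrary):

```python
import math

def var_sqrt_poisson(lam, kmax=400):
    """Exact Var(sqrt(Y)) for Y ~ Poisson(lam), truncated far in the tail."""
    # pmf in log space: k*log(lam) - lam - log(k!)
    pmf = [math.exp(k * math.log(lam) - lam - math.lgamma(k + 1))
           for k in range(kmax)]
    mean_sqrt = sum(p * math.sqrt(k) for k, p in enumerate(pmf))
    mean_y = sum(p * k for k, p in enumerate(pmf))   # equals lam up to truncation
    return mean_y - mean_sqrt ** 2

variances = [var_sqrt_poisson(lam) for lam in (10, 30, 60)]
```

All three values sit close to 0.25, independent of λ, which is exactly what "variance-stabilizing" means here.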
(a) Illustrate the partitioning of the sums of squares in the ANOVA by calculating the complete ANOVA table for the following data. To determine diet quality, male weanling rats were fed diets with various protein levels. Each of 15 rats was randomly assigned to one of three diets, and their weight
Use the model in Miscellanea 11.5.3. (a) Show that the mean and variance of Yij are E Yij = μ + τi and Var Yij = σ2B + σ2. (b) If ∑ai = 0, show that the unconditional variance of ∑ai Ȳi. is Var(∑ai Ȳi.) = (1/r)(σ2 + σ2B)(1 - ρ) ∑ai2, where ρ = intraclass correlation.
In Section 11.3.1, we found the least squares estimators of α and β by a two-stage minimization. This minimization can also be done using partial derivatives. (a) Compute ∂RSS/∂c and ∂RSS/∂d and set them equal to 0. Show that the
Observations (xi, Yi), i = 1,... ,n, follow the model Yi = α + βxi + εi, where E εi = 0, Var εi = σ2, and Cov(εi, εj) = 0 if i ≠ j. Find the best linear unbiased estimator of α.
Show that in the conditional normal model for simple linear regression, the MLE of σ2 is given by
Consider the residuals ε̂1,..., ε̂n defined in Section 11.3.4 by ε̂i = Yi - α̂ - β̂xi. (a) Show that E ε̂i = 0. (b) Verify that
The Box-Cox family of power transformations (Box and Cox 1964) is defined by ... where λ is a free parameter. (a) Show that, for each y, g*λ(y) is continuous in λ. In particular, show that ... (b) Find the function v(θ), the approximate variance of Y, that
Fill in the details about the distribution of d left out of the proof of Theorem 11.3.3. (a) Show that the estimator ... can be expressed as ... where ... (b) Verify that ... (c) Verify that ...
Verify the claim in Theorem 11.3.3, that ε̂i is uncorrelated with α̂ and β̂. (Show that ε̂i = ∑ej Yj, where the ejs are given by (11.3.30). Then, using the facts that we can write α̂ = ∑cj Yj and β̂ = ∑dj Yj, verify that ∑ej cj = ∑ej dj = 0 and apply Lemma 11.3.2.)
Observations (xi, Yi), i = 1,..., n, are made according to the model Yi = α + βxi + εi, where x1,..., xn are fixed constants and ε1,..., εn are iid n(0, σ2). The model is then reparameterized as Yi = α′ + β′(xi - x̄) + εi. Let α̂ and β̂ denote the MLEs of α and β, respectively, and α̂′
Observations (Xi, Yi), i = 1,..., n, are made from a bivariate normal population with parameters (μX, μY, σ2X, σ2Y, ρ), and the model Yi = α + βxi + εi is going to be fit. (a) Argue that the hypothesis H0:
(a) Illustrate the partitioning of the sum of squares for simple linear regression by calculating the regression ANOVA table for the following data. Parents are often interested in predicting the eventual heights of their children. The following is a portion of the data taken from a study that
Observations Y1,...,Yn are described by the relationship Yi = θx2i + εi, where x1,... ,xn are fixed constants and ε1,..., εn are iid n(0, σ2). (a) Find the least squares estimator of θ. (b) Find the MLE of θ. (c) Find the best unbiased estimator of θ.
Observations Y1,..., Yn are made according to the model Yi = α + βxi + εi, where x1,..., xn are fixed constants and ε1,..., εn are iid n(0, σ2). Let α̂ and β̂ denote the MLEs of α and β. (a) Assume that x1,..., xn are observed values of iid random variables X1,..., Xn with distribution n(μX, σ2X).
We observe random variables Y1,...,Yn that are mutually independent, each with a normal distribution with variance σ2. Furthermore, EYi = βxi, where β is an unknown parameter and x1,... ,xn are fixed constants not all equal to 0. Find the MLE of β. Compute its mean and variance.
An ecologist takes data (xi, Yi), i = 1,..., n, where xi is the size of an area and Yi is the number of moss plants in the area. We model the data by Yi ~ Poisson(θxi), Yi independent. (a) Show that the least squares estimator of θ is ∑xiYi / ∑xi^2. Show that this estimator has variance
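The least squares claim in part (a) can be checked with a few lines of code on made-up data: the formula ∑xiYi / ∑xi^2 should zero the derivative of the residual sum of squares for the no-intercept model E(Yi) = θxi, so nudging θ in either direction can only increase the RSS.

```python
# Hypothetical areas and moss counts (not from the text)
xs = [1.0, 2.0, 3.0, 5.0]
ys = [3.0, 7.0, 8.0, 16.0]

# Closed-form least squares estimator for E(Y_i) = theta * x_i
theta_hat = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def rss(theta):
    """Residual sum of squares for a candidate slope theta."""
    return sum((y - theta * x) ** 2 for x, y in zip(xs, ys))

# The normal-equation derivative sum(x_i * (y_i - theta_hat * x_i)) vanishes
gradient = sum(x * (y - theta_hat * x) for x, y in zip(xs, ys))
```

Note that this is the least squares estimator, not the MLE: under the Poisson model the log likelihood is not the normal one, which is the contrast the rest of the exercise develops.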
Verify that the simultaneous confidence intervals in (11.3.42) have the claimed coverage probability.
In the discussion in Example 12.4.2, note that there was one observation from the potoroo data that had a missing value. Suppose that on the 24th animal it was observed that O2 = 16.3. (a) Write down the observed data and expected complete data log likelihood functions. (b) Describe the E step and
Suppose that random variables Yij are observed according to the overparameterized oneway ANOVA model in (11.2.2). Show that, without some restriction on the parameters, this model is not identifiable by exhibiting two distinct collections of parameters that lead to exactly the same distribution of
Under the oneway ANOVA assumptions: (a) Show that the set of statistics (Ȳ1., Ȳ2.,..., Ȳk., S2p) is sufficient for (θ1, θ2,..., θk, σ2). (b) Show that ... is independent of each Ȳi., i = 1,..., k. (See Lemma 5.3.3.) (c) If σ2 is known, explain how