Statistical Inference, 2nd Edition, George Casella and Roger L. Berger - Solutions
Again, suppose we have a random sample X1,..., Xn from (1/σ)f((x - θ)/σ), a location-scale pdf, but we are now interested in estimating σ2. We can consider three groups of transformations: G1 = {ga,c(x): -∞ < a < ∞, c > 0}, where ga,c(x1,..., xn) = (cx1 + a,..., cxn
Let X1,..., Xn be independent random variables with pdfs [pdfs omitted], where θ > 0. Find a two-dimensional sufficient statistic for θ.
Let X1,..., Xn be a random sample from a gamma(α, β) population. Find a two-dimensional sufficient statistic for (α, β).
Let f(x, y|θ1, θ2, θ3, θ4) be the bivariate pdf for the uniform distribution on the rectangle with lower left corner (θ1, θ2) and upper right corner (θ3, θ4) in ℜ2. The parameters satisfy θ1 < θ3 and θ2 < θ4. Let (X1, Y1),..., (Xn, Yn) be a random sample from this pdf. Find a
For each of the following distributions, let X1,..., Xn be a random sample. Find a minimal sufficient statistic for θ in each of parts a. through e. [pdfs omitted]
One observation is taken on a discrete random variable X with pmf f(x|θ), where θ ∈ {1, 2, 3}. Find the MLE of θ.
The independent random variables X1,..., Xn have the common distribution [distribution omitted], where the parameters α and β are positive. a. Find a two-dimensional sufficient statistic for (α, β). b. Find the MLEs of α and β. c. The length (in millimeters) of cuckoos' eggs found in hedge sparrow nests can be
Let X1,..., Xn be iid with pdf f(x|θ) = θx^(θ-1), 0 ≤ x ≤ 1, 0 < θ < ∞. a. Find the MLE of θ, and show that its variance → 0 as n → ∞. b. Find the method of moments estimator of θ.
Let X1,..., Xn be a random sample from a population with pmf Pθ(X = x) = θ^x (1 - θ)^(1-x), x = 0 or 1, 0 ≤ θ ≤ 1/2. a. Find the method of moments estimator and MLE of θ. b. Find the mean squared error of each of the estimators. c. Which estimator is preferred? Justify your choice.
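A numerical aside (not part of the exercise): for this model the method of moments estimator is X̄ and, under the restriction 0 ≤ θ ≤ 1/2, the MLE is min(X̄, 1/2). A small Monte Carlo sketch (Python with NumPy) of how their MSEs compare:

```python
# Hypothetical illustration: compare MSEs of the method of moments estimator
# (sample mean) and the restricted MLE min(Xbar, 1/2) for Bernoulli(theta),
# with theta constrained to [0, 1/2].
import numpy as np

rng = np.random.default_rng(0)
n, reps = 20, 100_000

for theta in (0.1, 0.3, 0.5):
    x = rng.binomial(1, theta, size=(reps, n))
    xbar = x.mean(axis=1)            # method of moments estimator
    mle = np.minimum(xbar, 0.5)      # MLE under the restriction theta <= 1/2
    print(theta,
          ((xbar - theta) ** 2).mean(),   # MSE of the MoM estimator
          ((mle - theta) ** 2).mean())    # MSE of the MLE
```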
Let X1,..., Xn be a sample from a population with double exponential pdf f(x|θ) = (1/2)e^(-|x-θ|), -∞ < x < ∞, -∞ < θ < ∞. Find the MLE of θ.
Let X1, X2,..., Xn be a sample from the inverse Gaussian pdf [pdf omitted]. a. Show that the MLEs of μ and λ are [expressions omitted]. b. Tweedie (1957) showed that μ̂n and λ̂n are independent, μ̂n having an inverse Gaussian distribution with parameters μ and nλ, and nλ/λ̂n
The Borel Paradox (Miscellanea 4.9.3) can also arise in inference problems. Suppose that X1 and X2 are iid exponential(θ) random variables. a. If we observe only X2, show that the MLE of θ is θ̂ = X2. b. Suppose that we instead observe only Z = (X2 - 1)/X1. Find the joint distribution of (X1, Z),
Let (X1, Y1),..., (Xn, Yn) be iid bivariate normal random variables (pairs) where all five parameters are unknown. a. Show that the method of moments estimators for μX, μY, σ2X, σ2Y, and ρ are [expressions omitted]. b. Derive the MLEs of the unknown parameters and show that they
Suppose that the random variables Y1,..., Yn satisfy Yi = βxi + εi, i = 1,..., n, where x1,..., xn are fixed constants, and ε1,..., εn are iid n(0, σ2), σ2 unknown. a. Find a two-dimensional sufficient statistic for (β, σ2). b. Find the MLE of β, and show that it is an unbiased
Let X1,..., Xn be a random sample from a gamma(α, β) population. a. Find the MLE of β, assuming α is known. b. If α and β are both unknown, there is no explicit formula for the MLEs of α and β, but the maximum can be found numerically. The result in part (a) can be used to reduce the
Consider Y1,..., Yn as defined in Exercise 7.19. a. Show that ∑Yi/∑xi is an unbiased estimator of β. b. Calculate the exact variance of ∑Yi/∑xi and compare it to the variance of the MLE.
Again, let Y1,..., Yn be as defined in Exercise 7.19. a. Show that [∑(Yi/xi)]/n is also an unbiased estimator of β. b. Calculate the exact variance of [∑(Yi/xi)]/n and compare it to the variances of the estimators in the previous two exercises.
This exercise will prove the assertions in Example 7.2.16, and more. Let X1,..., Xn be a random sample from a n(θ, σ2) population, and suppose that the prior distribution on θ is n(μ, τ2). Here we assume that σ2, μ, and τ2 are all known. a. Find the joint pdf of X̄ and θ. b. Show that
If S2 is the sample variance based on a sample of size n from a normal population, we know that (n - 1)S2/σ2 has a χ2(n-1) distribution. The conjugate prior for σ2 is the inverted gamma pdf, IG(α, β), given by [pdf omitted], where α and β are
Let X1,...,Xn be iid Poisson(λ), and let λ have a gamma(α, β) distribution, the conjugate family for the Poisson. a. Find the posterior distribution of λ. b. Calculate the posterior mean and variance.
We examine a generalization of the hierarchical (Bayes) model considered in Example 7.2.16 and Exercise 7.22. Suppose that we observe X1,..., Xn, where Xi|θi ~ n(θi, σ2), i = 1,..., n, independent, and θi ~ n(μ, τ2), i = 1,..., n, independent. a. Show that the marginal distribution of Xi is n(μ, σ2
In Example 7.2.16 we saw that the normal distribution is its own conjugate family. It is sometimes the case, however, that a conjugate prior does not accurately reflect prior knowledge, and a different prior is sought. Let X1,..., Xn be iid n(θ, σ2), and let θ have a double exponential
Refer to Example 7.2.17. a. Show that the likelihood estimators from the complete-data likelihood (7.2.11) are given by (7.2.12). b. Show that the limit of the EM sequence in (7.2.23) satisfies (7.2.16). c. A direct solution of the original (incomplete-data) likelihood equations is possible. Show that
An alternative to the model of Example 7.2.17 is the following, where we observe (Yi, Xi), i = 1, 2,..., n, where Yi ~ Poisson(mβτi) and (X1,..., Xn) ~ multinomial(m; τ), where τ = (τ1, τ2,..., τn) with ∑_{i=1}^n
Given a random sample X1,..., Xn from a population with pdf f(x|θ), show that maximizing the likelihood function, L(θ|x), as a function of θ is equivalent to maximizing log L(θ|x).
Prove Theorem 7.2.20. a. Show that, using (7.2.19), we can write [expression omitted] and, since θ̂(r+1) is a maximum, [inequality omitted]. When is the inequality an equality? b. Now use Jensen's inequality to show that [inequality omitted], which together with part (a) proves the theorem.
In Example 7.3.5 the MSE of the Bayes estimator, p̂B, of a success probability was calculated (the estimator was derived in Example 7.2.14). Show that the choice α = β = √(n/4) yields a constant MSE for p̂B.
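A numerical check of this claim (a sketch, not part of the text; it assumes the posterior-mean form p̂B = (∑Xi + α)/(n + α + β) of the estimator from Example 7.2.14):

```python
# Hypothetical check that alpha = beta = sqrt(n/4) makes the MSE of the Bayes
# estimator (S + alpha)/(n + alpha + beta), S ~ binomial(n, p), free of p.
import math

n = 25
a = b = math.sqrt(n / 4)
for p in (0.1, 0.25, 0.5, 0.9):
    var = n * p * (1 - p) / (n + a + b) ** 2   # variance term
    bias = (n * p + a) / (n + a + b) - p       # bias term
    print(p, var + bias ** 2)  # prints n / (4 * (n + sqrt(n))**2) every time
```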
The Pitman Estimator of Location (see Lehmann and Casella 1998, Section 3.1, or the original paper by Pitman 1939) is given by [expression omitted], where we observe a random sample X1,..., Xn from f(x - θ). Pitman showed that this estimator is the location-equivariant estimator with smallest mean squared
Let X1,..., Xn be a random sample from a population with pdf f(x|θ) = 1/(2θ), -θ < x < θ, θ > 0. Find, if one exists, a best unbiased estimator of θ.
For each of the following distributions, let X1,..., Xn be a random sample. Is there a function of θ, say g(θ), for which there exists an unbiased estimator whose variance attains the Cramer-Rao Lower Bound? If so, find it. If not, show why not. (a) f(x|θ) = θx^(θ-1), 0 < x < 1, θ > 0 (b)
Prove Lemma 7.3.11.
Let X1,..., Xn be iid Bernoulli(p). Show that the variance of X̄ attains the Cramer-Rao Lower Bound, and hence X̄ is the best unbiased estimator of p.
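For reference, the calculation this exercise asks for runs along these standard lines (X̄ denotes the sample mean):

```latex
\[
\mathrm{Var}_p(\bar X) = \frac{p(1-p)}{n}, \qquad
I_1(p) = \mathrm{E}_p\!\left[\left(\frac{\partial}{\partial p}
         \log p^{X}(1-p)^{1-X}\right)^{\!2}\right] = \frac{1}{p(1-p)},
\]
\[
\text{so the Cramer-Rao Lower Bound is }
\frac{1}{n\,I_1(p)} = \frac{p(1-p)}{n} = \mathrm{Var}_p(\bar X).
\]
```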
Let X1,..., Xn be a random sample from a population with mean μ and variance σ2. (a) Show that the estimator ∑aiXi is an unbiased estimator of μ if ∑ai = 1. (b) Among all unbiased estimators of this form (called linear unbiased estimators) find the one with minimum variance,
Exercise 7.42 established that the optimal weights are qi* = (1/σi2)/(∑j 1/σj2). A result due to Tukey (see Bloch and Moses 1988) states that if W = ∑i qiWi is an estimator based on another set of weights qi ≥ 0, ∑i qi = 1, then [inequality omitted], where
Let X1,..., Xn be iid n(θ, 1). Show that the best unbiased estimator of θ2 is X̄2 - (1/n). Calculate its variance (use Stein's Identity from Section 3.6), and show that it is greater than the Cramer-Rao Lower Bound.
Let X1, X2,..., Xn be iid from a distribution with mean μ and variance σ2, and let S2 be the usual unbiased estimator of σ2. In Example 7.3.4 we saw that, under normality, the MLE has smaller MSE than S2. In this exercise we will explore variance estimates some
Let X1, X2, and X3 be a random sample of size three from a uniform(θ, 2θ) distribution, where θ > 0. (a) Find the method of moments estimator of θ. (b) Find the MLE, θ̂, and find a constant k such that Eθ(kθ̂) = θ. (c) Which of the two estimators can be improved by using sufficiency? How? (d)
Suppose that when the radius of a circle is measured, an error is made that has a n(0, σ2) distribution. If n independent measurements are made, find an unbiased estimator of the area of the circle. Is it best unbiased?
Suppose that Xi, i = 1,..., n, are iid Bernoulli(p). (a) Show that the variance of the MLE of p attains the Cramer-Rao Lower Bound. (b) For n ≥ 4, show that the product X1X2X3X4 is an unbiased estimator of p4, and use this fact to find the best unbiased estimator of p4.
Let X1,..., Xn be iid exponential (λ). (a) Find an unbiased estimator of λ based only on Y = min{X1,..., Xn}. (b) Find a better estimator than the one in part (a). Prove that it is better. (c) The following data are high-stress failure times (in hours) of Kevlar/epoxy spherical vessels used in a
Consider estimating the binomial parameter k as in Example 7.2.9. a. Prove the assertion that the integer k̂ that satisfies the inequalities [omitted] and is the MLE is the largest integer less than or equal to 1/[expression omitted]. b. Let p = 1/2, n = 4, and X1 = 0, X2 = 20, X3 = 1, and X4 = 19. What is k̂?
Let X1,..., Xn be iid n(θ, θ2), θ > 0. For this model both X̄ and cS are unbiased estimators of θ, where [definition of c omitted]. (a) Prove that for any number a the estimator aX̄ + (1 - a)(cS) is an unbiased estimator of θ. (b) Find the value of a that produces
Gleser and Healy (1976) give a detailed treatment of the estimation problem in the n(θ, aθ2) family, where a is a known constant (of which Exercise 7.50 is a special case). We explore a small part of their results here. Again let X1,..., Xn be iid n(θ, θ2), θ > 0, and let X̄ and cS be as in
Let X1,..., Xn be iid Poisson(λ), and let X̄ and S2 denote the sample mean and variance, respectively. We now complete Example 7.3.8 in a different way. There we used the Cramer-Rao Bound; now we use completeness. (a) Prove that X̄ is the best unbiased estimator of λ without using the Cramer-Rao
Finish some of the details left out of the proof of Theorem 7.3.20. Suppose W is an unbiased estimator of τ(θ), and U is an unbiased estimator of 0. Show that if, for some θ = θ0, Covθ0(W, U) ≠ 0, then W cannot be the best unbiased estimator of τ(θ).
For each of the following pdfs, let X1,..., Xn be a sample from that distribution. In each case, find the best unbiased estimator of θ^r. (a) f(x|θ) = [pdf omitted] (b) f(x|θ) = e^(-(x-θ)), x > θ (c)
Prove the assertion made in the text preceding Example 7.3.24: If T is a complete sufficient statistic for a parameter θ, and h(X1,..., Xn) is any unbiased estimator of τ(θ), then ϕ(T) = E(h(X1,..., Xn)|T) is the best unbiased estimator of τ(θ).
Let X1,..., Xn+1 be iid Bernoulli(p), and define the function h(p) by [expression omitted], the probability that the first n observations exceed the (n + 1)st. (a) Show that [estimator omitted] is an unbiased estimator of h(p). (b) Find the best unbiased estimator of h(p).
Let X1,..., Xn+1 be iid n(μ, σ2). Find the best unbiased estimator of σ^p, where p is a known positive constant, not necessarily an integer.
Let X1,..., Xn be a random sample from the pdf f(x|θ) = θx^(-2), 0 < θ < x < ∞. a. What is a sufficient statistic for θ? b. Find the MLE of θ. c. Find the method of moments estimator of θ.
Show that the log of the likelihood function for estimating σ2, based on observing S2 ~ σ2χ2(ν)/ν, can be written in the form [expression omitted], where K1, K2, and K3 are constants, not dependent on σ2. Relate the above log likelihood to the loss function discussed in Example 7.3.27. See Anderson
Let X ~ n(μ, 1). Let δπ be the Bayes estimator of μ for squared error loss. Compute and graph the risk functions, R(μ, δπ), for π(μ) ~ n(0, 1) and π(μ) ~ n(0, 10). Comment on how the prior affects the risk function of the Bayes estimator.
A loss function investigated by Zellner (1986) is the LINEX (LINear-EXponential) loss, a loss function that can handle asymmetries in a smooth way. The LINEX loss is given by L(θ, a) = e^(c(a-θ)) - c(a - θ) - 1, where c is a positive constant. As the constant c varies, the loss function
The jackknife is a general technique for reducing bias in an estimator (Quenouille, 1956). A one-step jackknife estimator is defined as follows. Let X1,..., Xn be a random sample, and let Tn = Tn(X1,..., Xn) be some estimator of a parameter θ. In order to "jackknife" Tn we calculate the n
Let X1,..., Xn be iid with one of two pdfs. If θ = 0, then [pdf omitted], while if θ = 1, then [pdf omitted]. Find the MLE of θ.
One observation, X, is taken from a n(0, σ2) population. a. Find an unbiased estimator of σ2. b. Find the MLE of σ. c. Discuss how the method of moments estimator of σ might be found.
Let X1,..., Xn be iid with pdf f(x|θ) = 1/θ, 0 ≤ x ≤ θ, θ > 0. Estimate θ using both the method of moments and maximum likelihood. Calculate the means and variances of the two estimators. Which one should be preferred and why?
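A simulation sketch of the comparison (Python with NumPy; for this model 2X̄ is the method of moments estimator and the sample maximum is the MLE):

```python
# Hypothetical simulation: method of moments (2 * Xbar) versus the MLE
# (sample maximum) for Uniform(0, theta); print mean and MSE of each.
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 2.0, 10, 200_000
x = rng.uniform(0, theta, size=(reps, n))

mom = 2 * x.mean(axis=1)   # unbiased, variance theta^2 / (3n)
mle = x.max(axis=1)        # biased low, but smaller MSE
for name, est in (("MoM", mom), ("MLE", mle)):
    print(name, est.mean(), ((est - theta) ** 2).mean())
```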
In 1,000 tosses of a coin, 560 heads and 440 tails appear. Is it reasonable to assume that the coin is fair? Justify your answer.
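A back-of-the-envelope calculation under the normal approximation to the binomial (one standard way to justify an answer; a sketch, not the only acceptable approach):

```python
# Two-sided z test of p = 1/2 for 560 heads in 1,000 tosses,
# using the normal approximation to the binomial.
import math

n, heads, p0 = 1000, 560, 0.5
z = (heads - n * p0) / math.sqrt(n * p0 * (1 - p0))  # about 3.79
p_two_sided = math.erfc(z / math.sqrt(2))            # 2 * P(Z >= z), about 1.5e-4
print(z, p_two_sided)  # such a small p-value argues against a fair coin
```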
Let X1,..., Xn be iid Poisson(λ), and let λ have a gamma(α, β) distribution, the conjugate family for the Poisson. In Exercise 7.24 the posterior distribution of λ was found, including the posterior mean and variance. Now consider a Bayesian test of H0: λ ≤ λ0 versus H1: λ > λ0. (a) Calculate
In Exercise 7.23 the posterior distribution of σ2, the variance of a normal population, given S2, the sample variance based on a sample of size n, was found using a conjugate prior for σ2 (the inverted gamma pdf with parameters α and β). Based on observing S2, a decision about the hypotheses H0:
For samples of size n = 1, 4, 16, 64, 100 from a normal population with mean μ and known variance σ2, plot the power function of the following LRTs. Take α = .05. (a) H0: μ ≤ 0 versus H1: μ > 0 (b) H0: μ = 0 versus H1: μ ≠ 0
Let X1, X2 be iid uniform(θ, θ + 1). For testing H0: θ = 0 versus H1: θ > 0, we have two competing tests: [tests omitted]. (a) Find the value of C so that φ2 has the same size as φ1. (b) Calculate the power function of each test. Draw a
For a random sample X1,..., Xn of Bernoulli(p) variables, it is desired to test H0: p = .49 versus H1: p = .51. Use the Central Limit Theorem to determine, approximately, the sample size needed so that the two probabilities of error are both about .01. Use a test function that rejects H0 if ∑_{i=1}^n
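A rough numerical version of the CLT calculation (a sketch; the cutoff c = n/2 is chosen by the symmetry of .49 and .51 about 1/2):

```python
# Find the smallest n with both error probabilities at most .01 for the
# test that rejects H0 when the number of successes exceeds c = n/2.
import math

def norm_sf(z):                      # P(Z >= z) for a standard normal Z
    return 0.5 * math.erfc(z / math.sqrt(2))

p0, p1, target = 0.49, 0.51, 0.01
n = 1
while True:
    c = n / 2.0
    alpha = norm_sf((c - n * p0) / math.sqrt(n * p0 * (1 - p0)))      # Type I
    beta = 1 - norm_sf((c - n * p1) / math.sqrt(n * p1 * (1 - p1)))   # Type II
    if alpha <= target and beta <= target:
        break
    n += 1
print(n)  # on the order of 13,500 observations under this approximation
```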
Show that for a random sample X1,..., Xn from a n(0, σ2) population, the most powerful test of H0: σ = σ0 versus H1: σ = σ1, where σ0 < σ1, is given by [test omitted]. For a given value of α, the size of the Type I Error, show how the value of c is explicitly
One very striking abuse of α levels is to choose them after seeing the data and to choose them in such a way as to force rejection (or acceptance) of a null hypothesis. To see what the true Type I and Type II Error probabilities of such a procedure are, calculate size and power of the following two
Suppose that X1,..., Xn are iid with a beta(μ, 1) pdf and Y1,..., Ym are iid with a beta(θ, 1) pdf. Also assume that the Xs are independent of the Ys. (a) Find an LRT of H0: θ = μ versus H1: θ ≠ μ. (b) Show that the test in part (a) can be based on the
Let X1,..., Xn be a random sample from a n(θ, σ2) population, σ2 known. An LRT of H0: θ = θ0 versus H1: θ ≠ θ0 is a test that rejects H0 if |X̄ - θ0|/(σ/√n) > c. (a) Find an expression, in terms of standard normal probabilities, for the power function of this test. (b) The
The random variable X has pdf f(x) = e^(-x), x > 0. One observation is obtained on the random variable Y = X^θ, and a test of H0: θ = 1 versus H1: θ = 2 needs to be constructed. Find the UMP level α = .10 test and compute the Type II Error probability.
In a given city it is assumed that the number of automobile accidents in a given year follows a Poisson distribution. In past years the average number of accidents per year was 15, and this year it was 10. Is it justified to claim that the accident rate has dropped?
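One simple way to quantify the evidence (a sketch under the assumption that this year's count is a single Poisson observation, with λ = 15 as the null value):

```python
# P(X <= 10) for X ~ Poisson(15): a small probability would support the
# claim that the accident rate has dropped.
import math

lam, x_obs = 15, 10
p_value = sum(math.exp(-lam) * lam ** k / math.factorial(k)
              for k in range(x_obs + 1))
print(p_value)  # roughly 0.12, so the evidence for a drop is weak
```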
Let X be a random variable whose pmf under H0 and H1 is given by [table omitted]. Use the Neyman-Pearson Lemma to find the most powerful test for H0 versus H1 with size α = .04. Compute the probability of Type II Error for this test.
Let X1,..., X10 be iid Bernoulli(p). (a) Find the most powerful test of size α = .0547 of the hypotheses H0: p = 1/2 versus H1: p = 1/4. Find the power of this test. (b) For testing H0: p ≤ 1/2 versus H1: p > 1/2, find the size and sketch the power function of the test that rejects H0 if
Suppose X is one observation from a population with beta(θ, 1) pdf. (a) For testing H0: θ ≤ 1 versus H1: θ > 1, find the size and sketch the power function of the test that rejects H0 if X > 1/2. (b) Find the most powerful level α test of H0: θ = 1 versus H1: θ = 2. (c) Is there a UMP test of
Find the LRT of a simple H0 versus a simple H1. Is this test equivalent to the one obtained from the Neyman-Pearson Lemma? (This relationship is treated in some detail by Solomon 1975.)
Show that each of the following families has an MLR. (a) n(θ,σ2) family with σ2 known (b) Poisson (θ) family (c) binomial(n,θ) family with n known
(a) Show that if a family of pdfs {f(x|θ): θ ∈ Θ} has an MLR, then the corresponding family of cdfs is stochastically increasing in θ. (See the Miscellanea section.) (b) Show that the converse of part (a) is false; that is, give an example of a family of cdfs that is stochastically increasing
Suppose g(t|θ) = h(t)c(θ)e^(w(θ)t) is a one-parameter exponential family for the random variable T. Show that this family has an MLR if w(θ) is an increasing function of θ. Give three examples of such a family.
Let f(x|θ) be the logistic location pdf
Let X be one observation from a Cauchy(θ) distribution. (a) Show that this family does not have an MLR. (b) Show that the test [test omitted] is most powerful of its size for testing H0: θ = 0 versus H1: θ = 1. Calculate the Type I and Type II Error probabilities. (c) Prove or
Here, the LRT alluded to in Example 8.2.9 will be derived. Suppose that we observe m iid Bernoulli(θ) random variables, denoted by Y1,..., Ym. Show that the LRT of H0: θ ≤ θ0 versus H1: θ > θ0 will reject H0 if ∑_{i=1}^m Yi > b.
Let f(x|θ) be the Cauchy scale pdf [pdf omitted]. (a) Show that this family does not have an MLR. (b) If X is one observation from f(x|θ), show that |X| is sufficient for θ and that the distribution of |X| does have an MLR.
Let X1,..., Xn be iid Poisson(λ). (a) Find a UMP test of H0: λ < λ0 versus H1: λ > λ0. (b) Consider the specific case H0 : λ < 1 versus H1 : λ > 1. Use the Central Limit Theorem to determine the sample size n so a UMP test satisfies P(reject H0|λ = 1) = .05 and P(reject H0|λ = 2) = .9.
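For part (b), a normal-approximation sketch (under the CLT, X̄ is approximately n(λ, λ/n); the cutoffs 1.645 and 1.282 are the usual upper .05 and .10 normal quantiles):

```python
# Smallest n for which a size-.05 CLT-based cutoff under lambda = 1
# yields power at least .9 at lambda = 2.
import math

def norm_sf(z):                      # P(Z >= z) for a standard normal Z
    return 0.5 * math.erfc(z / math.sqrt(2))

z_alpha = 1.645                      # upper .05 normal quantile
for n in range(1, 200):
    c = 1 + z_alpha * math.sqrt(1 / n)           # size .05 cutoff at lambda = 1
    power = norm_sf((c - 2) / math.sqrt(2 / n))  # power at lambda = 2
    if power >= 0.9:
        print(n, c, power)  # n around 12 under this approximation
        break
```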
Let X1,..., Xn be a random sample from the uniform(θ, θ + 1) distribution. To test H0: θ = 0 versus H1: θ > 0, use the test: reject H0 if Yn ≥ 1 or Y1 > k, where k is a constant, Y1 = min{X1,..., Xn}, Yn = max{X1,..., Xn}. (a) Determine k so that the test will have size α. (b) Find an
The usual t distribution, as derived in Section 5.3.2, is also known as a central t distribution. It can be thought of as the pdf of a random variable of the form T = n(0, 1)/√(χ2(ν)/ν), where the normal and the chi squared random variables are independent. A generalization of the t
Let X1,..., Xn be a random sample from a n(θ, σ2) population. Consider testing H0: θ ≤ θ0 versus H1: θ > θ0. (a) If σ2 is known, show that the test that rejects H0 when [rejection region omitted] is a test of size α. Show that the test can be derived as an LRT. (b) Show that the test in
Let X1,..., Xn be iid n(θ, σ2), where θ0 is a specified value of θ and σ2 is unknown. We are interested in testing H0: θ = θ0 versus H1: θ ≠ θ0. (a) Show that the test that rejects H0 when [rejection region omitted] is a test of
Let (X1, Y1),..., (Xn, Yn) be a random sample from a bivariate normal distribution with parameters μX, μY, σ2X, σ2Y, ρ. We are interested in testing H0: μX = μY versus H1: μX ≠ μY. (a) Show that the random
Prove the assertion made in the text after Definition 8.2.1. If f(x|θ) is the pmf of a discrete random variable, then the numerator of λ(x), the LRT statistic, is the maximum probability of the observed sample when the maximum is computed over parameters in the null hypothesis. Furthermore, the
Let X1,..., Xn be a random sample from a n(μX, σ2X), and let Y1,..., Ym be an independent random sample from a n(μY, σ2Y). We are interested in testing H0: μX = μY versus H1: μX
The assumption of equal variances, which was made in Exercise 8.41, is not always tenable. In such a case, the distribution of the statistic is no longer a t. Indeed, there is doubt as to the wisdom of calculating a pooled variance estimate. (This problem, of making inference on means when
Sprott and Farewell (1993) note that in the two-sample t test, a valid t statistic can be derived as long as the ratio of variances is known. Let X1,..., Xn1 be a sample from a n(μ1, σ2) and Y1,..., Yn2 a sample from a n(μ2, ρ2σ2), where ρ2 is
Verify that Test 3 in Example 8.3.20 is an unbiased level α test.
Let X1,..., Xn be a random sample from a n(θ, σ2) population. Consider testing H0: θ ≤ θ0 versus H1: θ > θ0. Let X̄m denote the sample mean of the first m observations, X1,..., Xm, for m = 1,..., n. Show that the test that rejects H0 when X̄m > θ0 + zα√(σ2/m) is an unbiased size α test. Graph the power function for each of these
Consider two independent normal samples with equal variances, as in Exercise 8.41. Consider testing H0: μX - μY ≤ -δ or μX - μY ≥ δ versus H1: -δ < μX - μY < δ. (a) Show that the size α LRT
In each of the following situations, calculate the p-value of the observed data. (a) For testing H0: θ ≤ 1/2 versus H1: θ > 1/2, 7 successes are observed out of 10 Bernoulli trials. (b) For testing H0: λ ≤ 1 versus H1: λ > 1, X = 3 is observed, where X ~ Poisson(λ). (c) For testing
A random sample, X1,..., Xn, is drawn from a Pareto population with pdf [pdf omitted]. (a) Find the MLEs of θ and ν. (b) Show that the LRT of H0: θ = 1, ν unknown, versus H1: θ ≠ 1, ν unknown, has critical region of the form {x: T(x) ≤ c1 or T(x) >
Let X1,..., Xn be iid n(θ, σ2), σ2 known, and let θ have a double exponential distribution, that is, π(θ) = e^(-|θ|/a)/(2a), a known. A Bayesian test of the hypotheses H0: θ ≤ 0 versus H1: θ > 0 will decide in favor of H1 if its posterior probability is large. (a) For a given constant K,
Here is another common interpretation of p-values. Consider a problem of testing H0 versus H1. Let W(X) be a test statistic. Suppose that for each α, 0 ≤ α ≤ 1, a critical value cα can be chosen so that {x: W(x) ≥ cα} is the rejection region of a size α test of H0. Using this family of
In Example 8.2.7 we saw an example of a one-sided Bayesian hypothesis test. Now we will consider a similar situation, but with a two-sided test. We want to test H0: θ = 0 versus H1: θ ≠ 0, and we observe X1,..., Xn, a random sample from a n(θ, σ2) population, σ2 known. A type of prior
The discrepancies between p-values and Bayes posterior probabilities are not as dramatic in the one-sided problem, as is discussed by Casella and Berger (1987) and also mentioned in the Miscellanea section. Let X1,..., Xn be a random sample from a n(θ, σ2) population, and consider testing H0: θ ≤
Consider testing H0: μ ≤ 0 versus H1: μ > 0 using 0-1 loss, where X ~ n(μ, 1). Let δc be the test that rejects H0 if X > c. For every test in this problem, there is a δc in the class of tests {δc: -∞ ≤ c ≤ ∞} that has a uniformly smaller (in μ) risk function. Let δ be the test