Question: Given two estimators, $\hat{\theta}_1$ and $\hat{\theta}_2$, each unbiased for the parameter $\theta$, we know that the "better" one is the one with the smaller variance. But that comparison says nothing about the more fundamental question of how good $\hat{\theta}_1$ and $\hat{\theta}_2$ are relative to the infinitely many other unbiased estimators for $\theta$. For example, is there a $\hat{\theta}_3$ that has a smaller variance than either $\hat{\theta}_1$ or $\hat{\theta}_2$? Can we identify the unbiased estimator having the smallest variance? Addressing those concerns is one of the most elegant, yet practical, theorems in all of mathematical statistics, a result known as the Cramér-Rao lower bound. Suppose a random sample of size $n$ is taken from a probability distribution $f_X(x; \theta)$, where $\theta$ is an unknown parameter. Associated with $f_X(x; \theta)$ is a theoretical limit below which the variance of any unbiased estimator for $\theta$ cannot fall. That limit is the Cramér-Rao lower bound. If the variance of a given $\hat{\theta}$ is equal to the Cramér-Rao lower bound, we know that estimator is optimal in the sense that no unbiased $\hat{\theta}$ can estimate $\theta$ with greater precision.

Now suppose that the random variables $X_1, X_2, \ldots, X_n$ denote the numbers of successes (0 or 1) in each of $n$ independent trials, where $p$, the probability of success at any given trial, is an unknown parameter. Then

$f_{X_i}(x; p) = p^x (1-p)^{1-x}, \quad x = 0, 1, \quad 0 < p < 1.$

Let $X = X_1 + X_2 + \cdots + X_n$ denote the total number of successes. Consider the following statements.
I. $\hat{p} = X/n$ is an unbiased estimator for $p$.
II. $\operatorname{Var}(\hat{p}) = \dfrac{1-p}{n}$.
III. The Cramér-Rao lower bound for $f_{X_i}(x; p)$ is $\dfrac{p(1-p)}{n}$.
IV. No unbiased estimator of $p$ can possibly be more precise than $\hat{p} = X/n$.

Which of the above statement(s) is/are correct?
(A) Only IV.
(B) Only I and IV.
(C) Only I, III, and IV.
(D) Only I and III.
(E) Only I, II, III, and IV.
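A sketch of the standard computation may help in checking the statements. This assumes the usual regularity conditions and the i.i.d. form of the Cramér-Rao bound, $\operatorname{Var}(\hat{\theta}) \ge 1/\big(n\,I(\theta)\big)$, where $I(\theta)$ is the Fisher information of a single observation:

\[
\ln f_{X_i}(x; p) = x \ln p + (1-x)\ln(1-p),
\qquad
\frac{\partial}{\partial p} \ln f_{X_i}(x; p) = \frac{x}{p} - \frac{1-x}{1-p},
\]
\[
I(p) = E\!\left[\left(\frac{X_i}{p} - \frac{1-X_i}{1-p}\right)^{2}\right]
= \frac{1}{p} + \frac{1}{1-p}
= \frac{1}{p(1-p)},
\qquad
\frac{1}{n\,I(p)} = \frac{p(1-p)}{n}.
\]

Since $E(\hat{p}) = E(X)/n = np/n = p$ and $\operatorname{Var}(\hat{p}) = \operatorname{Var}(X)/n^2 = np(1-p)/n^2 = p(1-p)/n$, the estimator $\hat{p} = X/n$ is unbiased and attains the bound.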
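The same quantities can be checked empirically with a quick Monte Carlo simulation. The following is a minimal Python sketch; the sample size n, true p, and number of replications are arbitrary choices for illustration:

import numpy as np

# Minimal sketch: simulate many Bernoulli(p) samples of size n and compare
# the empirical behavior of p_hat = X/n with the quantities in the statements.
rng = np.random.default_rng(seed=1)
n, p, reps = 50, 0.3, 200_000  # arbitrary illustration values

samples = rng.binomial(1, p, size=(reps, n))  # each row: one sample of n trials
p_hat = samples.mean(axis=1)                  # p_hat = X/n for each sample

print("mean of p_hat:         ", p_hat.mean())     # close to p (statement I)
print("empirical Var(p_hat):  ", p_hat.var())      # close to p(1-p)/n
print("CRLB p(1-p)/n:         ", p * (1 - p) / n)  # statement III
print("(1-p)/n (statement II):", (1 - p) / n)      # does not match the variance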
