Question

The standard deviation (or, as it is usually called, the standard error) of the sampling distribution of the sample mean, x̄, is equal to the standard deviation of the population from which the sample was selected, divided by the square root of the sample size. That is,

σ_x̄ = σ/√n
a. As the sample size is increased, what happens to the standard error of x̄? Why is this property considered important?
b. Suppose a sample statistic has a standard error that is not a function of the sample size. In other words, the standard error remains constant as n changes. What would this imply about the statistic as an estimator of a population parameter?
c. Suppose another unbiased estimator (call it A) of the population mean is a sample statistic with a standard error equal to

σ_A = σ/∛n

Which of the sample statistics, x̄ or A, is preferable as an estimator of the population mean? Why?
d. Suppose that the population standard deviation σ is equal to 10 and that the sample size is 64. Calculate the standard errors of x̄ and A. Assuming that the sampling distribution of A is approximately normal, interpret the standard errors. Why is the assumption of (approximate) normality unnecessary for the sampling distribution of x̄?
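As a quick sanity check on the arithmetic in part d (and on the scaling behavior asked about in part a), here is a minimal Python sketch. The values σ = 10 and n = 64 come from the question itself; the variable names are illustrative.

```python
import math

sigma = 10  # population standard deviation (given in part d)

# Part d: standard errors of the two estimators at n = 64
n = 64
se_xbar = sigma / math.sqrt(n)  # sigma / sqrt(n) = 10/8 = 1.25
se_A = sigma / n ** (1 / 3)     # sigma / cbrt(n) = 10/4 = 2.50
print(f"SE(x-bar) = {se_xbar:.2f}, SE(A) = {se_A:.2f}")

# Part a: the standard error of x-bar shrinks as n grows;
# quadrupling n halves the standard error.
for n_trial in (16, 64, 256):
    print(f"n = {n_trial:3d}: SE(x-bar) = {sigma / math.sqrt(n_trial):.3f}")
```

Note that 1.25 < 2.50, and since √n grows faster than ∛n, x̄ has the smaller standard error at every sample size, which bears directly on the comparison asked for in part c.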

