
Question:

An engineer is measuring a quantity q. It is assumed that there is a random error in each measurement, so the engineer takes n measurements and reports the average of the measurements as the estimated value of q. Specifically, if Y_i is the value obtained in the ith measurement, we assume that Y_i = q + X_i, where X_i is the error in the ith measurement. We assume that the X_i's are i.i.d. with E[X_i] = 0 and Var(X_i) = 4 units. The engineer reports the average of the measurements, M_n = (Y_1 + Y_2 + ... + Y_n) / n.

How many measurements does the engineer need to make so that he is 95% sure that the final error is less than 0.1 units? In other words, what should the value of n be such that P(q - 0.1 ≤ M_n ≤ q + 0.1) ≥ 0.95?
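A sketch of one standard way to size n (this is an illustration, not necessarily the step-by-step answer posted here): since Var(M_n) = Var(X_i)/n = 4/n, you can either bound P(|M_n - q| ≥ 0.1) with Chebyshev's inequality, or approximate M_n as normal via the central limit theorem. The short Python script below (variable names are my own) computes the required n both ways.

```python
from math import ceil, sqrt
from statistics import NormalDist

var_x = 4.0   # variance of each measurement error X_i
eps = 0.1     # allowed error |M_n - q|
conf = 0.95   # required confidence level

# Chebyshev bound: P(|M_n - q| >= eps) <= Var(M_n) / eps**2 = var_x / (n * eps**2).
# Requiring this to be at most 1 - conf gives n >= var_x / ((1 - conf) * eps**2).
n_chebyshev = ceil(var_x / ((1 - conf) * eps**2))

# CLT approximation: M_n is approximately Normal(q, var_x / n), so
# P(|M_n - q| <= eps) ~= 2 * Phi(eps * sqrt(n) / sigma) - 1 >= conf,
# which gives sqrt(n) >= z * sigma / eps with z the (1 + conf)/2 quantile.
z = NormalDist().inv_cdf((1 + conf) / 2)
n_clt = ceil((z * sqrt(var_x) / eps) ** 2)

print(f"Chebyshev bound: n >= {n_chebyshev}")   # 8000
print(f"CLT approximation: n >= {n_clt}")       # about 1537
```

The Chebyshev figure is distribution-free but conservative; the CLT figure is much smaller but relies on n being large enough for the normal approximation to hold.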


