We know that IQs are normally distributed with a mean of 100 and a standard deviation of 15. Suppose you want to verify this, so you take 100 random samples, each of size four, and for each sample you find a 95% confidence interval for the mean IQ, using the known standard deviation of 15. You expect that approximately 95 of these intervals will contain the true mean IQ (100) and approximately 5 will not. Use simulation in Excel to see whether this is the case.
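Although the exercise asks for Excel, the same simulation can be sketched in Python with NumPy. Because σ = 15 is known, each interval is the z-interval x̄ ± 1.96(15/√4) = x̄ ± 14.7. The seed below is an arbitrary choice for reproducibility, not part of the exercise:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed so the run is reproducible
mu, sigma, n, trials = 100, 15, 4, 100
z = 1.96  # critical value for a 95% confidence level

# Draw 100 samples of size 4 from N(100, 15) and compute each sample mean.
samples = rng.normal(mu, sigma, size=(trials, n))
means = samples.mean(axis=1)

# Known-sigma (z) interval: every interval has the same half-width.
half_width = z * sigma / np.sqrt(n)  # 1.96 * 15 / 2 = 14.7
lower, upper = means - half_width, means + half_width

# Count how many intervals contain the true mean of 100.
covered = int(np.sum((lower <= mu) & (mu <= upper)))
print(f"{covered} of {trials} intervals contain the true mean")
```

Each run will give a slightly different count, but it should typically land near 95, matching the expectation stated in the exercise.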