# Question: Refer to the list of warnings on pages 527–528

Refer to the list of warnings on pages 527–528. Explain which ones should be of concern if the sample size(s) for a test are large.

From this discussion, you should realize that you can’t simply rely on news reports to determine what to conclude from the results of studies. In particular, you should heed the following warnings:

1. If the word significant is used to try to convince you that there is an important effect or relationship, determine if the word is being used in the usual sense or in the statistical sense only.

2. If a study is based on a very large sample size, relationships found to be statistically significant may not have much practical importance.

3. If you read that “no difference” or “no relationship” has been found in a study, try to determine the sample size used. Unless the sample size was large, remember that an important relationship may well exist in the population but that not enough data were collected to detect it. In other words, the test could have had very low power.

4. If possible, learn what confidence interval accompanies the hypothesis test, if any. Even then you can be misled into concluding that there is no effect when there really is, but at least you will have more information about the magnitude of the possible difference or relationship.

5. Try to determine whether the test was one-sided or two-sided. If a test is one-sided, as in Case Study 24.1, and details aren’t reported, you could be misled into thinking there would be no significant difference in a two-sided test, when in fact there was one in the direction opposite to that hypothesized.

6. Remember that the decision to do a one-sided test must be made before looking at the data, based on the research question. Using the same data to both generate and test the hypotheses is cheating. A one-sided test done that way will have a p-value smaller than it should, making it easier to reject the null hypothesis.

7. Beware of multiple testing and multiple comparisons. Sometimes researchers perform a multitude of tests, and the reports focus on those that achieved statistical significance. If all of the null hypotheses tested are true, then over the long run, about 1 in 20 tests should achieve statistical significance just by chance. Beware of reports in which it is evident that many tests were conducted, but in which results of only one or two are presented as “significant.”
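Two of the warnings above lend themselves to a quick simulation: warning 2 (with a very large sample, a difference far too small to matter practically can still be statistically significant) and warning 7 (when many true null hypotheses are tested, about 1 in 20 tests reach p < 0.05 by chance alone). The sketch below is an illustration, not part of the original text; it uses a simple two-sample z-test based on the normal approximation, and all sample sizes and effect sizes are made up for demonstration.

```python
import math
import random

random.seed(0)

def z_test_two_means(x, y):
    """Two-sample z-test (normal approximation); returns a two-sided p-value."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    z = (mx - my) / math.sqrt(vx / nx + vy / ny)
    # two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Warning 2: a trivially small true difference (0.02 standard deviations)
# becomes "statistically significant" once the samples are huge.
a = [random.gauss(0.00, 1) for _ in range(200_000)]
b = [random.gauss(0.02, 1) for _ in range(200_000)]
p_large = z_test_two_means(a, b)
print(f"tiny effect, n = 200,000 per group: p = {p_large:.6f}")

# Warning 7: 100 tests in which every null hypothesis is actually true;
# roughly 5 of them should come out "significant" at the 0.05 level anyway.
false_positives = 0
for _ in range(100):
    g1 = [random.gauss(0, 1) for _ in range(50)]
    g2 = [random.gauss(0, 1) for _ in range(50)]
    if z_test_two_means(g1, g2) < 0.05:
        false_positives += 1
print(f"true nulls, 100 tests: {false_positives} 'significant' by chance")
```

Running this, the first test typically yields a p-value well below 0.05 despite an effect of no practical importance, while the second loop turns up a handful of spurious "significant" results, which is exactly the pattern warnings 2 and 7 caution against.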
