Question: Interpret the results of the poll discussed in the article below (highlighted in yellow), and explain what the confidence interval and margin of error mean as they apply to the specific subject matter.

Understanding a poll's margin of error: [Final Edition]

McKnight, Peter. The Vancouver Sun; Vancouver, B.C., 03 June 2006: C5.


OPINION | If we want a clear picture of a poll's accuracy we need to take a comprehensive look at the entire methodology

The ink was barely dry on The Vancouver Sun's report about Canadians' increasing support for the federal Conservatives when the phone calls and letters started.

Our story, which was based on a recent Ipsos-Reid poll, said that Conservative support is "soaring" across the country. After all, the pollsters concluded that the Conservatives currently enjoy the support of 43 per cent of Canadians, up from 38 per cent two months ago.

Certainly, one can argue that an increase of five percentage points hardly amounts to "soaring" support -- and some people who accused us of shilling for the Conservatives did so argue -- but that wasn't what exercised our more mathematically inclined readers.

No, said our critics, the real problem was that we failed to consider the "margin of error" -- plus or minus 3.1 percentage points, 19 times out of 20 -- which means Conservative support could be anywhere from 39.9 to 46.1 per cent.

Similarly, assuming that the previous poll had the same margin of error, Conservative support two months ago was between 34.9 and 41.1 per cent. Now, since the ranges of the two polls overlap, we can't really conclude that support has increased at all, let alone soared.
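The critics' arithmetic is easy to reproduce. A minimal sketch (the point estimates and the margin of error come from the article; the overlap check itself is my own):

```python
# The ±3.1-point margin of error and the two point estimates quoted in the piece.
moe = 3.1
current = 43.0   # latest Ipsos-Reid estimate (per cent)
previous = 38.0  # estimate from two months earlier

current_ci = (current - moe, current + moe)     # (39.9, 46.1)
previous_ci = (previous - moe, previous + moe)  # (34.9, 41.1)

# Two intervals overlap whenever each one's low end sits below the other's high end.
overlap = current_ci[0] <= previous_ci[1] and previous_ci[0] <= current_ci[1]
print(current_ci, previous_ci, overlap)
```

Since the overlap test comes out true, the critics conclude the two polls are statistically indistinguishable; the rest of the column examines what that conclusion quietly assumes.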

Or so our critics claimed. And they were certainly right about one thing: All too often, journalists dutifully report a poll's margin of error, and then completely ignore what significance it might have. But our critics' slavish attention to the margin of error is just as much of a problem.

To see this, we need to look at the mechanics of polling. In an ideal world, we would ask every voter in Canada which party they support, but in a world of limited resources we can't poll everyone, so we ask a sample of people -- typically 1,000 -- and generalize those results to the population.

But samples can be subject to error. Even with a random sample -- that is, one where every voter in the population has an equal chance of being polled -- it's still possible to get an inaccurate result because, through sheer bad luck, the opinions of people in the sample might not be representative of the opinions of the population as a whole.

In effect, the margin of error provides a measure of this sheer bad luck, or "sampling error," and is accompanied by a "confidence interval." Most polls use a 95 per cent confidence interval, and this is what the "19 times out of 20" refers to.

And here's what it means: If we were to repeat the Ipsos-Reid poll thousands of times, drawing different random samples of 1,000 people from the population, we would no doubt get different results, because not all samples are exactly the same. So, some polls would estimate that Conservative support is at 42 per cent, plus or minus 3.1 percentage points, others would peg it at 44 per cent and so on.

The 95 per cent confidence interval, combined with the margin of error, tells us that 95 per cent of these thousands of polls will produce an estimate of Conservative support that is within 3.1 percentage points of the true value, while five per cent will be off by more than 3.1 points.

This doesn't, however, tell us anything specific about an individual poll. We have no way of knowing whether the Ipsos-Reid poll would be among the 95 per cent which are within 3.1 percentage points of the true value of support, or among the five per cent which aren't. In other words, we can't be certain that Conservative support is between 39.9 and 46.1 per cent, let alone that it's exactly 43 per cent.
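This repeated-sampling interpretation can be illustrated with a small simulation. A sketch, assuming a hypothetical "true" support level of 43 per cent and samples of 1,000 respondents (in real life the true value is unknown, which is exactly the column's point):

```python
import random

random.seed(0)
true_p = 0.43   # assumed "true" population support (hypothetical)
n = 1000        # respondents per simulated poll
moe = 0.031     # the article's ±3.1-point margin of error
trials = 2000   # number of simulated polls

within = 0
for _ in range(trials):
    # Each respondent independently supports the party with probability true_p.
    estimate = sum(random.random() < true_p for _ in range(n)) / n
    if abs(estimate - true_p) <= moe:
        within += 1

print(f"{within / trials:.1%} of simulated polls fall within the margin of error")
```

Roughly 95 per cent of the simulated polls land within 3.1 points of the assumed true value, and about one poll in twenty misses by more; nothing in the output tells you which group any single poll belongs to.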

Further, there's nothing magical about the 95 per cent confidence interval -- it's used by convention, not because it's of any inductive merit. Ipsos-Reid could have chosen the 85 per cent confidence interval, which would have reduced the margin of error to roughly plus or minus two percentage points, and hence Conservative support would be estimated at between 41 and 45 per cent. Similarly, the 85 per cent confidence interval in the study from two months ago would peg Conservative support at between 36 and 40 per cent.

Now notice what's happened here: The ranges of the two polls no longer conflict, which means our critics would be forced to acknowledge that there might really have been an increase in Conservative support.

Yet the data haven't changed -- only our way of quantifying sampling error has. This is why overreliance on the margin of error is as bad as underreliance: It leads us to automatically dismiss any changes that are within the margin of error at the 95 per cent confidence level, even though those changes sometimes reflect actual changes in the population.
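The trade-off between confidence level and margin of error can be checked with the standard normal approximation for a proportion. This is my own computation, not Ipsos-Reid's published method, but it reproduces the roughly ±3.1-point and ±2-point figures the column cites:

```python
from math import sqrt
from statistics import NormalDist

def margin_of_error(p, n, confidence):
    """Half-width of a normal-approximation confidence interval for a proportion."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # two-sided critical value
    return z * sqrt(p * (1 - p) / n)

p, n = 0.43, 1000
for confidence in (0.95, 0.85):
    m = margin_of_error(p, n, confidence)
    print(f"{confidence:.0%} confidence: ±{m * 100:.1f} points -> "
          f"{(p - m) * 100:.1f} to {(p + m) * 100:.1f} per cent")
```

Lowering the confidence level from 95 to 85 per cent narrows the interval enough that the two polls' ranges no longer overlap, even though the underlying data are identical.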

Further, and more important, sampling error is only one factor affecting the accuracy of a poll. There are also myriad non-sampling errors, such as non-response -- people of certain political persuasions may be less (or more) likely to answer a pollster's question -- bias in the wording or order of the questions, and interviewer bias.

These non-sampling errors -- and there are many more -- can have a larger effect than sampling error on a poll's accuracy. Yet when we rely exclusively on the margin of error to tell us how accurate a poll is, we are making the implicit assumption that no non-sampling errors have been made.

To be sure, polling companies employ a variety of sophisticated procedures to help them avoid non-sampling errors. But since, unlike sampling error, non-sampling errors are not easily quantified, the margin of error remains the only measure of a poll's accuracy that pollsters usually provide. Hence, it's not surprising that many people overestimate its importance.

The solution, of course, is not to ignore the margin of error, as many journalists do, but to understand just what it can and can't tell us. And if we really want a clear picture of the accuracy of a poll, we need to ask for more information, to take a comprehensive look at the poll's entire methodology, to colour outside of the lines.

Word count: 1075

(Copyright Vancouver Sun 2006)
