Frequently Asked Questions
Modern political opinion polls are largely conducted along the lines pioneered by George Gallup. This method involves surveying a sample of people selected so that, taken together, they match the population's overall demographics.
In 1936 the US Presidential election was between Alf Landon and Franklin D. Roosevelt. The Literary Digest saw more than two million people return surveys indicating Alf Landon would win. Gallup instead interviewed a much smaller sample of some tens of thousands but ensured that they more closely matched the predicted electorate. This polling predicted a Roosevelt victory.
Roosevelt won in a landslide.
There are two classes of errors relevant in opinion polls: random sampling errors and systematic errors.
Surveys can only ever question a small sample of the population. We know that the result probably won’t exactly match the true result that we would get if we interviewed everyone in the population. The margin of error describes how close we can reasonably expect a survey result to fall relative to the true population value.
A margin of error of plus or minus 3 percentage points at the 95% confidence level means that if we fielded the same survey 100 times, we would expect the result to be within 3 percentage points of the true population value in 95 of those runs. This website uses the 95% confidence level throughout. The table below shows the margin of error for different sample sizes. Smaller sample sizes, such as when a poll is split by region or age, can have much larger error ranges. Opinion polls will often have sample sizes of around 1000 people [link].
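As a rough sketch of where those figures come from: the worst-case margin of error for a proportion (taking p = 0.5, which maximises the variance) at the 95% confidence level can be computed directly. This is a standard textbook formula, not this site's actual code.

```python
import math

def margin_of_error(n, z=1.96):
    """Worst-case (p = 0.5) margin of error at the 95% confidence level."""
    return z * math.sqrt(0.25 / n)

for n in (500, 1000, 2000):
    print(n, round(100 * margin_of_error(n), 1))
# 500  -> 4.4 percentage points
# 1000 -> 3.1 percentage points
# 2000 -> 2.2 percentage points
```

Note that halving the margin of error requires roughly quadrupling the sample size, which is why polls rarely go far beyond 1000 or so respondents.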
Systematic errors arise when the polling sample does not match the demographics of the intended electorate. The UK Polling Report does a far better job of explaining the importance of correct sampling than I ever could.
Most polls these days use a random selection of people from an online panel to generate the sample. A small number of polls (notably Ipsos MORI) use random telephone dialling. Either way, the responses are then weighted to match the electorate the poll is intended to question.
Where available this website uses the weighted responses to aggregate results.
When polls are released they are usually accompanied by tables of data that include the weighted responses and any further breakdowns such as by age or region.
These allow more precise calculation of results than headline figures that are normally rounded to the nearest percentage point.
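A minimal sketch of what aggregating from weighted responses looks like (the response options and weights here are made up for illustration; they are not from any real poll):

```python
# Each respondent has an answer and a demographic weight from the data tables.
responses = ["Yes", "No", "Yes", "Don't know", "No"]
weights   = [0.9,   1.1,  1.3,  0.8,          0.9]

total = sum(weights)
share = {option: sum(w for r, w in zip(responses, weights) if r == option) / total
         for option in set(responses)}
# e.g. share["Yes"] = (0.9 + 1.3) / 5.0 = 0.44
```

Working from the weighted microdata like this avoids the rounding baked into headline figures.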
<p>The BPC is the British Polling Council. Most organisations conducting opinion polls in Scotland are members. Their objects and rules, given <a href="https://www.britishpollingcouncil.org/objects-and-rules/">here</a>, ensure a certain level of validity in the polling results. Polls from organisations that are not members are down-weighted on this website, as their results are of unknown quality. </p>
Only once the tables are available can a poll be aggregated in with the rest of the results here. This can sometimes be days or weeks after the headline figures have been reported in the press.
We use Locally Weighted Scatterplot Smoothing (LOWESS; see here for more details). For each opinion poll, this takes all of the results up to that point and calculates the range within which the next opinion poll is 95% likely to lie.
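To give a flavour of how LOWESS works, here is a simplified version: for each point, fit a straight line to its nearest neighbours, weighting closer points more heavily with a tricube kernel. This is an illustrative sketch, not this site's actual implementation (which also derives the 95% band from the scatter of polls around the fitted trend).

```python
import numpy as np

def lowess(x, y, frac=0.5):
    """Simplified LOWESS: a local linear fit with tricube weights at each point."""
    n = len(x)
    k = max(3, int(frac * n))          # number of neighbours per local fit
    fitted = np.empty(n)
    for i in range(n):
        d = np.abs(x - x[i])
        nearest = np.argsort(d)[:k]
        w = (1 - (d[nearest] / d[nearest].max()) ** 3) ** 3   # tricube kernel
        A = np.column_stack([np.ones(k), x[nearest]])
        # Weighted least squares: solve (A^T W A) beta = A^T W y
        beta = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * y[nearest]))
        fitted[i] = beta[0] + beta[1] * x[i]
    return fitted
```

Because each fit is local, the smoothed line can follow genuine shifts in opinion while damping the poll-to-poll noise.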
Polls are weighted more heavily in the model if they are by members of the BPC and have a large sample size. Polls are also down-weighted if they don't include any "don't know" respondents, and if there have been multiple polls by the same company in a short space of time.
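One way such a scheme could be expressed is as a product of adjustment factors. The factors below are entirely hypothetical placeholders chosen for illustration; they are not the values this site actually uses.

```python
import math

def poll_weight(sample_size, bpc_member, includes_dont_know, polls_last_week):
    """Hypothetical multiplicative poll weighting (illustrative factors only)."""
    w = math.sqrt(sample_size / 1000)  # larger samples count for more
    if not bpc_member:
        w *= 0.5                       # unknown quality: down-weight
    if not includes_dont_know:
        w *= 0.8                       # no "don't know" option: down-weight
    w /= max(1, polls_last_week)       # stop one prolific firm dominating
    return w
```

The multiplicative form means each shortcoming reduces a poll's influence independently of the others.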
Nor should we expect them to. Unless you are very unusual, your social circles will not match the broad demographics of Scotland. We all tend to live, work, go to school, and hang out with people of similar ages and educational backgrounds, who live in similar areas, do similar jobs, and so on.
The advantage of scientific opinion polling is that, by sampling to match the electorate, it gives a far more reliable measure of public support than our own social circles can.
To be valid, polls need to match the demographics of the electorate. Polls conducted on Twitter / Facebook / Reddit / newspapers / random websites etc. will not do so. They are at best worthless and at worst intentionally misleading.
It doesn’t matter. All that matters is the polling data.
Do not pay attention to single polls – a single large shift is likely to be a statistical outlier.
Collections of polls showing trends over time are more likely to identify changes in opinion.
Here are some links to other websites you may find interesting: