Although it appeared to hang in the balance for a few days, realistically the result of the Scottish independence referendum was never in serious doubt.
The margin of victory, however, was perhaps something of a surprise.
Given how close the polls had it going into Thursday, a 55 / 45 split is a larger win for the No campaign than many were predicting.
It obviously wasn’t a surprise to the bookies, though, who were offering odds of just 1/5 on a No majority, compared to a generous-sounding 3/1 in some places on a Yes majority.
As the old adage goes, you never see a bookie on a bike, so with £50m being wagered on the result they must have been very confident of the outcome.
One guy who got the result pretty much bang on did so by the entirely unscientific method of asking 655 random Grindr users whether they thought Scotland should be an independent country. His result was 54 / 46, just one percentage point out.
This was closer to the actual result than the Wednesday-night polls from three separate professional polling organisations: YouGov, Ipsos MORI and Survation.
Obviously it could just be an amusing blip, but there might be more to it than first appears. It seems there is a way to poll unrepresentative samples and get an accurate result.
This thought-provoking article in the New York Times suggests that polling representative samples of the population to ask how they intend to vote is a weaker predictor of the outcome than asking them who they think will win instead.
Forecasting Elections: Voter Intentions versus Expectations shows that asking people to consider their expectation of the outcome prompts them to mentally picture how 20 of their friends or family are likely to vote, which is ultimately a better predictor of the final result than only understanding their own voting intentions:
Surveys of voting intentions depend critically on being able to poll representative cross-sections of the electorate. By contrast, we find that surveys of voter expectations can still be quite accurate, even when drawn from non-representative samples. The logic of this claim comes from the difference between asking about expectations, which may not systematically differ across demographic groups, and asking about intentions, which clearly do.
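To see why this can work, here is a toy simulation of the idea. All of the numbers and the sampling scheme are my own assumptions for illustration, not figures from the paper: I take a deliberately skewed sample of 655 respondents (echoing the Grindr poll) that over-represents Yes voters, then compare asking each respondent for their own intention against asking them which side wins among roughly 20 contacts drawn from the wider electorate.

```python
import random

random.seed(42)

TRUE_NO_SHARE = 0.55   # the actual referendum result
POP_SIZE = 100_000
CIRCLE = 20            # contacts each respondent mentally "polls"

# Electorate: True = No voter, False = Yes voter.
population = [random.random() < TRUE_NO_SHARE for _ in range(POP_SIZE)]

# A deliberately unrepresentative sample of 655 respondents
# (e.g. a self-selected online panel) that over-samples Yes voters.
yes_voters = [v for v in population if not v]
no_voters = [v for v in population if v]
sample = random.sample(yes_voters, 400) + random.sample(no_voters, 255)

# 1) Intention poll: average the biased sample's own votes.
intention_no = sum(sample) / len(sample)

# 2) Expectation poll: each respondent reports which side wins among
#    CIRCLE contacts drawn from the full electorate. The key assumption
#    is that social circles roughly mirror the population even when the
#    sample itself does not.
def expected_winner(_respondent):
    circle = random.sample(population, CIRCLE)
    return sum(circle) > CIRCLE / 2   # True if No wins their circle

expectation_no = sum(expected_winner(r) for r in sample) / len(sample)

print(f"Intention poll:   {intention_no:.1%} say No")
print(f"Expectation poll: {expectation_no:.1%} expect No to win")
```

In this sketch the intention poll calls the result for Yes (because the sample is skewed), while the expectation question still picks No as the winner, since each respondent is effectively reporting a small unbiased sample of the electorate. Note the expectation figure estimates the probability respondents assign to a No win, not the vote share itself.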
This is fascinating stuff and potentially revolutionary for the research industry as a whole, not just within the world of politics:
Market researchers ask variants of the voter intention question in an array of contexts, asking questions that elicit your preference for one product, over another. Likewise, indices of consumer confidence are partly based on the stated purchasing intentions of consumers, rather than their expectations about the purchase conditions for their community. The same insight that motivated our study—that people also have information on the plans of others—is also likely relevant in these other contexts. Thus, it seems plausible that survey research in many other domains may also benefit from paying greater attention to people’s expectations than to their intentions.
This insight could surely be applied to the communication industry.
Would we understand more about a campaign’s chances of success if we stopped asking people whether they like the ad creative and instead asked whether they think their friends would like it?
Would it be a better indicator of sales success if we asked people to predict whether a campaign would be likely to make other members of their family buy the advertised product?
Are any researchers out there currently using this technique? Anyone willing to give it a go?
I’d love to know the outcome.