"People who answer polls are kind of freaks."
That’s what data journalist Nate Silver said in the podcast Model Talk on his own journalism platform FiveThirtyEight.
Silver is known for his election models used, among other things, to predict who the next US president will be. The main fuel for those calculations? Yes, the polls.
So it’s quite strange to call the people who indirectly help you with important information "freaks". But Silver makes an interesting point.
Because people who participate in polls are not necessarily the same as people who do not. And that can have major consequences.
The polls were not great, but they weren’t terrible either
The polls didn’t do a tremendous job in the last US elections. In states like Wisconsin and Florida, Trump was seriously underestimated.
But Silver expects that Trump will turn out to have been underestimated by 3 to 4 percentage points in national polls and in swing-state polls. That is comparable to the average polling error of recent decades, he emphasises. So this year’s polls weren’t particularly awful either.
Still, it is interesting to look at why polls were off, and how they might be improved. This was also done after 2016. Then it became clear that the level of education was an important factor in a person’s vote. Someone with only a high school diploma, for example, turned out to be more inclined to vote for Trump. Pollsters have taken that into account this year.
However, the vote for Trump was again underestimated in 2020. Why? We need to wait for the definitive answer, because you need data that is not yet available – most importantly, official data on turnout.
There are already some hypotheses – such as a shift in the Hispanic vote – but the most interesting one I found was the idea Silver was referring to: maybe people who take part in polls are "weird".
In America, a poll usually goes like this: a polling agency generates a random telephone number, calls that landline (there are legal restrictions on automatically dialling mobile numbers) and hopes that someone will pick up.
In an interview with Vox, pollster David Shor says that only about 1% of people answer. Besides questions about their voting behaviour, respondents also answer questions about things like their age, gender and level of education.
This last set of questions is used to "weight" the results. If, for example, there are relatively few men in the sample, their answers are counted more heavily.
This weighting is useful but has its limits. Sometimes you overlook a characteristic that turns out to be important, such as education level in 2016.
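The weighting step described above can be sketched in a few lines of code. This is a minimal illustration of the idea, not any pollster’s actual method, and the groups and numbers are made up:

```python
# Sketch of poll weighting (post-stratification). All numbers are
# illustrative: men are under-represented in this imaginary sample.
sample = {"men": 0.30, "women": 0.70}       # shares in the sample
population = {"men": 0.49, "women": 0.51}   # known shares, e.g. from a census

# A group's weight is its population share divided by its sample share,
# so under-represented groups count more heavily.
weights = {g: population[g] / sample[g] for g in sample}

# Suppose a candidate polls at 40% among men and 55% among women.
support = {"men": 0.40, "women": 0.55}

# The unweighted estimate simply follows the skewed sample...
unweighted = sum(sample[g] * support[g] for g in sample)

# ...while the weighted estimate re-balances the groups to their
# population shares before averaging.
weighted = sum(sample[g] * weights[g] * support[g] for g in sample)

print(f"unweighted: {unweighted:.3f}")  # → unweighted: 0.505
print(f"weighted:   {weighted:.4f}")    # → weighted:   0.4765
```

The weighted figure is lower because the under-sampled group (men, here) supported the candidate less; re-weighting pulls the estimate toward them. The catch, as the text notes, is that you can only re-weight on characteristics you thought to ask about.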
But even worse: perhaps that 1% is fundamentally different from the 99% who don’t answer their phone or say no to the request to participate.
No trust in others, no trust in pollsters
Shor found that people in his sample scored high on "agreeableness" – which measures how friendly and cooperative someone is. That makes sense, because then you’re probably more inclined to cooperate in a poll.
He also included a question about trust in his poll: “Generally speaking, would you say that most people can be trusted or that you can’t be too careful in dealing with people?”
In his own poll, 50% answered that other people can be trusted. In the General Social Survey, which has a much higher response rate of 70%, only 30% of the population gave that answer.
Quite a gap, Shor emphasises. According to him, this points to a fundamental problem: people who don’t trust others won’t be so quick to talk on the phone about their voting preferences. But they do vote.
Trump – an instigator of distrust in institutions like the media and polling agencies – is attractive to that group in particular. A possible explanation for the underestimation of Trump voters.
And while Republicans may have participated less in the polls in 2020, for Democrats it was probably the other way round. Not only was there a lot of political engagement because of the anti-Trump sentiment, but they were simply more likely to be at home because of the pandemic. Pretty handy when you call landlines to ask questions.
I’m looking forward to the more detailed analyses in the coming months. Will this hypothesis hold up? And perhaps an even more important question: is it wise to obsess over election forecasts? Wouldn’t it be better to ignore such predictions, as technology journalist Zeynep Tüfekçi put it in a strong argument on her blog?
That’s food for another newsletter. To be continued.