What makes the “best pollsters” of 2024 so accurate?
Primarily, it’s a mix of experimentation and biased estimates that get lucky
Note: Yesterday, I added a new benefit for paying members of Strength In Numbers: A private messaging channel on the app Discord. If you use Discord, you can click here to sync up your account with Substack and get access. If you are new to Discord, you can watch a how-to video here. A private space for “founding members” provides streamlined access to me and other polling professionals and data-driven journalists.
Yesterday, The Atlantic published a big piece about polling in 2024 that contains some insightful findings and lessons for pollsters and poll-watchers alike. Unfortunately, it also contains some characterizations that, as an expert on polls, I believe are wrong and that it is in the public interest to address. And it misses what I think are the biggest criticisms of the conventional wisdom about last year’s polls (which is not an Atlantic-specific problem, to be fair).
We’ll get to those critiques in a second, but to be clear, I generally don’t like writing articles that can be read as “here’s why someone else is wrong,” and I even had something else planned for today. However, I spent the last week in St. Louis at AAPOR, the annual conference for America’s public opinion pollsters, and in this case I do think an injection of nuance into this discourse is necessary before narratives get out of control. This is a data-driven news website about being smarter about politics with data… so, yeah, today we’re going to have to talk about this piece, and about the polls in general.
My intention is for this to read less as a rebuttal to The Atlantic and more as an opportunity to talk about poll bias and transparency in ways other writers haven’t raised before.
The upshot is this: The popular narrative about which pollsters were right in 2024, and why, and what that means for the future, is wrong — for three big reasons. I even whipped up a quick Bayesian model to crunch some numbers nobody else has, and I think this makes the piece pretty convincing…
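To give a flavor of what that kind of calculation can look like, here is a minimal sketch of one standard Bayesian approach to the “good, or just lucky?” question. To be clear, this is not the model used later in the piece, and the pollster names and error values below are invented for illustration: each pollster’s true house effect gets a prior centered at zero, its observed errors are treated as noisy evidence, and estimates for pollsters with only a few polls get shrunk most strongly back toward the field.

```python
import numpy as np

# Hypothetical signed errors (points on the Dem-minus-Rep margin) for a few
# pollsters' 2024 polls. Positive = overestimated Democrats. All numbers invented.
pollster_errors = {
    "Pollster A": [-1.0, -2.5, -0.5],          # leaned Republican, few polls
    "Pollster B": [2.0, 1.5, 3.0, 2.5, 1.0],   # leaned Democratic, more polls
    "Pollster C": [0.5, -0.5, 1.0, 0.0],       # roughly unbiased
}

PRIOR_MEAN, PRIOR_SD = 0.0, 2.0  # prior on a pollster's true house effect (points)
NOISE_SD = 3.5                   # assumed sampling noise of a single poll (points)

for name, errs in pollster_errors.items():
    errs = np.asarray(errs, dtype=float)
    n = len(errs)
    # Conjugate normal-normal update with known noise variance:
    # posterior precision = prior precision + n * data precision.
    post_prec = 1.0 / PRIOR_SD**2 + n / NOISE_SD**2
    post_mean = (PRIOR_MEAN / PRIOR_SD**2 + errs.sum() / NOISE_SD**2) / post_prec
    post_sd = post_prec ** -0.5
    print(f"{name}: raw avg error {errs.mean():+.2f}, "
          f"estimated house effect {post_mean:+.2f} +/- {post_sd:.2f}")
```

Under these assumed numbers, a pollster with only a handful of polls and a Republican-leaning miss ends up with a modest estimated house effect and a wide uncertainty band, while a consistent lean across many polls survives the shrinkage. That is one way to frame the skill-versus-luck question raised below.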
But before we get there, let me just respond to one tangential pull quote from the article. The top of the piece contains the following paragraph:
Polls can’t be perfect; after all, they come with a margin of error. But they should not be missing in the same direction over and over. And chances are the problem extends beyond election polling to opinion surveys more generally. When Trump dismisses his low approval ratings as “fake polls,” he might just have a point. [My emphasis added.]
The first three sentences of this paragraph are more or less correct. But the final line is, frankly, irresponsible.
Trump absolutely does not “have a point” about fraudulent polling data. Remember that Trump’s stated belief, via his lawsuit against pollster Ann Selzer, is that pollsters are systematically manipulating data to rig elections for Democrats. But the popular surveys showing Trump with a net-negative approval rating are not “fake.” Unlike The Atlantic, apparently, I think that is worth stating clearly in an article about the accuracy of polling.
Trump is not speaking metaphorically about the validity of the data, the size of the margin of error, or the magnitude of uniform bias across cycles. He does not have a nuanced understanding of, and is not making an argument about, polling bias and nonresponse adjustments. He is calling the data fraudulent and claiming it has been rigged against him personally.
What’s really going on with the “fake polls” comments is that Trump is cooking up accusations of poll-rigging to deny reality and make it look like he has more support for a mostly unpopular political agenda than he actually does, per YouGov’s polling.
OK, fair enough: we all write bad lines sometimes, and I don’t think this is what the author meant. But words matter, and you do not have to hand it to Trump on “fake polls.”
With that out of the way, time to move on to the piece. The three big points are:
The “best” pollsters in 2024 were some of the worst in 2018 and 2022 (so how good will they be next time?)
The “best” pollsters in 2024 dramatically overestimate Republicans compared to other surveys (are they good, or just lucky?)
The “best” pollsters in 2024 are also the shadiest about their methods (if they’ve solved the problem, why won’t they show their work?)
The rest of this post is paywalled for paying members of Strength In Numbers. If you have the means to support independent, data-driven political journalism, please consider purchasing a subscription. You can also use a one-time free trial code from Substack if you read this on the app, or email me if you really want to read it and have already used your code.
Update: Several readers have emailed me pointing out they have been unable to upgrade to a paid subscription, instead getting the error “Customer not found.” If this happens to you, please email me for support. It’s a problem on Substack’s end that is easily fixable.