In 2016, pollsters predicted a landslide victory for Hillary Clinton. In 2020, they dramatically overestimated Joe Biden’s support.

You knew that. But did you know that pollsters’ performance in 2020 was the worst in decades—and did you know why?

Josh Clinton, a Vanderbilt University political scientist, led the American Association for Public Opinion Research (AAPOR) task force on the 2020 polling. He concluded that “polling error” in 2020 was the worst “in 40 years for the national popular vote,” and the worst in at least twenty years for the “state-level presidential, senatorial and gubernatorial” votes.

And there was no silver lining. The 2020 pre-election polls were badly in error whether “conducted by phone or online and for all kinds of samples.”


We are now just a few months out from the 2024 presidential election. We are once again awash in pre-election polls and presidential horse-race punditry and reportage. But there is still time for pollsters and the media outlets that publicize polling results to do themselves and the rest of us a favor: come clean regarding how polls work, what a good poll can and can’t tell us, and why good polling has become so rare.

The Keys to Good Polling

Certain polling techniques are complicated, but any good poll has three basic features. First, it’s based on a random sample of an entire population: any given voter, consumer, or other soul has an equal chance of being interviewed. Second, it accounts for sampling error, the difference between the results that random samples taken at the same time can produce; for example, if one poll shows that 55 percent of all Americans approve of how the president is handling the job, and another poll taken at the same time shows that 50 percent do, the sampling error is 5 percent. Finally, it words questions fairly, without ambiguous or loaded language and without signaling what the “right” answer is.

The keys to good polling have never changed. What has changed, however, is how difficult and expensive it is for pollsters to interview a large enough number of randomly selected individuals to ensure that the poll’s margin of error (MOE)—how much the opinions of the sample differ from what the results would have been had they interviewed the entire population from which the sample was drawn—is slight.

I’ll explain more about MOE below. For now, the important point is that, as it became easier for people to screen out unwanted phone calls and texts, polling response rates plummeted. For instance, between 1997 and 2018, response rates to Pew Research Center polls fell from 36 percent to 6 percent.

Pollsters must now contact hundreds and hundreds of people in order to land just one respondent. That can be very expensive, so many pollsters have economized by using smaller-than-ideal sample sizes, under-sampling hard-to-contact people and subpopulations, or experimenting with novel ways to cobble together a sample.

For instance, in May, a Fox News poll sampled 1,126 registered voters, 122 of whom were reached via landlines, 700 of whom answered cellphones, and 293 of whom received a text and then completed the survey online.

Old-school polling purists don’t like such mix-and-match sampling methods; but, at least in my view, they’re still better than “credibility interval” polls, with samples based on Bayesian statistical models and “nonprobability opt-in online surveys.” (Don’t ask.)

Whack-a-MOE

But even the best poll is less like a GPS system that updates automatically and promises final-destination precision than like an imperfect roadmap with an immediate expiration date: the streets and highways might change, anywhere from a little to a lot, the day after the map is printed and in hand.

For example, for a poll’s margin of error (MOE) to be only 3 percent, a pollster must interview approximately 1,100 randomly selected individuals. Say that a good poll finds that 44 percent of likely voters favor Trump and 43 percent favor Biden, with a MOE of 3 percent, notated as +/- 3.

But what does “+/- 3” mean? It means that if the votes had been cast when the poll was taken (a poll says nothing about any time before or after it was taken), the result probably would have fallen somewhere between Trump 47 percent to Biden 40 percent (add 3 to Trump’s 44, subtract 3 from Biden’s 43) and Biden 46 percent to Trump 41 percent (add 3 to Biden’s 43, subtract 3 from Trump’s 44). The bottom line is not a one-point spread favoring Trump; rather, it’s a twelve-point range stretching from Trump up by seven points (47 percent to 40 percent) to Biden up by five points (46 percent to 41 percent).
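The arithmetic above is easy to check by hand or by machine. The short Python sketch below is my own illustration, not anything from a polling organization; the function names are invented. It verifies both the rule of thumb that roughly 1,100 interviews buys a 3-point MOE (using the standard worst-case 95 percent confidence formula for a simple random sample) and the conversion from a poll’s two top-line numbers into the full range they imply.

```python
import math

def moe_95(n: int) -> float:
    """Worst-case 95 percent margin of error, in percentage points,
    for a simple random sample of size n (p = 0.5 maximizes the error)."""
    return 100 * 1.96 * math.sqrt(0.25 / n)

def poll_range(a_pct: float, b_pct: float, moe: float) -> tuple[float, float]:
    """Return (candidate A's largest lead, candidate B's largest lead),
    applying the MOE to each candidate in opposite directions."""
    a_best = (a_pct + moe) - (b_pct - moe)  # A at the top, B at the bottom
    b_best = (b_pct + moe) - (a_pct - moe)  # B at the top, A at the bottom
    return a_best, b_best

# Roughly 1,100 interviews yields about a 3-point MOE:
print(round(moe_95(1100), 2))   # 2.95

# The example above: Trump 44, Biden 43, MOE +/- 3.
trump_best, biden_best = poll_range(44, 43, 3)
print(trump_best, biden_best)   # 7 5 -> a twelve-point range
```

Note that the twelve-point range is simply twice the MOE applied to each candidate: the spread can move by up to 2 × MOE in either direction.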

RealClearPolitics (RCP) is widely considered the gold standard for reporting polling results. RCP reports spreads rather than ranges. For example, on June 16, 2024, RCP summarized the ten most recent national Biden vs. Trump polls. Four polls had Trump and Biden in a “tie”; one had Biden up by one point; another had Biden up by two points; three had Trump ahead by one or two points; and one had Trump ahead by five points.

Consider the NPR/PBS/Marist poll from that bunch, which RCP reported as Trump 49 percent to Biden 49 percent, zero spread, a “tie.” But with its MOE of +/- 3.8, what that poll really told us was that if the election had been held on the days when the poll was conducted, the result could have fallen anywhere from Trump over Biden by 7.6 points to Biden over Trump by 7.6 points, a 15.2-point range.

Likewise, RCP reported the Forbes/HarrisX poll showing Trump with 51 percent and Biden with 49 percent as “Trump +2.” In fact, with its MOE of +/- 3.1, behind that two-point spread was a range stretching from Trump over Biden by 8.2 points to Biden over Trump by 4.2 points.

Stop the Spread

That same RCP ten-poll table reported the spreads on two polls (CBS and Yahoo) without giving their respective MOEs. The table below summarizes the other eight polls and reports both their respective spreads and ranges.

 

2024 General Election Poll Ranges: Trump vs. Biden

 

Poll              Dates    Sample     Trump %  Biden %  MOE      Spread  Range
NPR/PBS/Marist    6/10-12  1,184 RV   49       49       +/- 3.8  0       Trump +7.6 / Biden +7.6
Morning Consult   6/14-16  10,132 RV  43       44       +/- 1.0  1       Trump +1 / Biden +3
Reuters/IPSOS     6/10-11  930 RV     41       39       +/- 3.0  2       Trump +8 / Biden +4
DailyKos/Civiqs   6/8-11   1,140 RV   45       45       +/- 3.1  0       Trump +6.2 / Biden +6.2
Emerson           6/4-5    1,000 RV   46       45       +/- 3.0  1       Trump +7 / Biden +5
Forbes/HarrisX    5/30-31  1,006 RV   51       49       +/- 3.1  2       Trump +8.2 / Biden +4.2
I&I TIPP          5/29-31  1,675 RV   41       41       +/- 2.5  0       Trump +5 / Biden +5
Rasmussen         5/28-30  1,080 LV   48       43       +/- 3.0  5       Trump +11 / Biden +1

 

Note: RV = registered voters; LV = likely voters. For more detailed information about how a poll was conducted, consult the polling organization’s website.

The first line of the RCP table gave the “RCP Average” spread: “Trump +0.8” points over Biden. That spread number was derived by adding Trump’s tallies in the ten polls and dividing by ten, doing the same for Biden, and subtracting the latter number from the former.
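As a sketch of that averaging step, here is the same arithmetic in Python, using the eight polls from the table above (my own illustration; the real RCP Average also included the CBS and Yahoo polls, so its figure differs slightly):

```python
# (Trump %, Biden %) top lines from the eight polls in the table above.
polls = [
    (49, 49),  # NPR/PBS/Marist
    (43, 44),  # Morning Consult
    (41, 39),  # Reuters/IPSOS
    (45, 45),  # DailyKos/Civiqs
    (46, 45),  # Emerson
    (51, 49),  # Forbes/HarrisX
    (41, 41),  # I&I TIPP
    (48, 43),  # Rasmussen
]

avg_trump = sum(t for t, _ in polls) / len(polls)
avg_biden = sum(b for _, b in polls) / len(polls)
spread = avg_trump - avg_biden       # positive means Trump ahead
print(avg_trump, avg_biden, spread)  # 45.5 44.375 1.125
```

Note that the averaging throws away each poll’s MOE entirely: the output is a single spread number with no range attached, which is the article’s objection in a nutshell.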

The RCP Average is the product of what’s known as a “poll aggregator,” which averages polls’ spreads. Proponents of poll aggregation reason that, since polls conducted at different times typically differ in sample populations, sample sizes, sub-population weighting protocols, interview methods, and question wording, averaging their results will more closely mirror the true distribution of opinions or preferences.

To me, that’s about as rational as believing that a hurricane that blows through a junkyard containing spare parts from a dozen different makes and models of cars is more likely to assemble and leave a well-running all-terrain vehicle (ATV) in its wake than you are to find one at an ATV dealership.

To make hay and headlines from the presidential horserace, pollsters, pundits, and talking heads must presume to know which horse is ahead or behind and by how many lengths. But they don’t know. And any “tied” polls they hype as portending a neck-and-neck race in November actually show each horse ahead of the other and portend only a double-exposure photo finish.

Besides, when it comes to betting on how people will vote, or which new TV shows people will watch, the best bet is always the one placed on the best poll conducted nearest to the date when the people choose.

There is no panacea for what ails polling. But greater transparency in polling might help to weed out the worst polls while prompting the best pollsters to discover how to do better in time for 2026 or 2028.

Image by freshidea and licensed via Adobe Stock.