November 4, 2020

CHANG | Polls Dropped the Ball … Again


Last night was another rough showing for pre-election polling, four years after the so-called polling debacle of 2016. It’s still too early to compare all of the results to what the gurus predicted, but several key states and races already appear off the mark, leaving us to wonder: Can we ever trust polls again?

Monitoring the election results this year seemed even more like a game than usual, even though the results of each election would have real implications for Americans across the country. Due to COVID-19 and what is likely to be higher turnout than usual, the number of mail-in absentee and in-person early ballots was far higher than in past years. The predicted “blue mirage” showed up en masse: Many of the early votes and mail-in ballots in states such as Texas and Florida went blue, making it appear that Democratic nominee Joe Biden had a slight advantage. As the night wore on, President Donald Trump gained ground in a red shift. If voters continue to take advantage of early and absentee voting in the future, we may see more of these patterns that break for one party early in the night only to suddenly reverse in a heart-wrenching turn. Unsurprisingly, many of these races are close and could take until the end of the week to reach a final tally.

Four key swing states — Florida, North Carolina, Ohio and Texas — that went early for Biden and then red-shifted toward Trump were also states where FiveThirtyEight predicted a higher vote share for the Democrats than the counts reported late in the night, even allowing for a 3 percentage point “2016-sized” national error. In addition, several other states were systematically overpredicted for Biden: The FiveThirtyEight model forecast margins of +2.5, +1.8 and +1.0 for Biden in Florida, North Carolina and Georgia, and narrow losses of -0.6, -1.5 and -1.5 in Ohio, Iowa and Texas, respectively.

At the time of writing, these states stood at -3, -1, -3, -8, -8 and -6 for Biden, although only Florida, Iowa and Ohio had been definitively called for Trump by the Associated Press. One possible reason the models underpredicted Trump’s support was higher support for him and the Republicans among Latinos: Majority-Hispanic precincts were +11.5 R and heavily Cuban precincts were +13 R in 2020 compared to the same voter groups in 2016. No matter the reason, though, each of these races represented a surprisingly large loss for Biden in ways that pollsters didn’t expect and couldn’t explain.
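For readers who want the misses laid out explicitly, here is a rough back-of-the-envelope sketch in Python using the figures above; the “reported” margins are the counts at the time of writing and will shift as ballots continue to be tallied.

    # Predicted and reported margins (Biden minus Trump, in percentage points),
    # taken from the FiveThirtyEight forecasts and the counts cited above.
    # Reported figures are as of the time of writing and will move as counting continues.
    predicted = {"FL": 2.5, "NC": 1.8, "GA": 1.0, "OH": -0.6, "IA": -1.5, "TX": -1.5}
    reported = {"FL": -3, "NC": -1, "GA": -3, "OH": -8, "IA": -8, "TX": -6}

    for state in predicted:
        miss = predicted[state] - reported[state]  # positive = forecast overshot Biden
        print(f"{state}: predicted {predicted[state]:+.1f}, reported {reported[state]:+.1f}, miss {miss:+.1f}")

    average_miss = sum(predicted[s] - reported[s] for s in predicted) / len(predicted)
    print(f"Average miss: {average_miss:+.1f} points")

Every one of those misses runs the same way, toward Biden, which is what makes the error look systematic rather than random.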

This pattern of overpredicting Democratic victories wasn’t limited to the national races, either. The House races in N.Y.-02, my home Congressional district of Ind.-05 and Va.-02 were all rated as toss-ups; Nate Silver had the Democrats’ probability of victory at 57 percent, 50 percent and 49 percent, respectively. In each of these districts, the Democrats lost handily: The margins ended up at -16, -6 and -5. Sure, the model spit out probabilities and not point spreads, but these results don’t look like races the Democrats ever had a fighting chance in.
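To see why those margins feel so far outside the range of a toss-up, here is a toy calculation; the normal distribution and the 5-point district-level error are my own assumptions for illustration, not anything pulled from FiveThirtyEight’s actual model.

    # Toy calculation: if a race is truly a 57 percent proposition for the Democrat,
    # how plausible is a 16-point loss? Assume (purely for illustration) that the
    # final margin is normally distributed with a 5-point standard deviation.
    from statistics import NormalDist

    win_prob = 0.57   # forecast probability of a Democratic win (N.Y.-02)
    sigma = 5.0       # assumed standard deviation of the final margin, in points

    mu = sigma * NormalDist().inv_cdf(win_prob)   # implied expected margin, about +0.9
    p_blowout = NormalDist(mu, sigma).cdf(-16)    # chance of losing by 16 or more

    print(f"Implied expected margin: {mu:+.1f} points")
    print(f"P(lose by 16+ points): {p_blowout:.4f}")  # well under 1 in 1,000 here

Under those assumptions, a 16-point loss is a worse-than-one-in-a-thousand event, which suggests the problem isn’t bad luck so much as inputs that were off to begin with.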

At the national level, many of the forecasts predicted strong Biden victories — even landslides. The Washington Post’s Henry Olsen predicted 350 electoral votes for Biden, The Economist predicted 356 and FiveThirtyEight predicted 348.5. Each of these baseline scenarios is already impossible: Trump has claimed too many electoral votes for these Democratic pipe dreams.

The underrepresentation of the Trump voter (and perhaps the Republican voter generally) appears to be a systematic and continuing trend that pollsters will need to adjust for, either with different sampling methodologies or with corrections built into their models. There’s still quite a bit of time to go, and with votes still being counted in Georgia due to a burst pipe and in other swing states including Pennsylvania, Michigan and Wisconsin, the early results could yet be reversed and the pollsters’ predictions confirmed. In a few weeks, we’ll have more information, and only then will we be able to make better hypotheses about why polling went wrong … again.
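One of the corrections that gets discussed most often is post-stratification: reweighting respondents so the sample matches known population benchmarks, with education being the classic example after 2016. The sketch below uses invented numbers purely to show the mechanics.

    # Illustrative post-stratification: reweight respondents so the sample matches
    # known population shares. All numbers here are invented for illustration.
    sample_shares = {"college": 0.55, "non_college": 0.45}      # who answered the poll
    population_shares = {"college": 0.40, "non_college": 0.60}  # e.g., census benchmarks
    support = {"college": 0.58, "non_college": 0.44}            # share backing Candidate A

    raw_estimate = sum(sample_shares[g] * support[g] for g in support)
    weighted_estimate = sum(population_shares[g] * support[g] for g in support)

    print(f"Unweighted estimate: {raw_estimate:.1%}")    # 51.7%, overstating Candidate A
    print(f"Weighted estimate:   {weighted_estimate:.1%}")  # 49.6% after reweighting

If the voters a poll keeps missing simply won’t pick up the phone in the first place, though, no amount of reweighting on the variables pollsters do measure will fully fix it, which is why the sampling itself may need to change.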

I don’t know when elections became so dominated by numbers and so inundated by polling, models and the quantitative road to 270. From the experience of the past three general elections, though, we may need to step away from the polls and adjust our priors before we can trust the numbers again. The statistics major in me doubts I’ll ever be able to put down the polls and go by intuition alone. Or, worse yet, leave it up to journalists with the attitude of “we’ll know when we know.”

I’m that person in your friend group who’s willing to stare at the New York Times needles and question how the probabilities are being calculated. This year, though, I’ve lost a good amount of faith. The next round of models will be viewed through a lens of suspicion and taken with a quantity of salt the size of a Terrace salad. Without serious fixes to the way polling is conducted and models are constructed for the next election, it will be next to impossible to put faith in predictions before the results are set in stone.

 

Darren Chang is a member of the class of 2021 in the College of Arts and Sciences. He is a columnist in the opinion department and can be reached at [email protected].