
The Woes of Collecting Public Opinion: Lessons from an Outlier Election Poll.

Life lessons from when academia meets the press: what we learned from our experience moonlighting as an outlier poll during a contested election.


By: Tania Gutsche and Jill Darling

In 2012, the team behind the USC Dornsife / LA Times “Daybreak” poll was running a presidential election poll that was little known outside academia. It was conducted in the American Life Panel (ALP), a survey panel based at the RAND Corporation. That year, we posted election poll data online, collected from July 4 until a week after the election.

Our online chart of ALP data updated daily at midnight. We promoted the data to our existing data users, but until Nate Silver’s fivethirtyeight.com included us in its poll rankings, few in the public or the media were following our results. Pundits and pollsters deemed the poll an “outlier” because it consistently showed President Obama with a substantial lead over Republican Mitt Romney. To account for this, Silver assigned our poll results a +1.5 Democratic bias. But when the election ended and the final votes were tallied, our poll had come closest to the actual result of Obama’s re-election. Our model had passed its first test.

Last July, we began to test the model again, this time based at USC, using the Understanding America Study internet panel, in partnership with the Jesse M. Unruh Institute of Politics and the Los Angeles Times. We used the same probability questions, pioneered by Manski and Delavande, as we did in 2012, and the same structure: asking questions weekly and posting an estimate each night that reflected a seven-day rolling average. In the spirit of transparency, we posted our data files and documentation online so that other scholars, and members of the public, could analyze or reweight our results any way they liked.
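As a rough illustration of the nightly smoothing step described above, here is a minimal sketch of a trailing seven-day rolling average. The function name and the sample numbers are hypothetical, and the real Daybreak pipeline also applies survey weighting to the underlying probability responses; this only shows the averaging idea.

```python
from collections import deque

def rolling_average(daily_estimates, window=7):
    """Trailing rolling average over daily poll estimates.

    daily_estimates: list of (day, value) pairs in chronological order,
    where value might be the mean stated probability of voting for a
    candidate on that day. Returns one smoothed value per day once the
    window is full.
    """
    window_vals = deque(maxlen=window)  # automatically drops the oldest day
    smoothed = []
    for day, value in daily_estimates:
        window_vals.append(value)
        if len(window_vals) == window:
            smoothed.append((day, sum(window_vals) / window))
    return smoothed

# Illustrative daily support estimates (percent); not real poll data.
daily = list(enumerate([44.0, 45.2, 43.8, 44.5, 45.0, 44.1, 44.6, 45.3], start=1))
print(rolling_average(daily))
```

Each night’s published number thus reflects every response from the previous seven days, which damps day-to-day noise at the cost of reacting more slowly to real shifts in opinion.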

Once again, our poll was an outlier. While other polls often showed Clinton with wide leads, ours consistently showed Trump ahead. Citizens, pundits, and scholars skeptical of the results bombarded us with emails, tweets, and editorials. Some demanded that Gov. Jerry Brown take away all of our funding (our department receives no funding from the State of California). Academics questioned our methodology. Green and Libertarian party leaders asked us to include Stein’s or Johnson’s name in the poll (we used the same questions as in 2012, offering a “someone else” option). We were accused of political bias by columnists, bloggers, concerned citizens, and even some members of the USC community. Some even suggested (erroneously) that we had ties to far-right media outlets such as Breitbart News, or to the Trump campaign itself.


USC’s media relations team kept many complaints at bay, and no one from USC’s administration ever officially discussed the controversy with us, supporting our academic freedom, free speech, and the integrity of our research.

We were the target of so much negative press, with such highlights as “WTF Is Up with the USC/Dornsife/LA Times Tracking Poll” (Daily Kos), that Silver posted an article asking people to leave us alone, defending our poll and its nontraditional approach while assigning our results a +6 Republican bias. Calbuzz called the Daybreak poll a “World Class Flapadoodle” and claimed that the “LA Times/USC ‘Daybreak Poll’ Dishonors the Paper.” A New York Times Upshot headline asserted that our “risky” and “unusual” weighting methods were distorting national polling averages, and the article nearly identified one of our respondents. Amid the rain of criticism were occasional messages of love, including from Trump himself, who tweeted that it was a “great new poll.” Our outlier was the New York Yankees of polls: as loved as it was hated.

As the initial shock of the election results subsided, everyone remembered that we actually had shown Trump ahead. We were inundated with calls and constant requests for interviews. Everyone wanted us to explain “the demise of polling” and how we got it right.

As we have said many times, we did not actually get it right. We had measured something closer to the prospective popular vote, which Clinton won by more than 2 points (our poll showed her losing by 3.2 percentage points). However, the Daybreak poll captured something other polls did not: a level of support for Trump among voters, not just self-identified strong Trump supporters but also minorities and high-income individuals, who seemed to the mainstream media to be unlikely backers of the businessman. Why?

We have a reliable probability-based sample. We have a question format that has been shown to produce accurate results. We have an online panel, which reduces social-desirability bias (see Jill E. Darling’s recent blog post). And we’re hiding nothing. All of our data were available to the research community for independent analysis during the election, and we will be releasing even more data and documentation in the coming months. We will publish and present the results of our post-election analysis to share what we have learned with other election researchers. The Daybreak Poll may not have all the answers, but the future of polling certainly includes transparency, innovation, and learning from past experience.

The election is over, but our research on public opinion will continue. We’ll see you again in four years.  Or sooner.


