The Evidence Base

Informing Policy in Health, Economics & Well-Being
A collaboration with
USC Dornsife Center for Economic and Social Research

Who is the Next President?

Niels Bohr famously said, “Prediction is very difficult, especially if it’s about the future.” I have this from a website at the University of Exeter in the UK, which has many memorable quotes about forecasting. Recently we saw a stark example: an internal poll for the Eric Cantor primary campaign showed him leading among likely Republican voters by 34 percentage points; the actual result was a loss by 10 percentage points.

Despite the many stark warnings on the Exeter website against trying to predict the future (for instance, from Lao Tzu, the 6th-century BC Chinese poet: “Those who have knowledge, don’t predict. Those who predict, don’t have knowledge.”), election forecasting is big business. And, to be honest, it is exciting. It is like betting on the result of a soccer game during the World Cup (as many a worker has done in “office pools” around the world).

So, in 2012 some of us decided to see for ourselves how difficult it would be to forecast the popular vote in the Presidential election. The result was “The RAND Continuous 2012 Presidential Election Poll.” The name requires some explanation. It was called the RAND poll because the team that designed and conducted it (the authors of the Public Opinion Quarterly article that just appeared) was at RAND at the time; we are now all at the Center for Economic and Social Research (CESR) at USC. It was called a “continuous poll” because it was automatically updated every night.

Here is how it worked. We used the RAND American Life Panel (ALP) as our pool of respondents. The ALP consists of about 6,000 respondents who regularly answer questions over the Internet. The ALP is a “probability Internet panel,” which means that respondents are recruited using traditional sampling methods that do not require them to have Internet access. If, during recruiting, it turns out that a potential respondent does not have Internet access, he or she is provided with a laptop and broadband Internet access. Several studies have shown the superiority of a probability Internet panel over a convenience Internet panel, in which respondents are recruited over the Internet in a variety of ways.

In early July 2012 we invited ALP respondents to participate in our continuous poll. If a respondent agreed, he or she would answer three to five questions every week. We split the sample into seven equal parts, so that on every day of the week we would ask one seventh of the panel to answer our brief questionnaire. Approximately 3,500 ALP respondents participated, so every day about 500 respondents were asked the election questions. We averaged the answers over one full week and used that to calculate our forecast. Each day the window moved forward by one day, so that answers from eight days earlier were dropped and the current day’s answers were added. The whole process was automated, and new results were made available every night at 1 a.m. The website still exists, but obviously we stopped updating on Election Day.
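For readers who like to see the mechanics, the nightly update is essentially a moving-window average. The sketch below (in Python) illustrates the idea under simplifying assumptions: each day’s responses have already been reduced to an average per candidate, and the window is a plain unweighted seven-day mean. The names and numbers are illustrative; the actual pipeline, including its weighting, was more elaborate.

```python
from datetime import date, timedelta

# Hypothetical daily summaries: for each calendar day, the average stated
# percent chance of voting for each candidate among that day's one-seventh
# slice of respondents. All names and numbers are illustrative.
daily_means = {
    date(2012, 10, 25): {"Obama": 49.1, "Romney": 46.0},
    date(2012, 10, 26): {"Obama": 49.5, "Romney": 45.8},
    date(2012, 10, 27): {"Obama": 49.3, "Romney": 46.1},
}

def rolling_forecast(daily_means, as_of, window=7):
    """Average the most recent `window` days; older days drop out automatically."""
    recent_days = [as_of - timedelta(days=k) for k in range(window)]
    used = [daily_means[d] for d in recent_days if d in daily_means]
    if not used:
        return None
    candidates = used[0].keys()
    return {c: sum(day[c] for day in used) / len(used) for c in candidates}

print(rolling_forecast(daily_means, as_of=date(2012, 10, 27)))
```

Because only the last seven days enter the average, each nightly run automatically drops the day that has aged out and adds the newest one, which is all the “continuous” updating amounts to.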

In his famous 538 blog, Nate Silver included the RAND poll, and on the Saturday after the election he ranked its performance fourth out of 23 firms. We were one of only four firms that had overestimated the gap between Obama and Romney. However, that ranking was based on a preliminary count of the votes shortly after Election Day. By the end of the year, when the final tally was in, the gap between Obama and Romney had widened considerably, and even our poll underestimated its size (we had forecast a gap of 3.32 percentage points, while the final result showed a spread of 3.85 percentage points). Thus we were off by about half a percentage point, which presumably tied us for first place (the Pharos group had predicted a gap about a full point larger than ours).

So why did we do well? The most likely explanation may simply be beginner’s luck. However, there are also a few innovations that may have helped. The first is the weighting scheme, which was quite complex and tried to use as much information as possible, in particular about voting behavior in the 2008 election. A second innovation is that we did not try to define “likely voters.” Virtually every polling firm has an algorithm to determine who is likely to vote and then only uses the answers of respondents deemed likely voters. Of course, some of the likely voters end up not voting, and some of the unlikely voters actually vote. Instead, we asked everyone for their subjective probability of voting and weighted their answers by that probability. We owe that innovation to a paper by Adeline Delavande and Charles Manski. The same authors also argue that it is better not to ask whom someone intends to vote for, but rather to ask for the percent chance that one will vote for a particular candidate, which allows respondents to express uncertainty. We followed that suggestion as well.
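A stylized version of that probabilistic approach looks roughly like the following (again in Python, with made-up field names and numbers): each respondent supplies a percent chance of voting and a percent chance of choosing a given candidate if voting, and the forecast weights every answer by the respondent’s own turnout probability rather than filtering on a likely-voter screen. This is only a sketch of the idea, not the poll’s actual estimator or weighting scheme.

```python
def probabilistic_vote_share(respondents):
    """Estimate a candidate's vote share from probabilistic answers.

    Each respondent reports:
      weight   - survey weight aligning the sample with the population
      p_vote   - subjective probability (0-1) of voting at all
      p_choice - subjective probability (0-1) of voting for the candidate, if voting
    Every answer counts, weighted by the respondent's own chance of turning out,
    instead of keeping only respondents classified as "likely voters".
    """
    numerator = sum(r["weight"] * r["p_vote"] * r["p_choice"] for r in respondents)
    denominator = sum(r["weight"] * r["p_vote"] for r in respondents)
    return numerator / denominator

# Made-up respondents for illustration only.
sample = [
    {"weight": 1.2, "p_vote": 0.95, "p_choice": 0.80},
    {"weight": 0.8, "p_vote": 0.40, "p_choice": 0.55},
    {"weight": 1.0, "p_vote": 0.70, "p_choice": 0.20},
]
print(probabilistic_vote_share(sample))  # share among those expected to turn out
```

The denominator restricts the estimate to expected voters, so respondents with a low chance of voting contribute little but are never discarded outright.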

Whether these innovations really explain our successful performance remains to be seen. We will need quite a few more Presidential elections before we can say with any certainty whether our approach is indeed better. Or we could move to a country that has elections every half year or so; that would be at least one advantage of a less stable democracy.

Since we moved to USC we have started building a new and improved probability Internet panel, which we call the Understanding America Study. How well we understand America will become a little clearer in 2016. If we fail, we can always fall back on another insight from the Exeter website: “Forecasting is the art of saying what will happen, and then explaining why it didn’t!”