Transcription – Daron Shaw Interview

Q:              Our camp — whatever it may be, when a campaign takes a hit as a result of an event, are there ways of adjusting the things that you’ve done [01:01:00] most of your work on?  Are there ways of adjusting which states are now more or less in play?  Which media markets are more or less efficient?  In other words, are there ways of, in the middle of the fight, calling audibles and saying, well, now we need to put more money here instead of there to get out of this problem we’ve probably created for ourselves, or do you send the president here instead of there to accomplish the same goal?

SHAW:       Yes, I think in 2000 and 2004, they were fairly crude.  And what I mean by that is that we were doing the tracking polls, and so, you would look to see if there was any differential reaction across your battleground states, and then the markets within your battleground states.  And so, maybe something played in Peoria, but it didn’t play in, you know, Reno.  I think they’re much more sophisticated about that now, in that they are more aware of the nature of the demographic and political characteristics of the persuadable and mobilizable voters within each media market. [01:02:00] We were taking broad snapshots, just aggregate reactions within the media markets, and I think they’re better than that now.  So, the short answer is, yes, you would always be looking to make adjustments and audibles to slightly increase the advertisements or potentially add a visit in a place where the data suggested you were hemorrhaging voters.
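[Editor’s note: a minimal sketch of the market-by-market check Shaw describes, that is, scanning tracking polls for a differential reaction and flagging media markets that may warrant extra ads or a visit. The market names and poll numbers below are hypothetical.]

```python
# Hypothetical tracking-poll snapshots: net support (our candidate minus opponent),
# in percentage points, for a few battleground media markets before and after an event.
before = {"Peoria": 4.0, "Reno": 1.5, "Columbus": 2.0, "Green Bay": 3.5}
after = {"Peoria": 3.5, "Reno": -2.0, "Columbus": 1.5, "Green Bay": 3.0}

# Flag markets where the drop exceeds a threshold -- candidates for an "audible":
# a heavier ad buy or an added candidate visit.
THRESHOLD = 2.0
for market in before:
    swing = after[market] - before[market]
    if swing <= -THRESHOLD:
        print(f"{market}: net support moved {swing:+.1f} points -- consider shifting resources here")
```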

But I think nowadays, the information they have from the voter lists is much more sophisticated.  So if you get polling information, even nationally or within states, that suggests certain sorts of voters have been particularly bothered by something the president said, you know that certain markets have those characteristics in abundance, and can make adjustments.  And you may even have heard that in 2008 and 2012 there was sufficient feedback, sufficient numbers coming back from the voter list, that you could actually get raw empirical data to verify whatever assumptions you had.  So, we did a little bit of that in ’00 [01:03:00] and ’04.  I think they do a lot more of it, and a lot better job of it, now.

Q:              The exit polls turned out to be wildly off in ’04.  Can you account for that?

SHAW:       There was a —

Q:              And do you think it had — the rumors of those polls that sort of made their way onto the Drudge Report and elsewhere during the day, do you think they had any effect on the actual voting?

SHAW:       In 2004, I remember Drudge or other people had leaked, you know, the initial exit polls, and there’d been a few analyses after the fact.  I think they tend to move in contradictory directions.  Some people think that, well, you know, the rumors that Kerry had a two-point lead would elate the Kerry supporters and depress the Bush supporters.  Well, it’s possible that happened.  It’s possible the opposite happened, where it made Bush supporters mad or freaked them out, and the Kerry supporters were fat and happy. [01:04:00] I haven’t seen much empirical evidence either way that the distribution of the vote over the course of the day changed much from what one would expect.

I think, you know, what happened, of course — and this is not widely known; it is amongst public opinion experts, but it’s not widely known amongst the public — is that there tends to be a bias to the exit polls.  And this is something that actually very much frustrated me in the aftermath of the 2004 election.  There were a lot of political scientists and statisticians who went to calculate the chance that the exit poll was off by the amount it was, simply looking at the number of people that were involved in the exit polls; in other words, your margin of error as a function of the number of people that you sample.  So, exit polls involve thousands to tens of thousands of respondents, and so these statisticians would come on the Keith Olbermann show, for instance, and say, well, the chance that the exit poll is off by that much is just infinitesimal; therefore, there must have been massive voter fraud in Ohio.  Well, that’s [01:05:00] a wonderful conclusion, if you assume there’s no other error in the exit poll, which is an asinine assumption.  Exit polls, like all polls, have response biases.  Some people are more likely to do them, and some people are less likely.  And in the case of 2004, what we found was that Kerry people were more likely to agree to do the exit poll than were Bush supporters.  So you get a slight response bias favoring Kerry.
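[Editor’s note: the arithmetic behind Shaw’s objection. With an exit-poll sample in the tens of thousands, the pure sampling margin of error is well under a point, but even a modest differential response rate between the two candidates’ voters produces a bias several times larger. The sample size and response rates below are illustrative, not the actual 2004 figures.]

```python
import math

# 95% sampling margin of error for a proportion near 0.5.
def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

n = 13000  # roughly the scale of a large exit-poll sample
print(f"Sampling margin of error with n={n}: +/- {100 * margin_of_error(n):.2f} points")

# Now add differential nonresponse: suppose candidate A actually wins 51-49,
# but A's voters agree to take the exit poll 50% of the time and B's voters
# 55% of the time (illustrative numbers only).
true_a, true_b = 0.51, 0.49
resp_a, resp_b = 0.50, 0.55
observed_a = (true_a * resp_a) / (true_a * resp_a + true_b * resp_b)
print(f"Exit poll shows A at {100 * observed_a:.1f}%, a bias of {100 * (observed_a - true_a):+.1f} points")
# The raw exit poll shows the actual winner losing -- with no fraud involved --
# because of a response bias the sampling-error calculation never sees.
```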

And that’s something we’ve seen — you see it in almost every election.  In 2008, for instance, in the Democratic primaries, even though Hillary supporters were enthusiastic and willing to take the exit poll, Obama supporters were even more so.  So there was always a slight pro-Obama bias to the exit poll, or slight error.  Now, after a couple of primaries, you know, the exit pollsters figured that out and you could make adjustments.  But in 2004, you know, you can’t really gauge the extent to which you’re going to have those sorts of response biases heading in.  You have to sort of see them on Election Day.

[01:06:00] But at that point, the networks didn’t have a quarantine room like they do now, where there’s a lockdown on information until five o’clock.  So the first wave information gets out early.  And I think it’s possible that it had an effect on deploying resources.  I’d heard, for instance, that the Kerry people were sort of having information fed to them on the exit poll, and diverted a helicopter with Jesse Jackson to Philadelphia because they thought they were down a little bit from where they wanted to be in Philly, and so Jackson was actually in the air, and they decided to have him land in Philadelphia.  I don’t know if that’s apocryphal or not.  I hope it’s true; I love the story.  (laughter)

So there’s some suggestion that the exit poll actually affected the Election Day allocation of resources, which brings up an interesting point.  Prior to ’04, and then ’08 and ’12, there really wasn’t much of a prioritization or targeting of resources on Election Day, that is, [01:07:00] a dynamic one.  You have your targets, and you’re active, and you know how far up or down you are from your margin based on the volume that’s being reported.  But they actually were using the voter list, and real-time updates of the voter list, to make allocation decisions in 2008 and 2012.  There was a little bit of that in 2004.  And in fact, when people talk about the Republican system breaking down in 2012, that’s what they’re referring to: this real-time feed of data kind of crashed the system.
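[Editor’s note: a toy sketch of the dynamic Election Day triage Shaw describes, that is, comparing the real-time voter-file feed against turnout targets and ranking where to push get-out-the-vote resources next. Precinct names and counts are hypothetical.]

```python
# Hypothetical mid-afternoon snapshot: identified supporters expected to have
# voted by this hour vs. the number the real-time voter-file feed says have voted.
precincts = {
    "Philadelphia-05": {"expected": 1200, "voted": 850},
    "Columbus-12": {"expected": 900, "voted": 880},
    "Reno-03": {"expected": 600, "voted": 610},
    "Green Bay-07": {"expected": 750, "voted": 640},
}

# Rank precincts by shortfall and send volunteers or surrogates there first.
def shortfall(name):
    p = precincts[name]
    return p["expected"] - p["voted"]

for name in sorted(precincts, key=shortfall, reverse=True):
    gap = shortfall(name)
    if gap > 0:
        print(f"{name}: {gap} targeted voters still out -- prioritize the turnout push here")
```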

But that’s — you know, we talk about the strategic targeting.  In the past, it had stopped on Election Day.  On Election Day, you’re just scrambling, it kind of — everybody’s going, you know, a hundred miles an hour.  Even that day has become strategic now, all right.  But as for the exit polls, you know, they were off by a couple points, and they were off particularly in some states.  One of the things that [01:08:00] the national exit poll discovered after the election was that the distance that exit pollsters were forced to stand away from the polls had an effect on these response rates, and the two states where the distance was the largest were Arizona, where they had a significant error because they had to be like 100 yards away, and Ohio.

So, you know, these conspiracy theories about, you know, the vote, I’ve thought are utterly irresponsible.  There are very, kind of, obvious explanations for what happened, and part of it was mechanical, you know, the way elections are administered in Ohio, and how far away the exit pollsters have to stand.  And it seems to be the case that, you know, Bush people were not as enthused about talking to people with these network exit poll badges on their chests as were Kerry supporters, and that reluctance was perhaps even exaggerated by the fact that Ohio has distance rules that are slightly more stringent than other states’.