Wednesday, March 12, 2014

Tas Labor Push-Polling? Not As Such, But ...

Wirrah Award For Fishy Polling (image source)


In the final week of the Tasmanian state election campaign, the Tasmanian ALP has been accused of push-polling.  This follows the apparent leaking, by forces unknown, of an internal Labor UMR poll.

I have also obtained the contentious poll, which was conducted by telephone interviews with a sample size of 300 voters in each of Lyons and Franklin.  The Liberal Party needs to win three seats in at least one of these electorates, and is very likely to do so in Lyons, but is lineball in Franklin.

The questions for the two electorates, asked on 5-6 March, appear in a results section entitled "Messaging".  Each statement is prefaced with the question "Does the following statement make you more or less likely to vote Labor in the state election or does it make no difference?"  The statements are:



"Tasmanian Labor will restore the school kids bonus to help low income families make ends meet, that will be axed by the Liberals and Palmer United."

and:

"Just as Tasmania is starting to pick up, the Liberals plan to axe more than one thousand two hundred jobs, will destroy confidence and see many families without a breadwinner."

The results are presented in a format that looks like this (this one's for Lyons):

[Image: table of the Lyons results, broken down as much less / little less / little more / much more likely]
In this case, 73% of respondents must have said it would make no difference, or that they did not know.  The figures for Franklin for the same question are 9-21 based on the same format (7% much less likely, 2% little less likely, 14% little more likely, 7% much more likely) for a net +12.

For the second question the figures in Lyons are 9-20 (4-5-12-8) giving +11 and for Franklin 6-21 (3-3-11-10) giving +15.
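(For anyone wanting to check the arithmetic, here's a quick sketch in Python that reproduces the nets above and the implied "no difference/don't know" remainders.  The four-category labels are inferred from the parentheticals, and the Lyons figures for the first question aren't included since they appeared only in the image.)

```python
# Net ratings implied by the leaked breakdowns. The four-category format
# (much less / little less / little more / much more likely) is inferred
# from the parentheticals quoted above.
results = {
    "Franklin, schoolkids bonus": (7, 2, 14, 7),
    "Lyons, jobs":                (4, 5, 12, 8),
    "Franklin, jobs":             (3, 3, 11, 10),
}

for label, (much_less, little_less, little_more, much_more) in results.items():
    less = much_less + little_less
    more = little_more + much_more
    net = more - less
    remainder = 100 - less - more   # said "no difference" or "don't know"
    print(f"{label}: {less}-{more}, net {net:+d}, no difference/DK {remainder}%")
```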

The charge from the Liberals is that this is "push polling [..] grubby, gutter tactics to try and launch another negative smear campaign" (Vanessa Goodwin).

The defence from Labor's John Dowling is that the results show "a large amount" of votes can still be shifted, and that the research indicates that "thousands of voters are more likely to support Labor when they receive this information".

Dowling also said the Liberals displayed a "complete lack of understanding of research methods" and that the claims were based in fact and "the challenge for the Liberal Party is to identify what in the statements is false".

Is this "push-polling"?

In its classic form, the notorious tactic of push-polling is conducted to peddle smear campaigns about the opposition, sometimes in the form of outright falsehoods, under the guise of a survey but with the sole real aim of changing the respondent's opinion.  The push-poll is entirely fake: its results are of no interest to the commissioning source, are typically not published, and in some cases are not even recorded.  The push-poll is normally not conducted scientifically, but with the aim of saturation-bombing the marketplace with as many calls as possible to get the message out.

However, push-polling is not the only form of polling-type contact that contains negative information.  Another form that I have lambasted here and elsewhere many times is what I call skew-polling, which involves using negative claims or leading-question survey design to distort results; an element of pushing is frequently involved in this process.  A third form is so-called "message-testing", in which a party wants to find out whether hearing a given claim makes voters change their votes, prior to releasing that claim more widely.

Message-testing polls often come under scrutiny because of their clear similarity to push-polls, but they're not strictly the same.  A message-testing poll will generally be conducted openly, with the company conducting it named.  The samples will be of a normal size, and the sampling process will be conducted randomly and competently. The information will be used by a party, albeit usually for the purposes of deciding which of a number of possible negative advertising tactics to run.

Message-testing polls are not push-polls as such, but the line is frequently blurred, and the worse end of the message-testing spectrum has exactly the same effect as a push-poll, on a smaller scale: respondents receive claims of a negative nature in the course of what they think is an opinion poll.  That it actually is an opinion poll used by a party to assist its attacks on rivals - as opposed to a fake opinion poll run for the same purpose - makes some difference to the ethics of it (in that at least the voter is not being lied to about the purpose of the survey), but not necessarily that much.  The pushing-type impact of message-testing polling raises similar issues if the messages are negative, false or misleading.

The American Association of Public Opinion Research discusses the differences between a classic push-poll and a typical message-testing poll at some length.  The current poll, on the leaked information available, ticks most of their boxes for a message-testing poll, and some for a push-poll.   The most obvious objection to it having pure push-poll status is the very small sample size.

The crux of the matter is this:

"Issues in Message Testing

Despite their legitimacy of purpose, message-testing surveys occasionally generate vigorous complaint. They are sometimes the subject of public controversy in political campaigns, and may appear in press stories about dubious campaign practices. AAPOR recognizes that message tests may need to communicate positive or negative information in strongly political terms, in a tone similar to campaign advertisements. Still, these surveys should be judged by the same ethical standards as any other poll of the public: Do they include any false or misleading statements? Do they treat the respondent with fairness and respect?"

An article by US pollster Mark Blumenthal on push polls vs message testing despairs of the dumbness that can take over when debates of this kind are reduced to arguing about whether something is a push-poll or not.  I could quote large chunks of Blumenthal's article but I think this one will suffice:

"The brain-dead way to approach these stories is to argue over whether the calls amount to a "push poll." As a campaign pollster, I helped design hundreds of surveys with similar tests of messages. So trust me when I say that all campaigns -- including the Obama campaign -- test positive and negative messages in their surveys. As I've written many times before, conducting a message testing poll does not absolve the pollster and the campaign from ethical obligations. The issue is not whether the pollster is trying to "push" the opinions, but whether they are telling the truth and treating their respondents with fairness and respect.

The way I wish reporters would approach these stories is to focus less on the "is-this-a-push-poll" angle and more on evaluating and debunking the charges they include."

In conclusion, it is not a push-poll as such, but it does engage in pushing for the sake of market research, and this raises some similar issues.

Are the poll messages legitimate?

The first question claims "Tasmanian Labor will restore the school kids bonus [..]".  The schoolkids bonus is a welfare payment, expected to be abolished by the Abbott Government but not yet abolished.  It presently pays $410 a year for primary school children or $820 for secondary school children, subject to receipt of Family Tax Benefit A.

Labor has pledged to "honour" the bonus, but it seems that it will do so at a lower level, since Labor is offering $100 a year for an eligible student, increased to $200 if they record a 95% attendance rate.  As such, it does not appear to be even close to a full restoration and the question seems somewhat misleading.  However, the misleadingness is at least confined to a positive policy rather than an attack.  (Note: I have not yet been able to identify for sure whether PUP support the removal of the bonus, though they do support rescinding the Minerals Resource Rent Tax with which it came in.)

The second question claims the Liberals intend to axe more than 1200 jobs.  There is certainly keen interest in just how many jobs the Liberals intend cutting, given that they have a known policy to cut the size of the public service by the equivalent of 500 full-time positions.  They also have policies to increase employment by over 300 in nursing, teaching and policing combined; to deliver a net cut of 500 on top of those increases would require gross cuts of around 800 positions in other areas (a sum Will Hodgman has been very reluctant to enter into in interviews).

The source of the 1200 jobs estimate is none too convincing: "public sector unions" (who, whether independent as claimed by the Premier or not, have an obvious bias in the matter).  Even if it is accurate, the wording of the poll reads as if this involves moving employees from employed to unemployed status ("without a breadwinner"), when the 1200 estimate includes part-time positions.  Then there is "will destroy confidence", which is simply an ambit claim of opinion.  So this is basically a highly contestable attack-scare message by Labor, albeit no more dubious than things said by many other parties in the course of any campaign.

Of course it's quite reasonable for them to want to test whether it works.  The extent to which such testing is ethical, or amounts (intentionally or not) to pushing by another name, depends on whether appropriate disclaimers were made to the effect that the statements were just claims and not necessarily facts.  So far there has been no indication of what respondents were told before they were asked the question.

Is the research useful?

And here we come to what I see as the biggest problem with Labor's defence.  Even if we conclude that this research has not a whiff of push-poll about it and was entirely appropriate and fair, a core problem is Labor's fondness for hypothetical polling despite the overwhelming evidence that it does not actually work.  Probably a lot of Labor's electoral pain, both state and federal, over the past four or five years has come from a culture of belief that hypothetical polling is a lot more useful than it is.

Suppose I told you that I was researching a possible change to a product that I thought would make people feel more comfortable using it.  I had done a market research survey asking people whether this change would make them more or less likely to use the product, and the results were as follows: 2.1% said they would become much more likely to use the product, 7.2% somewhat more likely, 2.1% somewhat less likely, and 25.1% much less likely.  Now, if you were following the approach of this sort of poll, you might report that the proposed change had a net rating of -17.9 and was therefore a terrible idea; if the change was implemented, use of the product would drop.

These results are in fact real results from a poll by British researcher Survation, a hybrid pollster using a similar range of methods to Morgan.  The research question was whether people would be more likely to take currently illegal drugs if they were legally available.  A quarter of respondents said that this would make them much less likely.  When the survey was broken down by previous drug-taking experience, it turned out that the net rating for the change among those who had never taken the drugs in question was -26.1, but among those who had at some stage taken them it was +1.8.  This was despite 11.4% of those who had taken drugs in the past (presumably those who had given up plus a tiny minority of jokers and seekers of illicit thrills) also saying they would be much less likely to take drugs if they were legal. 
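As a back-of-envelope check (the subgroup shares weren't quoted, so this assumes the headline figure is simply a weighted average of the two subgroup nets), the split of the sample implied by those figures is roughly 29% past takers to 71% never-takers:

```python
# Back-of-envelope decomposition of the Survation headline net.
# Subgroup shares are NOT quoted above, so we infer them on the simple
# assumption that the headline is a weighted average of the two subgroups.
net_overall = (2.1 + 7.2) - (2.1 + 25.1)   # more-likely minus less-likely = -17.9
net_never = -26.1    # never taken the drugs in question
net_ever = 1.8       # have taken them at some stage

# Solve w*net_ever + (1 - w)*net_never = net_overall for w.
w = (net_overall - net_never) / (net_ever - net_never)
print(f"headline net: {net_overall:+.1f}")
print(f"implied share who had taken the drugs: {w:.0%}")   # about 29%
```

On that reading, the never-takers' emphatic "much less likely" responses account for virtually all of the negative headline figure.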

What happened in that case was that many of the respondents didn't really answer the question that was asked.  They said "much less likely" when in fact they had no intention of ever taking drugs (or ever taking them again) in the first place, and their real response should have been that it would make no difference.  The urge to dismiss drugtaking utterly (including a tactical desire to oppose its legalisation) overrode any interest in providing a careful, sensible, rational response to the polling question.

To get a sensible answer to the Survation drug question, the pollster would have needed to ask some screening questions first.  They could have first weeded out those who would not consider using drugs whether they were illegal or not, and also those current users who said that they would not consider quitting in either case.  Results from the rest of the sample would have been more useful and would almost certainly have shown that there were some prospective drug-users (who may or may not have partaken before) who were scared off using drugs by their illegality or by consequences of that illegality such as price or safety issues.  A few liars would still sneak through the net, but on the whole the results would be cleaner.
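To make the screening idea concrete, here is a toy sketch (hypothetical data, not Survation's actual instrument) of filtering out the locked-in respondents before scoring the hypothetical:

```python
# Hypothetical illustration of the screening approach described above:
# ask commitment questions first, then put the hypothetical only to
# respondents whose behaviour could actually change.
respondents = [
    {"would_never_use": True,  "committed_user": False, "response": "much less"},
    {"would_never_use": False, "committed_user": False, "response": "somewhat more"},
    {"would_never_use": False, "committed_user": True,  "response": "no difference"},
    {"would_never_use": False, "committed_user": False, "response": "much more"},
]

MORE = {"somewhat more", "much more"}
LESS = {"somewhat less", "much less"}

# Screen out those whose behaviour is fixed either way; their answers to
# the hypothetical are expressive rather than predictive.
persuadable = [r for r in respondents
               if not r["would_never_use"] and not r["committed_user"]]

net = (sum(r["response"] in MORE for r in persuadable)
       - sum(r["response"] in LESS for r in persuadable))
print(f"net among the persuadable: {net:+d} of {len(persuadable)} respondents")
```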

That is an extreme example, but similar problems apply to all hypothetical polling.  Gung-ho locked-ins on either side of a party divide will say that a policy makes them more likely to vote X simply because X is their party - when in fact they will vote X whether X has that policy or not.  Also, why should a statement that the Liberals intend chopping 1200 jobs and ending the world as we know it make anyone less likely to vote Labor?  Yet several percent of voters, presumably locked-in Labor-haters (or voters expressing a backlash against a scare campaign that wouldn't actually change their vote either way), said that it did.

There's a good roundup of the deficiencies of most hypothetical polls on the UK Polling Report website here.  Voters, when asked if a single issue will affect their vote, are placed in an artificial situation in which they consider that issue by itself.  In reality they will consider many issues, with certain big-picture issues dominating.  Undue prominence magnifies the measured impact of an issue, because the voter will over-report how much difference it makes to their intention.  This especially applies when the issue is framed so as to encourage an instinctive "er, yuk" response to one side of the question or the other.

Not only are some voters flatly dishonest (or too emotional to think rationally) in responding to these sorts of questions, but voters have also been shown again and again to be poor at predicting how their vote will actually change.  Leadership polling in which voters are asked who they would vote for if X was leader is a classic example.  In the leadup to the return of Kevin Rudd as PM, all such polls overestimated the vote Labor finished up with.  They more accurately presaged the height of Rudd's honeymoon bounce, and some overshot even that.

The schoolkids bonus question is a bit like the "Would you like the government to give you a pony?" example mentioned in the UK Polling Report article.  It sounds great, but if the question was really "Tasmanian Labor will spend $14 million to replace less than half of the schoolkids bonus being axed by the federal Coalition" then people would react more equivocally.  Of course, the point of this particular poll is to predict how voters hearing only a slanted, oversimplified message would respond to it.  But a large percentage of voters will never receive that message, and those who do will often receive it against a backdrop of criticism of the government that is pushing the policy in question.

My experience of hypothetical questions of this kind is that just about anything you can think of that might be seen as a positive for one side, no matter how specialised or minor the issue actually is, is capable of drawing a response of 20% or so saying it would make them more likely to vote a given way.  So a positive response doesn't prove much about what works and what doesn't, because just about any issue can create a false positive.  Indeed, judged against that baseline, the hypothetical effect sizes found in this poll are actually surprisingly small.  To get the real effect size, divide the claimed one by at least five, and then treat what remains with a great deal of suspicion, because a lot of policies that test positively on this sort of question have an impact on hardly anyone when it comes to the ballot box.
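Applied to the leaked UMR nets, that rule of thumb (and it is only a rule of thumb) shrinks the headline figures to very modest proportions:

```python
# Illustrative only: the discount heuristic above applied to the leaked
# nets. Divide the claimed effect by at least five, then regard even the
# remainder with suspicion.
claimed = {"Lyons, jobs": 11, "Franklin, schoolkids bonus": 12, "Franklin, jobs": 15}
for label, net in claimed.items():
    print(f"{label}: claimed {net:+d}, discounted ceiling about {net / 5:+.1f} points")
```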

Recently Labor threw the Greens out of cabinet and claimed to have internal polling showing this decision would increase its vote by 3-4 points.  Yet this was clearly a failed prediction.  The message of such subsequent polls as have been released (compared with those taken before) is that if any party has gained votes following the decision it is the Greens, and that Labor has if anything lost votes, certainly not gained them.  If Labor outperforms current polling by 3-4 points or more when the votes are counted, this won't mean that the benefits of turfing the Greens accrued suddenly in the last days of the campaign after months of not raising a blip.  It will just mean that all the polling was wrong and consistently underestimated the party's baseline vote (edit added post-election: or that there was a late swing for some other reason).

If Labor really think that any specific negative message they will attempt in the saturated environment of the last week of the campaign has the potential to swing big numbers of votes then I suspect they are clutching at straws.  They have little to lose by throwing whatever mud they can find (within reason) at this stage, but past ailing Labor state governments have tried the same thing, doubtless following similar testing, and only dug their own holes deeper.  If Labor were really so successful at using internal polling to read the public mood and identify winning strategies, I don't think they'd be where they are today.



2 comments:

  1. Do these robocalls allow the organisation making them to determine the average time the receiver stays on the line to listen to the message? It would obviously be a useless exercise if, say, 80% of people hung up within the first 5 seconds.

  2. One thing I would wonder about is why Labor didn't get this "message polling" sorted out well before the campaign, to try to find a few lines to run with hard during the past month.

    Seems a bit pointless waiting until the final week of the campaign before testing their messages, if that's really what they were trying to do.


The comment system is unreliable. If you cannot submit comments you can email me a comment (via email link in profile) - email must be entitled: Comment for publication, followed by the name of the article you wish to comment on. Comments are accepted in full or not at all. Comments will be published under the name the email is sent from unless an alias is clearly requested and stated. If you submit a comment which is not accepted within a few days you can also email me and I will check if it has been received.