Guest post by Nigel Hawkes.
Healthcare reform is controversial, as both the US and the UK have found. In Britain, a chorus of protest has been generated by a Bill to reform the National Health Service. Some of the most powerful interventions have come from the Royal Colleges – highly esteemed bodies that exist to promote and improve the practice of different medical specialties.
I’ve been struck not by the positions taken, which are strongly opposed to the reforms, but by the evidence used to support them. The Royal College of General Practitioners asked its 44,000 members through its website whether the college should call for the withdrawal of the Bill. Just 3,120 responded (7 per cent), with 1,760 of them backing the call for withdrawal. The Royal College of Psychiatrists, in a similar poll, achieved an 11 per cent response rate.
It’s astonishing that bodies which rightly call for gold-standard evidence to determine what treatments are best for their patients should cite evidence as feeble as this in their attack on the health reforms. A response rate this low means that only the highly motivated are bothering to respond, and they are those most likely to oppose the changes. In the RCGP survey, 1,223 respondents skipped the question altogether, which hardly suggests opposition is universal.
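For anyone who wants to see how thin that evidence is, here is a quick illustrative calculation (a sketch in Python, my own working rather than either college’s, using only the figures quoted above):

```python
# Back-of-the-envelope check of the RCGP poll figures quoted above (illustrative only).
members = 44_000           # RCGP membership polled via the website
responded = 3_120          # members who answered
backed_withdrawal = 1_760  # respondents backing withdrawal of the Bill

print(f"Response rate:                  {responded / members:.1%}")            # ~7.1%
print(f"Respondents backing withdrawal: {backed_withdrawal / responded:.1%}")  # ~56.4%
print(f"All members backing withdrawal: {backed_withdrawal / members:.1%}")    # ~4.0%
```

On those numbers, only about one member in twenty-five actually backed the call for withdrawal.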
Mind you, I’ve known worse polls. The British Medical Association in Scotland made claims about the cost of alcohol-related conditions in general practice in the whole of Scotland based on a loosely worded survey of just 3 per cent of practices, which selected themselves.
Polls, surveys, and research in general have a seductive attraction for those in PR. There’s no message that can’t be given greater impact by a well-chosen statistic or two. Unfortunately, too many PR professionals are shameless in the way they generate the figures, and too many journalists credulous in the way they report them. Online polling has made “surveys” cheap and easy to organise – who cares if they are nonsense?
Charities are among the worst offenders, apparently believing that the purity of their motives makes up for the inadequacies of their research. The one that has made me angriest recently is Dr Barnardo’s, which published a survey showing, it claimed, that adults believe British children are “feral”, “beginning to behave like animals” or “angry, violent and abusive”. The claim was based on a poll, conducted by a reputable polling company, which asked adult respondents: “Below are a number of comments made about young people in the UK. Could you tell us how much you agree or disagree with each of the statements?”
It then offered three statements:

1. Children in this country are becoming feral.
2. British children are beginning to behave like animals.
3. The trouble with youngsters is that they are angry, violent and abusive.
All three are strongly negative opinions; respondents were offered no positive statements to react to. Why did the charity pose such tendentious, leading questions? So that it could present itself, in a new advertising campaign, as the true champion of children faced with an uncaring world. I can’t think of a better way to give research a bad name.
Sometimes the research conclusions are laughable, but that doesn’t prevent them from getting media coverage. Take the claim made last September by the PR and social media agency Umpf that more than half of UK pensioners use Facebook. Respondents for this survey were solicited by e-mail, so it reached only those who were already online. You cannot measure social media usage with an online method, because – obviously – it excludes those who aren’t online. In response to a question about this, Umpf said “We don’t think this would have skewed the results particularly.” How can they possibly know?
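A toy calculation, with invented numbers of my own, shows the direction and rough size of the bias an online-only sample can introduce:

```python
# Invented, illustrative figures only: why an online panel can inflate
# "share of pensioners who use Facebook".
online_share = 0.40           # suppose 40% of pensioners are online at all
facebook_given_online = 0.60  # and 60% of those who are online use Facebook

facebook_among_all = online_share * facebook_given_online

print(f"What an e-mailed survey would report: {facebook_given_online:.0%}")  # 60%
print(f"True share of all pensioners:         {facebook_among_all:.0%}")     # 24%
```

With those made-up figures, the online survey more than doubles the true proportion: exactly the kind of skew Umpf was so confident had not happened.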
Bad surveys and dodgy polls are, of course, only one way in which research can be twisted. Here are a few more: selective quotation, “cherry-picking” the evidence, careful choice of a starting point for comparisons, making claims of a trend on the basis of a couple of years’ data, plotting graphs that lack a zero on the vertical axis, misleading extrapolations, omitting data points that don’t fit in the hope that nobody notices, choosing the extreme number from a range rather than the most likely number, using means when medians would be more appropriate, and “salami-slicing” the data (if the unemployment rate doesn’t fit with what you want to say, look at the unemployment rate for women, or for young people, or for young women, or for young women in part-time work – there’s bound to be one that will satisfy).
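To take just one item from that list, here is a small illustrative example (invented salary figures, purely to show the mechanism) of how quoting a mean rather than a median can flatter a skewed distribution:

```python
# Invented salaries: a few large values drag the mean well above what a typical person earns.
from statistics import mean, median

salaries = [18_000, 19_000, 20_000, 21_000, 22_000, 23_000, 250_000]

print(f"Mean salary:   £{mean(salaries):,.0f}")    # £53,286, pulled up by the one outlier
print(f"Median salary: £{median(salaries):,.0f}")  # £21,000, the middle of the distribution
```

Quote the mean and the press release tells one story; quote the median and it tells quite another.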
Research is a valuable, powerful tool. Used properly, it can amplify almost any message. Misused, it can drag the profession into disrepute.
Nigel Hawkes is a science and health journalist who has worked for The Observer and The Times. Since 2008 he has been Director of Straight Statistics (@straight_stats), a campaign group for the honest presentation and use of statistical data by government, the media, industry and advertisers. He is a columnist and regular contributor to the British Medical Journal.