PR use of statistics on trial – where’s your evidence?

Guest post by Nigel Hawkes.

Healthcare reform is controversial, as both the US and the UK have found. In Britain, a chorus of protest has been generated by a Bill to reform the National Health Service. Some of the most powerful interventions have come from the Royal Colleges – highly esteemed bodies that exist to promote and improve the practice of different medical specialties.

I’ve been struck not by the positions taken, which are strongly opposed to the reforms, but by the evidence used to support them. The Royal College of General Practitioners asked its 44,000 members through its website whether the college should call for the withdrawal of the Bill. Just 3,120 responded (7 per cent), with 1,760 of them backing the call for withdrawal. The Royal College of Psychiatrists, in a similar poll, achieved an 11 per cent response rate.

It’s astonishing that bodies which rightly call for gold-standard evidence to determine what treatments are best for their patients should cite evidence as feeble as this in their attack on the health reforms. A response rate this low means that only the highly-motivated are bothering to respond, and they are those most likely to oppose the changes. In the RCGP survey, 1,223 respondents skipped the question altogether, which hardly suggests opposition is universal.
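
A quick back-of-envelope calculation shows just how thin this evidence is. Here is a minimal sketch in Python, using only the figures quoted above: the 1,760 members backing withdrawal are a majority of respondents, but a small fraction of the college.

```python
# Back-of-envelope check on the RCGP poll figures quoted above.
members = 44_000           # total RCGP membership
respondents = 3_120        # members who answered the online poll
backed_withdrawal = 1_760  # respondents backing withdrawal of the Bill

print(f"Response rate:  {respondents / members:.1%}")            # ~7.1%
print(f"Of respondents: {backed_withdrawal / respondents:.1%}")  # ~56.4%
print(f"Of all members: {backed_withdrawal / members:.1%}")      # ~4.0%
```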

Mind you, I’ve known worse polls. The British Medical Association in Scotland made claims about the cost of alcohol-related conditions in general practice in the whole of Scotland based on a loosely-worded survey of just 3 per cent of practices, which selected themselves.

Polls, surveys, and research generally have a seductive attraction for those in PR. There’s no message that can’t be given greater impact by a well-chosen statistic or two. Unfortunately, too many PR professionals are shameless in the way they generate the figures, and too many journalists credulous in the way they report them. Online polling has made “surveys” easily organised and cheap – who cares if they are nonsense?

Charities are among the worst offenders, apparently believing that the purity of their motives makes up for the inadequacies of their research. The one that has made me angriest recently is Dr Barnardo’s, which published a survey showing, it claimed, that adults believe British children are “feral”, “beginning to behave like animals” or “angry, violent and abusive”. This was based on a survey, conducted by a reputable polling company, which asked adult respondents: “Below are a number of comments made about young people in the UK. Could you tell us how much you agree or disagree with each of the statements?”

It then offered three statements:

1. Children in this country are becoming feral.
2. British children are beginning to behave like animals.
3. The trouble with youngsters is that they are angry, violent and abusive.

All are strongly negative opinions. Respondents were given no positive statements to respond to. Why did the charity pose such tendentious, leading questions? So that it could present itself in a new advertising campaign as the true champion of children faced with an uncaring world. I can’t think of a better way to give research a bad name.

Sometimes the research conclusions are laughable, but that doesn’t prevent them from getting media coverage. Take the claim made last September by the PR and social media agency Umpf that more than half of UK pensioners use Facebook. Respondents for this survey were solicited by e-mail, so it reached only those who are already online. You cannot measure social media usage with an online method, because – obviously – it excludes those who aren’t online. In response to a question about this, Umpf said: “We don’t think this would have skewed the results particularly.” How can they possibly know?
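
The scale of the distortion is easy to illustrate. Here is a minimal sketch with invented numbers – the shares below are assumptions chosen for illustration, not Umpf’s actual data – showing how an e-mail-only sample can more than double the apparent Facebook usage:

```python
# Hypothetical shares, for illustration only (not Umpf's data).
online_share = 0.40        # assume 40% of UK pensioners are online at all
facebook_if_online = 0.55  # assume 55% of online pensioners use Facebook

# An e-mail survey can only reach the online group, so it reports:
survey_estimate = facebook_if_online           # 55% - "more than half"

# The true population rate must also count offline pensioners,
# none of whom can be using Facebook:
true_rate = online_share * facebook_if_online  # 22%

print(f"Survey estimate: {survey_estimate:.0%}")  # 55%
print(f"True rate:       {true_rate:.0%}")        # 22%
```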

Bad surveys and dodgy polls are, of course, only one way in which research can be twisted. Here are a few more: selective quotation; “cherry-picking” the evidence; careful choice of a starting point for comparisons; claiming a trend on the basis of a couple of years’ data; plotting graphs that lack a zero on the vertical axis; misleading extrapolations; omitting data points that don’t fit, in the hope that nobody notices; choosing the extreme number from a range rather than the most likely one; using means when medians would be more appropriate; and “salami-slicing” the data (if the unemployment rate doesn’t fit with what you want to say, look at the unemployment rate for women, or for young people, or for young women, or for young women in part-time work – there’s bound to be one that satisfies).
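
One of these tricks – quoting means where medians would be more appropriate – is worth a worked example. A minimal sketch with invented salary figures: a single large outlier drags the mean far above what the typical person earns, which is exactly why the flattering number gets quoted.

```python
from statistics import mean, median

# Nine modest salaries plus one very large one - invented figures
# illustrating a typical right-skewed distribution.
salaries = [21_000, 22_000, 23_000, 24_000, 25_000,
            26_000, 27_000, 28_000, 29_000, 250_000]

print(f"Mean:   {mean(salaries):>7,.0f}")    # 47,500 - inflated by the outlier
print(f"Median: {median(salaries):>7,.0f}")  # 25,500 - the typical salary
```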

Research is a valuable, powerful tool. Used properly it can amplify almost any message. Misused, it can drag the profession into disrepute.


Nigel Hawkes is a science and health journalist who has worked for The Observer and The Times. Since 2008 he has been Director of Straight Statistics (@straight_stats), a campaign group for the honest presentation and use of statistical data by government, the media, industry and advertisers. He is a columnist and regular contributor to the British Medical Journal.

10 Replies to “PR use of statistics on trial – where’s your evidence?”

  1. When Harold Lasswell, some 60 years ago, posed the question “Who says what in which channel to whom and with what effects?”, he was advocating that people needed to understand the “stream of influence that runs from control to content and from content to audience”. Never has that cautionary statement been more true than in the case of the modern use of statistics.

    There are three things that worry me in particular:
    1. The general ignorance of PR practitioners, journalists and other decision-makers, gatekeepers and influencers in respect of research and its statistical underpinnings (I’ll say nothing of the inability of these groups and the general public to understand qualitative research, which I feel is at an even lower level).

    2. The presentation of statistics to the public in simplified form, with absolutely no attempt to contextualise, question or otherwise indicate that such data do not reflect an absolute truth.

    3. The infographic trend that conflates and further distorts statistics. Whilst entertaining and offering an easy way to look at data, most infographics make little attempt to examine original sources; through extrapolation and other statistical errors, even a number that may have some value gets twisted and re-presented, then combined with other data (with which it is probably not compatible) to create a sound-bite numerical representation that is then communicated further with even less context.

    Just as SatNav erodes human understanding of geography and of place vis-à-vis the external environment, so the use of data by PR people and journalists (helped in no small part by the clever infographic) removes human beings from their understanding of mathematics and the numerical representation of the world. IMHO!

    1. Heather: Interesting that you quote Lasswell’s much-criticised “direct effects” comms model – also variously described as the “silver bullet” or “injection” model – which presupposes direct effects of media messages on the poor “receiver” at the other end.

      An alternative perspective sees the receiver as an intelligent reader who consumes media messages from his/her own circumstance, context etc – also called the “uses and gratifications” perspective. Further, media messages (as in literary theory) can have various “readings” or interpretations, if you wish; these vary from the intended reading to the actual reading by the receiver of the message.

      Seen from this perspective, the reader doesn’t need to be “spoon-fed” with interpretations. He or she is quite capable of interpreting given messages from his/her own perspective and context.

      A bit of a digression, but couldn’t let Lasswell’s historical comms model pass without comment.

      1. Don – I was using Lasswell for cautionary purposes rather than accepting the hypodermic model, which is implicit in theories of propaganda, for example. The reader is often (although not always) aware of the nature of media messages and hence an active participant in the communications process – agreed – but as discussed here, even supposedly intelligent journalists can tend to accept data without checking the ‘who says’ aspect and what effect the source is seeking to achieve with its communications. We may be capable of interpreting given messages – but not if the source or its intentions have been obfuscated rather than presented clearly.

  2. Yes, Don.
    One case in point: a few years ago one major Italian daily (Corriere della Sera) announced that a highly reputed research analyst (Prof. Renato Mannheimer) had agreed to assist the paper’s journalists in reading and correctly interpreting the tons of research-based material they received.
    After three months of that exercise the reputed analyst publicly stated that he would not continue the assistance.
    Reason?
    He had never in his life seen so much trash and so much bias presented as ‘scientific’, and he concluded by recommending that the paper renounce the use of any material based on so-called research.
    The advice, of course – given the many economic and political interests involved – was never taken up by the paper.

  3. This exciting and stimulating post opens a whole can of worms that relate to our profession as well as to others (scientists, statisticians, market and polling researchers, journalists….).

    Finally… the overall issue is raised, and not by one of us.

    Whatever we, as PR people, did throughout the twentieth century – in terms of adopting, adapting, creating, manipulating and interpreting data on a wide range of subjects (science, opinions, consumer behaviours, ideas, policies, politics etc.) – pales in comparison with what we are doing now, and will probably be doing in the near future, to deliver chaos directly to publics and stakeholders, as well as through both mainstream and social media (if such a distinction still makes any sense).

    In my view the whole market and opinion research industry (in the sense of both the supply of and the demand for its services) needs a radical overhaul and rethinking.
    The common, generalised use and misuse of research data is a major source of the misleading and misread figures with which so-called opinion leaders shape societal as well as individual policies, ideas, opinions and decisions – and it raises one hell of a huge regulatory issue.

    Just a few of the sub-issues involved:

    * Who is aware that today, at least in my country (Italy), there is an average 80% refusal rate when polling companies seek interviews? What sort of agreed-upon correction factors are included in the interpretation of the resulting data, when only twenty years ago the refusal rate was stable at around 30%?

    * Who is aware that today the very concept of a representative sample has gone baloney? Representative of what, of whom? When physical mobility has gone up at least 300% in the last ten years, when landline phones are hardly used any more, and when mobile phones are anything but classifiable into traditional socio-geographic patterns?

    * Who is aware that the loss of trust in institutions, ideologies and points of reference means that individuals change their answers to the same question on the same day, according to the specific profile they are inhabiting at that moment (for example, opinions on urban traffic solutions differ according to whether the same individual is at that moment a pedestrian, a driver or a public transport user)?

    * How many organizations (public, private or social – it’s all the same) commission market or opinion research for any reason other than to receive confirmation of their own opinions, and to have three or four PowerPoint slides with which to ‘cover their ass’ at the next board meeting? Nostalgia insists that research was once used to modify, create and change products, services and ideas, and only sometimes also used for PR purposes…

    * How many editors have ever asked their sources to look at the tables of data, or required a more integrated analysis, before publishing what the release says – material manipulated by the public relator on behalf of the interest he/she represents?

    * How many so-called scientists or experts respond over the phone (their number and name supplied to the journalist by the public relator) and blast off opinions with no hands-on knowledge of the full research exercise?
    I could go on forever…
    This is truly one hell of an issue for us.
    Your opinions?

    1. As a former journalist on a daily and on a weekly trade publication, I now recognize the errors we made (and that are still being made) when reporting stories backed by statistics from a given source.

      Having said that, one has to take into account (no excuses here) the space, time and deadline constraints under which journalists report statistically based stories.
      They don’t have the luxury of time enjoyed by an analyst or expert commentator, who has more scope to analyse the material.

      Journalists normally get around this by attributing the material to the source (as if this exonerates them from the task of assessing the statistics independently!). The sentence reads something like: “Company ABC claims its revenues rose by 100% to…”, without caring to mention the basis of that calculation.

      Fortunately, there are, in every media market, business publications that analyse statistics seriously for readers who need analysis. But in the tabloids there is neither the space nor the time to delve into the statistical basis of the claims that are made.

  4. Nigel, thank you so much for agreeing to do a guest post on PR Conversations. This is such an important topic for people in public relations to be thinking about. (As it happens, one of my three deliberate areas of focus at the recent Social Media Week Toronto event was using “data.” So this was really timely, complementing what I learned there.)

    Secondly, I wanted to say: how could people not adore a post that uses both “tendentious” (which I needed to look up) and “dodgy”?

    Thirdly, I wanted to mention two things that make me crazy:

    1. Taking surveys where the questions asked make it obvious to me what the organization hopes to get out of the compiled answers and information. (Leading and rather subjective questions.)
    2. In the (newer) social media “influencer” rankings, when it becomes obvious that the programmers are rejigging the algorithms so that the “type” of person they want to come out on top, does! (One such monitoring company freely admitted in a blog post that it had reworked the algorithm so that individuals with large followings did not have to “tweet about relevant topics” (to that listing) as often. Ergo, they could be “influential” with much less focus on relevant topics. Whatever.)

    Lastly, a question for you, please: in your informed opinion, is data more often manipulated (or misread) for a POSITIVE or a NEGATIVE result?

    1. Most PR aims to project a positive message, so I guess if that is where research is twisted, it’s in a positive direction. But the opposite may be true for campaigning groups, which include charities, public health bodies, and environmentalists, whose aim is to justify their intervention by making a problem appear worse. For instance, the UK is awash with exaggerated claims about alcohol at the moment. False or overstated claims tend to lead to bad policy.
