Why don't PR Awards walk the talk on evaluation?

The Barcelona Declaration of Measurement Principles for public relations has been discussed and even endorsed by professional bodies and industry publications around the world – but there’s little evidence of “walking the talk” if you look at the Award programmes they run.

In particular, use of AVE is neither prohibited nor penalised, while a lack of clear, measurable objectives (ideally based on research), as noted in a post by Sean Williams, compounds the failure of Award entries to reflect professional best practice.  This is a huge concern for those of us in education, since Award winners should be viewed as exemplar case studies – but too often they are of value only as material to critique.

It is hard not to believe that the PR profession follows the quotation often attributed to Lewis Carroll:

“If you don’t know where you are going, any road will get you there.”

So campaigns can be deemed a success simply because a vague destination (i.e. coverage) has been achieved – backed by calculations of AVE or spurious “PR Value”.

The Principles reflect a solid business basis to assess the value of PR – why is this so hard to reflect in campaigns that are worthy of considerable client budgets, let alone recognition by the profession as Award winners?

That outputs are easier to control and measure than outcomes does not make measuring them alone acceptable practice – and certainly not in Award entries.  Best practice Award winners have to be those that ensure budget is allocated for pre- and post-campaign research – otherwise how can such activities claim to increase awareness, change attitudes or influence behaviour if no benchmark or resulting measures are recorded?

Many companies do actually undertake the type of evaluation called for by the Barcelona Principles. Attitude and behavioural research is used to assess marketing of new products, to mystery shop retail experiences or to gather employee feedback. Public opinion is routinely assessed on a range of issues.  Surely there is a similar opportunity to research the impact of PR campaigns with a wide range of stakeholders – based on solid foundations that also champion research into the problem or situation facing any organisation for which PR is argued as the solution.

Rather than PR research being a byword for questionable surveys to generate media headlines, it is about time it reflected a standard of planning evident in other management disciplines.  There are many useful free resources available online, for example, at the Institute for Public Relations site which provide insight into setting objectives, measurement and benchmarking.

We can keep making excuses for why AVE and other such approaches continue to dominate in PR – or those in charge of putting the spotlight on consultancies and practitioners in Award programmes can get tough and only accept entries that reflect best practice in research and evaluation.

I know what I’d like to see…

12 Replies to “Why don't PR Awards walk the talk on evaluation?”

  1. Brilliant post, Heather, thanks. I couldn’t agree more. I’ve judged both IABC (Toronto) and, recently, PRSA (Washington), and, though there is some fantastic, creative, deserving work, the wheels certainly fall off when it comes to the formative research and post-campaign evaluation section of the submission. It is very frustrating. Encouragingly, the judging criteria for PRSA (Washington, anyway) were set up to discount submissions that hadn’t adequately addressed research or measurement. I imagine most sets of criteria include research and measurement. My concern is the extent to which and how consistently the judges adhere to them. 🙂

    1. Spot on Heather. I’m so glad you posted on this because I’ve thought this for ages and was starting to think I must be missing something!

      (In a shameless plug I’ve just posted about outputs and outcomes here http://goodgreenpr.blogspot.com/2010/09/time-to-spend-wisely-on-communication.html)

      I think you also correctly identify the answer. Measuring and reporting seemingly impressive numbers is so much easier than finding out what the underlying effects are.

      I’m also grateful to the two Js, and will be reading up a bit more on MRP.

  2. Award schemes – like professional education, sponsorships and special projects – are, beyond membership dues, the major revenue streams for professional associations. Their directors and CEOs are conservative and tend to resist pressures to change for fear of upsetting the golden goose.

    I recall a study presented some years ago in Bled by a German scholar (Barbara Bearns?) demonstrating that award schemes stifled innovation… very convincing…

    Heather’s call is based on the same argument: if it is true that awards stimulate conformism, then at least use this for desirable conformism…

    1. Toni – that study sounds interesting (couldn’t find it on the Bled site or online unfortunately). I hadn’t thought about award schemes as stifling innovation – but I suppose the argument is that people focus on the idea and not the rationale for why something is (or should be) thought of as exemplary. That’s probably why we end up with so many copycat campaigns – with people assuming that because something worked before, for another organisation which may or may not have faced the same issue, it must work again.

      I’m not necessarily arguing for conformism beyond evidence of a basic management process of determining what needs to be achieved, explaining the how and why of the solution that was executed, and then demonstrating why it was successful in addressing the original objectives.

  3. Hi Heather,

    You are spot on and this point is even more valid as we move into the next couple of months when brands/organisations will be setting their budgets for the next financial year.

    Those marketers and PR professionals who can’t draw a straight line between their communications and their organisation’s business objectives (usually money in the business world) will be given short shrift.

    We are increasingly seeing this woolly application of evaluation in the age of social media, with marketers claiming that numbers of Twitter followers, unique visitors to a website, etc. are – in and of themselves – outcomes.

    I write about this here: http://t.co/lyWZhaJ

    1. Michael – I couldn’t agree more. With the Award categories mentioned in reply to Stuart, the social media evaluation was just as you’ve indicated – followers etc. Even in the case of counting followers, there was no attempt to remove duplicates, because the bigger the number the better. So one Twitter account could have been counted dozens of times if that person follows many of the accounts involved, or if the Tweet was circulated within a fairly closed group of contacts. Likewise, there is little analysis of Facebook groups – let’s just count the number of people joining, regardless of their motivation or level of engagement (or lack of it).

  4. Heather, are you familiar with the Media Relations Rating Points system that CPRS developed (mentioned by Janet above)? From the CPRS website:

    “MRP is the new standard for evaluating editorial media coverage in Canada. Developed by the CPRS Measurement Committee, this new system is designed to make it easy for communications professionals to measure, evaluate and report the results of media relations campaigns.

    The primary objective of creating the MRP system was the development of a simple, standardized reporting system that can be widely accepted and utilized with ease to measure any type of editorial coverage (i.e. print, broadcast, online) stemming from proactive media relations campaigns, crisis communications or unplanned media attention. The MRP system includes a media report template, rating system and tool for obtaining up-to-date accurate reach numbers.

    The MRP system analyses editorial media coverage by tone, customized criteria and cost-per-contact. Key to the MRP system is the reach data, which is being provided by News Canada via a subscription internet service. News Canada won a competitive CPRS Request for Proposal (RFP) competition in November 2005 and is the sole authorized supplier of reach data for the MRP system.”

  5. Timely post. Beyond the awards ceremony is reality. Demonstrating ROI is no longer an option for the PR profession but a requirement of clients and businesses.

    I am a member of CPRS and IABC and seeking accreditation with both organizations. I want to ensure value for my clients based on a well-rounded foundation of industry best practices.

    The Canadian arms of the two professional communications associations have jointly endorsed MRP2, a consistent metric for standardized reporting of editorial media coverage and ROI. Can we also bring down the other silos between us and work together on measurement and evaluation? IABC currently leads in this area.

    This is a time of sea change in professional communications with social media impacting our work in ways previously unimaginable. I think we have more in common at this juncture in time than differences. The mutual gains of sharing best practices can only enhance our ability to perform excellent contemporary communications management.

    1. Janet – thank you for pointing this out (and Judy for explaining further). It is useful to see a drive for a standardised measure (which seems to also allow for input of objectives so not just assessing coverage in itself). However, we should remember that such tools are still only looking at outputs and as such are not really evidence of “excellence”. The circulation (reach) of a publication is only an indication of the potential number of people that may have read an article (for example). It doesn’t tell us how many did read it, whether they understood, recalled or acted on it. Also, it is important to understand the people with whom we need to engage and hence look at what media specifically reaches them and other factors such as engagement/influence, rather than just hitting mainstream (often mass) publications and presuming that enough of our public(s) will be reached.

  6. Couldn’t agree more. I’m judging two sets of awards this year (PRCA – shortlist published tomorrow) and SOME Awards (deadline for entries also tomorrow). It was distressing and disappointing to see so many entries use AVEs as a criterion. However, those that did use AVEs could be penalised in several of the scoring categories – evaluation was obviously weak because they used AVEs, and professionalism was obviously weak too, so they automatically lost points in two key categories.

    I’m part way through a post along similar lines to yours and will link to this once I’ve finished and posted it.

    1. I was a judge for the CIPR Awards (Automotive category) earlier this year and was pleasantly surprised that some entries didn’t mention AVE – including the Hyundai Scrappage scheme, selected as the winner, which could demonstrate measurable business benefits from the campaign (in that case, direct sales). Where AVE was used, the justification given at the presentation stage was that clients or marketing colleagues wanted it. This is an excuse for not engaging in a more intelligent conversation to point out that it isn’t even a real measure, let alone a useful one.

      I have also just been involved in PRCA Awards where I think all the entries mentioned AVE (and even PR Value). That makes it hard to penalise any entry, since all become weak in the same areas. Ultimately, the winner in this category will probably have cited AVE and that will likewise be listed in the write-up published by PRCA.

      I think that PRCA, CIPR, PR Week et al should just tell entrants that AVE (and PR Value) are not acceptable measures and any submission including them will not be considered by the judges.

Comments are closed.