How to use opinion surveys in public relations

Research and evaluation are considered by many to be relatively recent concerns of public relations practitioners. Developments include the (recently updated) Barcelona Principles, ‘workflow’ tools enabling monitoring and analysis of data generated through digital communications, numerous academic research papers and the insight of research firms. Several chapters in the new Future PRoof book, including my own on sustainable professional development, discuss measurement matters such as return on investment. The lack of standardised industry measures is still cited as a weakness of PR, as is the practice of swapping the discredited AVE for “PR Value” (which simply rebrands a bogus measure).

My view is that monitoring and evaluation need to be connected to research and objectives to have relevance as measures of public relations performance. So it is interesting to note that Part IV of the 1948 book, Your Public Relations, which we are serialising at PR Conversations, contains just one chapter authored by Dr Claude E. Robinson, President, Opinion Research Corporation, called: How to Use Opinion Surveys in Public Relations.

Born in 1900, Dr Robinson became a pioneer of research. He served in the US Army in World War I, before obtaining Master’s and PhD degrees in sociology from Columbia University. He invented a method of measuring listener response to radio broadcasts and published many books before his death in 1961.

The book’s editors, Glenn & Denny Griswold, write:

“Probably no man in America has done as much as Dr. Robinson to dramatise the fact that public attitudes can be scientifically measured and that appraising public opinion is the first step in public relations. His Public Opinion Index for Industry provides exhaustive public attitude studies for the guidance of many of our big industrial corporations.”

Dr. Robinson does not disappoint, as he states:

“Research is not an end in itself. It is of no practical value unless it inspires action, guides action, and assays the impact of action. Thus the researcher and public relations expert are collaborators working toward a common goal.”

He advocates: “A careful study of public opinion defines the problem. It shows who are the friends and who are the opposition. It reflects the information and misinformation people have, and demonstrates the relations between information and misinformation and attitudes. It identifies the urges and desires that motivate people. It indicates the positive and negative symbols to which people react.”

Such research is focused not on tactics but on strategy, since if the latter is wrong “even brilliant day-to-day tactics will fail to get results”. Indeed, “with objective data in hand, the public relations man can then go to his superiors and get agreement on the course of action to be followed and secure the budget necessary to carry on the campaign” – an argument that resonates with contemporary PR practice.

In addition to formative research to help guide strategy, Dr. Robinson moves on to pre-testing of public relations materials. In my experience, this stage is one that is more common in advertising and marketing than public relations communications. But “the symbolism chosen for a public relations document may represent perfect clarity to its creator, but be both uninteresting and unintelligible to the reader”. Both “themes and treatment of themes” can be tested to show “what types of stories were most believable”.

Next comes the auditing of public relations campaigns. Interestingly, Dr. Robinson writes sagely:

“The difficulty heretofore with most attempts to measure public relations impact is that the principals hoped for too much change in public opinion and were disappointed by the actual results. One would not expect to drop a pebble into a pool and see the water level rise three inches. Similarly, an ad or a news article or a booklet cannot be expected to transform opinion overnight. Usually the public changes its views slowly and the engineering of this change is a long and hard job. Competition for the public’s attention is fierce.”

Referring to the “tracer theory” (rather than the “before and after theory”), Robinson focuses on being able to “trace the penetration of ideas in relation to exposure to media”. Today this is called attribution theory in relation to online marketing (drawing on Heider’s work from the 1950s).

Another contemporary idea in the book chapter relates to the importance of being able to draw out the implications of data analysis. Robinson also applies his recommendations to “measuring employee attitudes” and “appraising community opinions”.

Naturally, it is in the practical sections of the chapter where the limitations of Robinson’s approach are revealed. However, the focus on understanding “attitudes, convictions, beliefs and prejudices of individuals” remains true if we are looking at human psychology to understand and drive behaviour.

The only research tools discussed in the chapter are “impressionistic observation” and “opinion sampling”. The first involves “a way to ‘size up a situation’ quickly, with a minimum of toil” but relies on “more skilful, more practiced observers”.

As Robinson explains: “the method of impressionistic observation, however, has grave short-comings. It is frequently erratic, producing grossly inaccurate pictures of public opinion. Impressions involve a large degree of subjectivity. When the observer is emotionally involved, he has trouble weighing his impressions correctly”.

In contrast, whilst stating that “opinion sampling is less flexible than impressionistic observation” and is more labour intensive, “it has a high content of objectivity” and provides outcomes that can be checked. “Like any other scientific process, a sampling operation can be repeated under parallel conditions and produce substantially identical results”.
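
Robinson does not give the arithmetic behind that claim, but the repeatability he describes rests on standard sampling theory. As a rough illustration (mine, not the chapter’s), the familiar margin-of-error formula for a simple random sample shows why two well-drawn samples of around a thousand people should return “substantially identical results”:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p measured
    on a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A survey of 1,000 respondents in which 50% hold a view is accurate to roughly
# +/- 3 percentage points, so a repeat survey under parallel conditions should
# land within a few points of the first.
print(f"{margin_of_error(0.5, 1000):.1%}")  # ~3.1%
```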

The chapter goes on to consider the importance of “what to ask? Whom to question? How many to question?” to ensure “technical accuracy” in opinion surveys. Robinson also recommends understanding where people’s opinions are informed, and where they are misinformed. He writes:

“One of the major functions of research is to map out the important areas of ignorance.”

This often seems forgotten in modern studies, where results may be stated as unquestionable truth rather than as a source of understanding. Another key research aspect is ensuring “the questioner and the respondent are talking about the same thing”, as there can be variance in “meanings and emotional connotations”. Bias and ambiguity can often be found in modern surveys – indeed, we commonly see research undertaken by PR practitioners solely to generate or drive a story rather than as a source of insight to inform practice or policy.

Interpretation of results is emphasised in the chapter, with examples of poorly constructed surveys used to illustrate how “the naive observer could jump to the (wrong) conclusion”. Sampling issues are discussed, with matters of representation and statistical validity raised. After presenting some case studies, Dr. Robinson concludes his chapter by discussing costs, arguing:


“A research budget must necessarily bear some relationship to what is spent on public relations, or the importance of the public’s attitude to the company’s welfare. If much public relations effort is being undertaken, then the questions should be, ‘Can we afford not to undertake research to guide us in this program?'”.

As evidence that Dr. Robinson was not alone in his expertise, he recommends additional sources published by his business partner, George Gallup – including the fabulously named The Quintamensional Plan of Question Design – as well as Hadley Cantril and Blankenship’s “practical and informative guide.”


Editors’ Note.
The ongoing relevance of this chapter is reinforced by the views of the Griswolds, who discuss management impressions that public relations work is intangible and difficult to measure. They note that approaches to measure public attitudes “accurately and scientifically” were being used by many larger corporations (listed by name), and could be adapted to the requirements, and smaller budgets, of “medium-sized and even small corporations”. They conclude their summation with reference to the need to “follow the trend of public opinion and attitude as revealed in a wide variety of continuing studies”.


Addendum: 
Although today we have greater familiarity with surveys – including understanding their limitations – than in 1948, the deployment of research is arguably still not routine in informing and evaluating public relations activities. The cost and other constraints that were probably common in the 1940s remain the reasons I hear today from practitioners for relying on what Robinson would term “impressionistic observation”. Our gut instincts are probably no more valid than those of our predecessors. Of course, we may also argue that research does not necessarily result in more effective plans and programmes, but at the least it may help avoid costly mistakes and reinforce a professional approach to our craft. Understanding and engaging with people will never be the most exact of sciences – but if we aren’t trying to underpin our work with an evidence base, we cannot be surprised when public relations itself is seen as little more than opinion, even when advocated by experts.


Read all the chapters in our series of posts from the 1948 book: Your Public Relations, via this link to contents list.

Image via: http://rogerwilkerson.tumblr.com

3 Replies to “How to use opinion surveys in public relations”

  1. Heather, does it indicate anywhere the type of research (and for what sectors/industries) that Dr Claude E. Robinson and Opinion Research Corporation conducted for clients?

    And do you know if his Gallup partner is related to present-day Gallup polls and research?

    I agree with Toni – this book and the individual chapter authors, let alone the Mighty Griswold editors, are fascinating. I also appreciate your 21st-century perspective, including the historical PR horizon scanning.

  2. Toni – thank you for the comment.

    Regarding the first point, this is fascinating. I can honestly say that, whether from my own direct experience or indirectly from working with others and practitioner-students in discussion, I’ve rarely come across robust testing before undertaking campaigns. It certainly makes sense to do this, but I imagine that, as with most things evaluative in PR, the cry would normally be cost/time. The point you mention about payment for results was something I did come across in some historical research relating to the Institute of Public Relations in its early days.

    On the second point, yes, I took Robinson’s thoughts as reflecting confirmation bias. The ability to listen openly is a skill that I think remains a weakness in PR practice and, indeed, in academic work. Despite the focus on dialogic communications and relationship building, as well as the more recent (re-)emphasis on evaluation, what I always think of as Covey’s mantra, ‘seek first to understand’, seems lacking.

    We do see listening competency within relationship building, but it is not something that is really taught in practice or on qualifications. Indeed, we could undoubtedly make the case that many PR bad practices that lead to criticisms from clients/employers and media/other contacts can be traced to a failure to research and listen. Take the example of spamming releases – when have journalists ever asked for this? So why aren’t PR practitioners consistently listening, particularly when such criticisms can be readily found online?

    Glad you like the book serial. There is so much in the old books that I’ve been collecting. My latest is a book by Patrick Monaghan from 1972 called Public Relations Careers. Unfortunately it doesn’t actually have much on careers (I’d hoped it was a find for my PhD), and overall I find it a bit thin. But it is interesting in terms of the topic areas covered. For example, it starts with a section on Attitude, Skill and Knowledge – which the Global Alliance is of course researching currently. And, page 11 has a section on Racial and Ethnic Considerations, where it claims “by and large, black candidates, as an example, are not interviewed and handled by color-blind executives” – an issue raised a couple of weeks ago in this post: http://conversation.cipr.co.uk/2015/10/29/pr-prepared-name-blind-recruitment/

  3. Heather, once more this series is of invaluable richness to contemporary debate — Brava!

    I would like to comment on two points (but would love to discuss all…).

    1. When you write of pre-post research applied in public relations you say quote:

    ‘In my experience, this stage is one that is more common in advertising and marketing than public relations communications’, unquote. This is certainly true, but only until – to my direct knowledge – the mid-eighties of the last century.

    With the early applications of Ogilvy’s ‘Orchestration’ and of Young and Rubicam’s ‘Whole Egg’ concepts… and with the rationalization and practical application in partnership with corporations such as American Express, Chemical Bank, San Paolo Invest and other less glamorous names, between 1985 and 1990 (si parva licet… if small things may be compared with great) a team of senior professionals from my SCR Associati (subsequently sold to Shandwick in 1989) elaborated the ‘relationships governance’ model named GOREL.

    In these models the before-and-after effectiveness analysis became – at least for some – a common part of public relations practice.

    In the Gorel model, once stakeholder groups are well identified, the draft content is developed and the draft output program is elaborated with the intent to modify specific public behaviours as well as to raise the value of an organization’s relationships; a representative sample of those groups is then queried on two parallel issues:

    – quality of communication, via three indicators (from 1 to 10): credibility of the source, credibility of the content and familiarity of the content;
    – quality of the relationship, via four indicators (from 1 to 10): trust, commitment, satisfaction and control mutuality (this second part, however, only after 1997).

    The results of these two parallel investigations not only improve the quality of the content beforehand, but also give the organization and its management an opportunity to set specific ‘numerical’ objectives to be verified after the process and, in many (and the best) cases, also during it.
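
    To make that verification step concrete, here is a minimal sketch (my own simplification for illustration, not part of the Gorel documentation) of how the seven 1-to-10 indicator scores might be averaged per survey wave and compared against the agreed numerical objectives:

    ```python
    # Hypothetical sketch only: mean indicator scores per wave, checked against
    # the numerical objectives agreed with the organization beforehand.
    INDICATORS = [
        # quality of communication
        "source credibility", "content credibility", "content familiarity",
        # quality of the relationship (the post-1997 additions)
        "trust", "commitment", "satisfaction", "control mutuality",
    ]

    def wave_means(responses):
        """Mean score per indicator for one survey wave (list of {indicator: 1-10} dicts)."""
        return {k: sum(r[k] for r in responses) / len(responses) for k in INDICATORS}

    def against_objectives(before, after, objectives):
        """Change per indicator, and whether the 'after' wave met its stated objective."""
        return {
            k: {"change": round(after[k] - before[k], 2),
                "objective met": after[k] >= objectives[k]}
            for k in INDICATORS
        }
    ```

    The same comparison can of course be run on interim waves ‘during the process’, as described above.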

    With many clients SCR made this a public contractual point, leading to the Agency’s expulsion in 1991 from the Agency Association (Assorel), and my personal one from the professional association (Ferpi), because we knowingly violated the code of conduct that forbade consultants to be compensated on the basis of results. The code was subsequently changed in Lisbon in 1996.

    I assure you that I am not saying this to contradict you, but only to set the stage for a review and updating of professional developments that happened in countries other than the UK or the USA.

    2. You also add quote:

    ‘ As Robinson explains: “the method of impressionistic observation, however, has grave short-comings. It is frequently erratic, producing grossly inaccurate pictures of public opinion. Impressions involve a large degree of subjectivity. When the observer is emotionally involved, he has trouble weighing his impressions correctly”.

    If I interpret correctly, it seems to me that Robinson intelligently portrays what we today call the confirmation bias of most research and listening by organizations (searching only for confirmation of one’s own ideas or of what one wants to hear).

    From this perspective, I suggest that in the three full pages of posts that come up from PRC’s search engine dedicated to ‘organisational listening’, there are many references to the ‘third phase of the process’, included specifically to reduce the likelihood that the organization only listens to what it wants to hear.

    This phase refers to an adaptation, from the individual patient to organizations, of the common practice of the Italian psychiatrist Franco Basaglia (since deceased), as narrated to the global public relations community in 2005 in Trieste, at the Global Alliance’s second World PR Festival dedicated to communicating diversity, with diversity, in diversity, by the leading Italian philosopher Pier Aldo Rovatti.

    Here follows a description of the process that, in my practical experience, has allowed many clients to avoid, at least in part, that confirmation bias:

    the organization

    -a- ‘moves out of itself’ and collects available as well as voluntary information from selected relevant stakeholder groups;

    -b- while remaining ‘out of itself’, the organization seeks and receives approval of all the collected data from its interlocutors, or modifies it accordingly;

    -c- by returning ‘into itself’ the organization then interprets the collected data to understand how to better proceed with the relationship, yet (and this is very important) strictly in coherence with its own objectives.

    Please keep this series going. Not only does it demonstrate that the past has much to do with the present and the future, but it gives many of us the opportunity to learn as well as to add to the body of knowledge.
