101 words of advice – find out if you’re talking to yourself

“The biggest single problem in communications is the illusion that it has taken place.” I read that last week and cheered.

The notion that the simple act of delivering a press release or conference speech means “communication” can be struck off the To Do list is as common as it is deadly. It’s why outputs (number of releases, size of events) are often used to measure success when it’s outcomes (changing behaviour, converting enquiries into sales) which matter. I’ve written before about how difficult proper evaluation is, but without it you don’t know if you’re actually communicating or just talking to yourself.

Burn baby, burn

The bonfire of the quangos might not be such a popular rallying cry if the quangos themselves could point to some hard evidence of their own achievement. As David Cameron gets his matches ready, there’s a desperate need for NDPBs (non-departmental public bodies) – and grant-funded voluntary sector bodies too – to be able to demonstrate that they represent value for money. Sadly, in my experience, staff in bodies like this are happiest when they’re talking about the (undoubted) social need for their services and the benefits they were set up to deliver. Mention of evaluation, of demonstrating value for money, even – heaven forbid – of the need to become self-supporting by selling commercial services, makes them come over as giddy as a Victorian vicar accidentally catching sight of an uncovered table leg. They should all be in a tearing hurry to get measures in place which provide hard evidence of their usefulness. If they can’t, it’ll be hard to grieve too much when they start to smoulder.

Evaluation

Interesting piece in PR Week about the evaluation of PR campaigns and how long agencies can or should keep using the advertising value equivalent (AVE) figure as a measure of success. The piece repeats all the reasons I’ve always mistrusted AVE as a measure: simply colonising space in a paper for an article is no guarantee that anyone reads it, agrees with it or acts on it; it offers no means of measuring social media comment; and for obvious reasons it can’t measure one of the key activities of a good PR – keeping bad stories out of the papers. How much might it have been worth to the BBC if the PR response to the Ross/Brand row had been niftier and those acres of press coverage about declining moral standards had never been printed? And how would you measure the value of coverage that never appeared?

Evaluation gets even harder when the campaign you are evaluating is trying to generate long-lasting behavioural or attitudinal change, as many of the campaigns run by government are. It takes years to achieve real social change – it’s taken decades for drink-driving to become socially unacceptable, for example. No client is going to pay for tracking research over a decade to prove whether or not they achieved their objective, and no agency could wait that long to be paid. Who decides that social change has taken place? As an agency, how do we demonstrate that the change was down to us and wouldn’t have happened anyway?

Ultimately we’re forced back on easy-to-measure indicators: the delivery of materials on time and on budget, target take-up rates of info packs or testing kits among certain sections of the audience, an agreed level of media coverage measured through AVE or WOTS (weighted opportunities to see). WOTS can generate their own meaninglessly surreal statistics: apparently there were 1.4 billion WOTS for stories about bird flu in this country (population 60 million) during the last health scare – which works out at more than 23 weighted opportunities to see per person. On the occasions when I’ve been sitting on the client’s rather than the agency’s side of the process, I’ve always had my doubts that I’d be able to really measure the success of what I was being offered. COI were making a big noise about their new evaluation process, Artemis, a while ago – does it work?