Survey one. What fun.
For those who work in a metropolitan station, the first survey of each year can be a stressful time. If you get a good result, it carries forward into Survey two and can provide the foundation for a good year. If you get a rough one, it is nearly impossible to recover until Survey three, which can make for a very uncomfortable first half of the year, given that result is released in May.
Survey one is impossible to predict. And anyone with experience will not even try. There are a number of reasons why it’s so volatile.
The first challenge is that the first survey begins in mid-January, when many listeners are still on holidays and out of normal workday listening patterns. A significant number of listeners return to work only after the Australia Day long weekend, which means the composition of the audience can differ from other surveys during the year. This is not to say the results are impaired; the survey is still a true representation of listening during this period, but it is a result that requires analysis with consideration given to the seasonal influence. Similarly, the football season, particularly in Melbourne, can provide a different listening landscape.
The second challenge is the duration of the first survey. It's short. Surveys 2-8 are compiled from a 10-week sample; Survey one is compiled from a four-week sample. To ensure the numbers are robust, Nielsen doubles the sample for this survey, the only time in the year it does so.
The third difficulty with predicting Survey one is that it is completely stand-alone. Every other survey carries in the wave from the previous result. Survey eight, however, does not wave into Survey one, so there is no real forewarning of the result aside from internal tracking, and even that has its seasonal vagaries.
Finally, most stations will have made minor, or major, adjustments to their lineups and the execution of their product across November and December, so this is essentially the first scorecard for the 'new and improved' station. Internally, everyone wants to see the 'great listener reaction to the exciting changes', and this can build an unhelpful level of expectation around this one result.
The reality is that one survey alone, irrespective of when it is, is never a realistic indicator of the success or otherwise of a station’s strategy.
Any analysis of station performance can only be effective when looking at rolling trends across multiple survey periods, rather than any one individual result. Unfortunately, rolling trends don't make good headlines, so we end up with shallow, and often useless, analysis of survey results in the public arena.
Nonetheless, for those who will be receiving the first report card of the year this Thursday, good luck!
If the result goes your way, then you will have had a fortunate day. If not, dust yourself off and push on. As long as your station's strategic position is clearly defined to the listener, robust in its foundation, and you are executing at a high level, the more accurate conclusion to draw is that the 'lottery' did not fall your way this time.
Dan Bradley is Executive Director of Kaizen Media, an international media, management and marketing company.
You can contact Dan here.