…Or, so says this guy who’s never worked in radio before. Here’s his article. He is kinda right, but mostly wrong. First off, it is not auditorium testing that is misguided; it is call-out research that did the dirty deed.
Auditorium testing is an accurate and reliable polling system if: 1) it is properly conducted, 2) the sample size is large enough, and 3) the radio management and programming team know what to do with the data after they get it. On the first point, it is rarely done well. Radio research companies do shoddy work. The only one I ever worked with that was really good was/is Larry Rosin’s Edison Research.

The second tripping point is sample size. That article says the typical sample is “several hundred people.” Nope. “Several hundred” means two hundred, and few stations pay for more than that. Even that is too expensive for tightwad Corporate Radio; they’ll pay for a hundred-person sample, tops. Often less than that. So garbage in, garbage out is the norm.
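For perspective, here’s the back-of-the-envelope polling math (my illustration, not anything from that article): at 95% confidence, the margin of error on a simple yes/no score, where n is the number of respondents, is roughly

    MOE ≈ 1.96 × √(0.25 / n)

which comes out to about ±10 points with 100 respondents, ±7 with 200, and ±5 with 400. With a hundred-person sample, two songs can score ten points apart and you still can’t say which one the audience actually likes better.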
And for the third point: do they know what to do with the data they get? Too often, the answer is no. Here’s an example. Say the station chooses to test 400 songs in a session. That’s the max the respondents can stand in one of these sessions. The resulting data will show half the songs testing above average and half below; by definition, half of whatever you test lands below the line, no matter how strong the library is. The corporate honcho or the GM will say something like, “We ain’t playin’ those songs that tested below the line.” I’ve seen it happen. I’ve seen some of the most valuable records in the station’s library dropped from airplay because “we spent thirty grand on this project and we’re damn sure gonna follow what it tells us.”