Leading Questions

These are my own views, and do not represent those of my employer. Now we’ve got that out of the way, we’ll continue.

It’s fairly well understood that, depending on how you ask a research question, you can get different answers. In research terminology, questions phrased so as to elicit a particular response are called “leading questions.”

You see the same thing in legal dramas all the time: “Objection, Your Honour! The prosecutor is leading the witness!”

SurveyMonkey published a great blog post on the problem last year, with some good examples of leading questions:

“How short was Napoleon?” rather than “How would you describe Napoleon’s height?”

“Should concerned parents use infant car seats?” rather than “Do you think special car seats should be required for infant passengers?”

Sometimes leading questions appear by accident, through poor phrasing by whoever’s writing the questions. But other times it’s deliberate: perhaps a particular answer is being sought, and the research just has to back up a pre-determined view.

This week saw the publication of the White Paper on the future of the BBC under a new Charter.

Now I don’t propose to write too much about that here – it could all get a little heated and contentious. Buy me a pint or get me a cuppa if you want to know my views. But I do want to highlight some of the research that the Department for Culture, Media and Sport (DCMS) published alongside the White Paper.

While research is always useful for a major piece of Government legislation (indeed, a research-based approach to legislation would be welcome), it was curious that this research was commissioned and conducted in the first quarter of 2016 – after nearly 193,000 consultation responses and 9,000 Radio Times responses had been received (once the DCMS asked for the password), more than 300 industry experts and organisations had been consulted, and nine industry round tables had taken place.

[Image: Charter Review Timeline]

But nobody can complain about additional research, can they?

Well, up to a point, Lord Copper.

Take this example question:

[Chart: local, regional and national radio survey responses]

(The colours, from left to right, represent: Completely agree, Agree strongly, Agree slightly, Neither agree nor disagree, Disagree slightly, Disagree strongly, Completely disagree, Don’t know.)

The question implicitly assumes that because the BBC has radio stations, commercial radio stations will not be able to get an audience. It’s binary: you either listen to the BBC or to commercial radio. You can’t possibly listen to both.

That’s not an egregiously bad question, but it’s certainly poorly framed.

Then there are questions where it’s frankly impossible for a member of the public to fully know the answer. For example: is the BBC spending licence fee money efficiently?

Unless you work within the media sector, you probably don’t actually have much knowledge of this. Indeed, even within the BBC, you might need to be in finance to have a true picture.

You may have a perception of how efficient the BBC is with its money, but that might be tainted by anti-BBC press reports for example. Perception is important of course, but we need to be clear that’s what we’re measuring.

If your view on efficiency is based on disliking how much prize money is awarded on Pointless (a relatively trivial part of a single programme’s budget compared with studio and staff hire, etc), then you’re not really answering the question properly.

Distinctiveness is a key word in this Charter. Some variant of the word is used 155 times in the White Paper, by my count.
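That count is easy enough to reproduce, incidentally. Here’s a minimal sketch in Python, assuming you’ve extracted the White Paper’s text from the PDF into a plain file (the filename below is hypothetical):

```python
import re
from collections import Counter

# Hypothetical filename: a plain-text extract of the White Paper PDF.
with open("bbc_white_paper.txt", encoding="utf-8") as f:
    text = f.read().lower()

# "distinctive" plus any suffix catches distinctive, distinctiveness,
# distinctively, and so on.
variants = re.findall(r"\bdistinctive\w*", text)

print(len(variants))      # total occurrences
print(Counter(variants))  # breakdown by variant
```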

But how do you determine distinctiveness? Seemingly, you just ask.

Here’s a pair of questions, again about radio:

[Chart: radio distinctiveness survey responses]

And again they’re very leading. The questions imply the answer in the way they’re asked. The first question seems to be saying, “Radio 1 and Capital/Absolute are the same, aren’t they?”

The trouble is, most people don’t actually know, because most people don’t listen to Radio 1. Now, 10m people a week do listen, but the other 43m don’t. The question wasn’t just asked of Radio 1 listeners, or of Radio 1 listeners who also listen to Absolute Radio or the Capital network. That would have been a sensible thing to do, since those people would actually be able to discern the differences. So the answers again come down to perception, perhaps based on no knowledge of the stations at all. And when does perception become prejudice?

The same of course applies to the Radio 2 question, comparing it with Heart and Magic. It’s asked of everyone regardless of their listening habits.

And of course all of this is before you get to the reality of the differences between the services. At the simplest level, a quick look at CompareMyRadio can help here: it shows the range and overlap of music played by different stations.
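At its simplest, “overlap” is just set arithmetic. Here’s a toy sketch of the idea in Python. The playlists are invented for illustration; real ones would come from stations’ published now-playing data, presumably the sort of thing a site like CompareMyRadio collects:

```python
def playlist_overlap(a: set[str], b: set[str]) -> float:
    """Share of tracks two stations have in common (Jaccard index)."""
    return len(a & b) / len(a | b)

# Toy data for illustration only.
radio1  = {"One Dance", "Lush Life", "Pillowtalk", "Cheap Thrills"}
capital = {"One Dance", "Lush Life", "Cheap Thrills", "Work"}

print(f"{playlist_overlap(radio1, capital):.0%}")  # prints 60%
```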

Now to be fair, I think most of the questions in this questionnaire were actually fine, and the results are pretty consistent with other responses. But when you see a few questions like that sticking out, it does make you ask deeper questions about the whole process. And when those findings are then used to frame key parts of the White Paper, you only question the process even more.

