Reviews with all double blind testing?


In the July 2005 issue of Stereophile, John Atkinson discusses his debate with Arnold Krueger, whom Atkinson suggests fundamentally wants only double blind testing of all products in the name of science. Atkinson goes on to discuss his early advocacy of such methodology and his realization that the conclusion it produced, that all amps sound the same, proved incorrect in the long run. Atkinson's double blind test involved listening to three amps, so it apparently was not the typical same-or-different comparison favored by those advocating blind testing.

I have been party to three blind tests and to several "shootouts" that were not blind; since everyone knew which component was playing, each had its advocates, and none of the shootouts ever produced a consensus. Two of the three blind tests were same-or-different comparisons, and neither led to the conclusion that people could consistently hear a difference. The third was a comparison of about six preamps, with many of their designers among the listeners; here there was a substantial consensus that the Bozak preamp surpassed more expensive preamps. In every case there were individuals at odds with the overall conclusion, those involved were never a random sample, and no more than 25 people took part.

I have never heard of an instance where "same versus different" methodology concluded that there was a difference, yet comparisons of multiple amps, preamps, and so on apparently can result in one being generally preferred. I suspect, however, that those advocating double blind testing mean only the "same versus different" methodology. Do its advocates really expect that the outcome will always be that people can hear no difference? If so, is that conclusion, rather than the supposedly scientific basis of the method, what underlies their advocacy? Some advocates claim that if a double blind test found people capable of hearing a difference, they would no longer be critical, but is this sincere?

Atkinson puts it this way: the double blind test advocates would rather be right than happy, while their opponents would rather be happy than right.

Tests of statistical significance also get involved here: some people can hear a difference, but if they are too few to achieve statistical significance, proponents say we must accept the null hypothesis that there is no audible difference. This is all invalid, as the samples are never random and seldom, if ever, of substantial size. Since such tests properly apply only to random samples, and statistical significance is far easier to achieve with large samples, nothing in the typical double blind test works toward the result that people can hear a difference. This suggests that the conclusion, and not the methodology or a commitment to "science," is the real purpose.
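As a back-of-the-envelope illustration of the sample-size point, here is a minimal sketch in Python. The 16-trial ABX session, the listener who genuinely hears a difference 70% of the time, and the use of SciPy are my own assumptions for the sake of the example, not anything reported in this thread.

from scipy.stats import binom, binomtest

trials = 16        # assumed number of ABX presentations in one session
true_rate = 0.70   # assumed listener who genuinely hears a difference 70% of the time

# Smallest number of correct answers that rejects "pure guessing" (p = 0.5) at alpha = 0.05
needed = min(k for k in range(trials + 1)
             if binomtest(k, trials, 0.5, alternative="greater").pvalue < 0.05)
print(f"{needed}/{trials} correct answers needed for p < 0.05")   # 12 of 16

# Chance that this genuinely discriminating listener still falls short of significance
miss = binom.cdf(needed - 1, trials, true_rate)
print(f"Probability the session reports 'no audible difference': {miss:.0%}")

Under these assumed numbers, a listener who really does hear a difference most of the time still fails to reach significance in roughly half the sessions, which is exactly the small-sample problem described above.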

Without double blind testing, the advocates suggest, those who hear a difference are deluding themselves: the placebo effect. But if we used a double blind design other than the same/different technique, and people consistently chose the same component, would we not conclude that they are not delusional? Such a design would also test another hypothesis: that some people can hear better than others.

I am probably like most subjectivists in that I really do not care what the outcomes of double blind testing might be. I buy components that I can afford and that satisfy my ears as realistic. Certainly some products satisfy the ears of more people, and sometimes these are not the positively reviewed or heavily advertised products. It strikes me, at least, that this should not happen in the world the objectivists describe: a world full of greedy charlatans who use advertising to sell expensive items that are no better than much cheaper ones.

Since my occupation is that of a professor and scientist, some advocates of double blind testing might question my commitment to science. My experience with same/different double blind experiments suggests to me a flawed methodology. A double blind, multiple-component design, especially one testing the hypothesis that some people are better able to hear a difference, would be more pleasing to me, but even then I do not think anyone would buy on the basis of such experiments.

To use Atkinson’s phrase, I am generally happy and don’t care whether the objectivists think I am right. I suspect they have to have all of us say they are right before they can be happy. Well, tough luck, guys. I cannot imagine anything more boring than consistent findings of no difference among wires and components, when I know that to be untrue. Oh, and I have ordered additional Intelligent Chips. My, I am a delusional fool!
tbg
Shadorne:

Another way of making your point is this. Even if ABX tests somehow do not reveal all audible differences, they do reveal *degrees* of difference. Components that ABX as different, and clearly so, are different to a *greater* degree than components that are indistinguishable under ABX conditions. Therefore, they are more deserving of audiophile evaluation. Likewise, ABX-distinguishable gear that is perceived as clearly better in DBT-ing is more deserving of audiophile cash than gear that is not perceived as clearly better in DBT-ing.

This is independent of whether or not there is some perceivable difference between components that are ABX indistinguishable. (Although I still can't understand how that could be.)

Yet, ABX opponents seem to ignore this more modest lesson. They reject ABX as a way of ultimately distinguishing components, and therefore decide it is unworthy as a reviewer tool at all, even in deciding where to drop their cash. Why?
Qualia, you say, "Why anyone, whether or not they think DBT is the *final* word, would ignore DBT as a way of determining where to spend their own money (speakers, room treatment first, then other stuff) is beyond me." It is totally beyond me why anyone would have such distrust of what they hear as to rely on DBT. If you wish, say that I just choose to dump cash even when there are no differences. Basically, I find DBT invalid and have to proceed otherwise, hoping that I can hear a side-by-side comparison of whatever I am interested in. On occasion I have been able to bring the desired components into my own home and do a comparison; sometimes I can rely on the ears of others I trust, one being a reviewer, one a distributor, and one or two manufacturers, but most just audiophiles; and sometimes I just take a flyer, such as with the RealityCheck CD-R burner. As I have repeatedly said, this is not a matter of rejecting science; it is a matter of rejecting a methodology that obviously lacks face or conceptual validity. Also, as with automobiles and wine, I do not base my buying decisions on double blind tests.
It is totally beyond me why anyone would have such distrust of what they hear as to rely on DBT.

Apparently so. The explanation is simple: If you understand what scientists have learned about human hearing perception over the course of decades, then you will understand why we shouldn't always trust what we hear, and why in these cases listening blind is far more reliable than listening when you know what you're listening to. I suspect that you don't want to understand this, because it will upset the beliefs you've acquired over the years.

Now, there's nothing wrong with not knowing (or not accepting) this. After all, you don't have to understand the principles behind an internal combustion engine to buy a car. And if you can afford a multi-thousand-dollar audio system, it doesn't really matter. You'll probably get good sound regardless.

But if you can't afford that kind of an audio system, it can matter a lot.
I don't understand all the talk about "flawed methodology." If the methodology is flawed, make a suggestion as to how to improve it. In other words, for those who dispute the validity of DBT, please suggest a test that you would find convincing and yet would still control for the same factors (primarily listener bias) that DBT is designed to control for. Would you be convinced if a reviewer did a one-month test of disputed component A in his or her own home, followed by one month with disputed component B? What about a one-month test of equipment with the reported price ranges and labeling reversed? What would it take to convince you?

Don't do what my friend did. He agreed to participate in a double-blind test that we discussed with him in advance. Only when the test didn't show what he expected to find did he question the methodology. So agree on the methodology first, then live with the results.

I mean this seriously. A DBT won't change anyone's mind if the testers are not convinced in advance that the test will measure something. So please help to design an objective test that you AGREE IN ADVANCE will work.

If you believe that there is no such test, then you should question your own assumptions about the validity of the scientific method in general.
Yes, Qualia, we are asking the same question. It's the same question that subjectivists have been asked for years, and they don't have an answer, so they have to stoop to insulting people's intellectual integrity, as Gregadd has just done yet again. Why do they bother?