Reviews with all double blind testing?


In the July 2005 issue of Stereophile, John Atkinson discusses his debate with Arnold Krueger, who, Atkinson suggests, fundamentally wants only double blind testing of all products in the name of science. Atkinson goes on to discuss his early advocacy of such methodology and his realization that the conclusion it produced, that all amps sound the same, proved incorrect in the long run. Atkinson's double blind test involved listening to three amps, so it apparently was not the typical same-or-different comparison favored by proponents of blind testing.

I have been party to three blind tests and several "shootouts," which were not blind and thus resulted in each component having advocates, since everyone knew which was playing. None of the shootouts ever resulted in a consensus. Two of the three db tests were same-or-different comparisons; neither resulted in a conclusion that people could consistently hear a difference. The third was a comparison of about six preamps, and here there was a substantial consensus that the Bozak preamp surpassed more expensive preamps, with many designers of those preamps involved in the listening. In both kinds of blind test there were individuals at odds with the overall conclusion, in no case were those involved a random sample, and in no case were more than 25 people involved.

I have never heard of an instance where "same versus different" methodology concluded that there was a difference, but apparently comparisons of multiple amps, preamps, etc. can result in one being generally preferred. I suspect, however, that those advocating db mean only the "same versus different" methodology. Do the advocates of db really expect that the outcome will always be that people can hear no difference? If so, is it that conclusion which underlies their advocacy, rather than the supposedly scientific basis for db? Some advocates claim that if a db test found people capable of hearing a difference, they would no longer be critical, but is this sincere?

Atkinson puts it this way: the double blind test advocates want to be right rather than happy, while their opponents would rather be happy than right.

Tests of statistical significance also get involved here, since some people can hear a difference, but if they are too few in number to achieve statistical significance, proponents say we must accept the null hypothesis that there is no audible difference. This is all invalid, as the samples are never random and are seldom, if ever, of substantial size. Since such tests strictly apply only to random samples, and since statistical significance is greatly enhanced by large samples, nothing in the typical db test works to yield the result that people can hear a difference. This suggests that the conclusion, and not the methodology or a commitment to "science," is the real purpose.
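To put rough numbers on the sample-size point, here is a minimal sketch, assuming the usual way such trials are scored: an exact binomial test against chance (50% per trial). The 70% hit rate and the trial counts are invented purely for illustration.

```python
from math import comb

def p_value(correct: int, trials: int) -> float:
    """One-sided exact binomial p-value: the chance of scoring at least
    `correct` out of `trials` by guessing alone (probability 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# A listener who genuinely hears the difference 70% of the time (made-up rate):
print(p_value(7, 10))    # ~0.17: "not significant", reported as no audible difference
print(p_value(70, 100))  # well below 0.05: the same ability, but enough trials to show it

# Smallest score out of 10 trials that reaches p < 0.05:
print(min(k for k in range(11) if p_value(k, 10) < 0.05))  # 9 out of 10
```

The same underlying ability is reported as "no audible difference" after a short session but shows up clearly over many trials, which is why small panels tilt the outcome toward the null.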

Without db testing, the advocates suggest that those who hear a difference are deluding themselves: the placebo effect. But if we used a db design other than the same/different technique, and people consistently chose the same component, would we not conclude that they are not delusional? This would test another hypothesis: that some people can hear better than others.
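Here is a minimal sketch of how that alternative hypothesis could be scored, reusing the same exact binomial helper; the listener labels and scores below are invented for illustration, and the threshold is tightened because several listeners are tested separately.

```python
from math import comb

def p_value(correct: int, trials: int) -> float:
    # Same exact binomial helper as in the earlier sketch (chance = 0.5 per trial).
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Hypothetical per-listener scores from a blind multi-trial comparison
# (names and numbers invented purely for illustration):
results = [("listener A", 18, 20), ("listener B", 11, 20),
           ("listener C", 9, 20), ("listener D", 16, 20)]

# Bonferroni-style correction: testing several listeners separately means
# someone will look good by luck unless the threshold is tightened.
alpha = 0.05 / len(results)

for name, correct, trials in results:
    p = p_value(correct, trials)
    verdict = "consistently hears a difference" if p < alpha else "indistinguishable from guessing"
    print(f"{name}: {correct}/{trials} correct, p = {p:.4f} -> {verdict}")
```

Under this kind of design, a single listener who picks the same component far more often than chance allows counts as evidence that some people can hear better, even if the panel as a whole cannot.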

I am probably like most subjectivists, as I really do not care what the outcomes of db testing might be. I buy components that I can afford and that satisfy my ears as realistic. Certainly some products satisfy the ears of more people, and sometimes these are not the positively reviewed or heavily advertised products. Again, it strikes me, at least, that this should not happen in the world the objectivists see. They see the world as full of greedy charlatans who use advertising to sell expensive items that are no better than much cheaper ones.

Since I am a professor and scientist by occupation, some among the advocates of double blind testing might question my commitment to science. My experience with same/different double blind experiments suggests to me a flawed methodology. A double blind, multiple-component design, especially one testing the hypothesis that some people are better able to hear a difference, would be more pleasing to me, but even then I do not think anyone would buy on the basis of such experiments.

To use Atkinson’s phrase, I am generally happy and don’t care if the objectivists think I am right. I suspect they have to have all of us say they are right before they can be happy. Well tough luck, guys. I cannot imagine anything more boring than consistent findings of no difference among wires and components, when I know that to be untrue. Oh, and I have ordered additional Intelligent Chips. My, I am a delusional fool!
tbg
This Rouvin-Pableson exchange is fascinating. I agree with Pableson on just about everything. Perhaps that is because I'm an academic (I'm a philosopher, but I'm also part of the cognitive science faculty b/c of my courses on color and epistemology). Anyway, I'm no psychologist, but I am aware of the powerful external forces shaping perceptual evaluation. So I am especially leery of those extra-acoustical mechanisms, which are, by their very nature, hidden from us.

SOME RELEVANT PSYCHOLOGICAL MECHANISMS TO BEAR IN MIND.

To start with, there's the endowment effect. The experiment takes place at a three-day conference. At the beginning of the conference, everyone is given a mug. At the end of the conference, the organizers offer to buy the mugs back for a certain price. Turns out, people want something like $8 (can't remember the exact number) to give their mugs back. But other groups at different conferences are not given the mug; it is sold to them. Turns out, the price they are willing to *pay* for the mug is more like $1. Conclusion: people very quickly come to think the things they have are worth more than the things they don't have but could acquire.

This may seem to run counter to our constant desire to swap out and upgrade in search of perfect sound, but it explains the superlatives that people use -- "best system I've ever heard," "sounds better than most systems costing triple"-- when describing mediocre systems they happen to own. (Other explanations for this are also possible, of course.)

Our audiophiliac tendencies are also in part explained by the "choice" phenomenon: when you are faced with a wide variety of options, you're not as happy with any of them as you otherwise would be. When subjects are offered three kinds of chocolate on a platter, they're pretty happy with their choice. But when they're offered twenty kinds, they're less happy even when they pick the identical chocolate. That's us!

Another endowment-like effect, though, and this is what got me to write this post, is one that happens after making a purchasing or hiring decision. After making the decision, say, to hire person A over person B, a committee will rate person A *much* higher than it did prior to the hiring decision, when person B was still an option. In other words, we affirm our choices after making them.

This phenomenon is more pronounced the more sacrifices you make in the course of the decision-making process. In other words, if you went all out to get candidate A, you'll think he's even better. Women know this intuitively. It's called playing hard to get.

In the audio realm, when you spend a couple grand on cables, your listening-evaluation mechanisms will *make* the sound better, because you have sacrificed for it.

So *this* made me wonder whether really expensive cables *do* sound better to those who know what they cost and who made the sacrifice of buying them. If so, then those cables are worth every penny to those who value that listening experience. DBT cannot measure this difference, because it's not a physical difference in the sound. But it is still a *real* difference in the perceptual experiences of the listener. In the one case (expensive cables), your perceptual system is all primed and ready to hear clarity, depth, soundstage, air, presence, and so on. In the other case (cheap cables), your perceptual system is primed to hear grain, edge, sibilance, and so on. And hear them you do!

Best of all would be forgeries: *faked* expensive cables your wife could buy, knowing they were fakes, while stashing the unspent thousands in a bank account. You'd get to "hear" all of this wonderful detail, thinking you were broke, but years later you'd have a couple hundred grand in your retirement fund!

Sorry for the rambling post, but I am interested to hear what Pableson has to say. You are missing out, Pableson. Knowing about the extra-acoustical mechanisms, you cannot "hear" the benefits of expensive cables. It's all ruined for you, as if you discovered your "wonderful" antidepressants were just pricey sugar pills.
Double blind testing is the ONLY way to test something fairly to remove human preconception, expectation, and visual prejudice. That is why it is used for drug trials, and that is why it should be used for hifi.

Any audiophile who questions whether DBT can produce the most accurate results within the other constraints (time/partnering equipment) of a shootout is not helping advance audio. But then I think most of us here would secretly agree that audio is a hobby with more than its share of snake-oil salesmen.
If you find this fascinating, Qualia8, then maybe you're the one who should be taking these sugar pills.

Obviously I agree with you, since you agree with me. There's a lot of expectation bias (a.k.a. the placebo effect) and confirmation bias (looking for--and finding--evidence to support your prior beliefs) in hearing perception. But I suspect some high-enders would rather sacrifice the retirement fund than admit that they might be subject to these mechanisms.

To your last point, it is NOT all ruined for me. I can spend my time auditioning speakers, trying to optimize the sound in my room, and seeking out recordings that really capture the ambience of the original venue.
One question: let's say we get double-blind testing; would the associated components also be tested blind? ...
So let's see: say we are testing speakers. We should double-blind what? Two different amplifiers, tube and solid state? Two different power levels for the amplifiers? ... Should we double-blind for the room as well? ... I think folks are naive about how many variables are at stake in trying to make audio reviewing and hearing more "precise" and "scientific" than it ever could be.
But let's suppose we did all this: I submit that people still would question the integrity of reviewers, because people would still disagree on the quality of the sound they hear. And some among us would swear that reviewer X was on the take.
Pabelson and wattsboss, I agree with both of you, as my first posting would suggest. I am getting on with my search for a better speaker than the twenty or so that I have tried thus far, and I cannot imagine how DB testing would help me at all in this quest.

In science we are interested in testing hypotheses to move human understanding along. In engineering we are seeking to apply what is known, limited though it may be. Audio is an engineering problem, and there is no one right way to come up with the best speaker. When validly applied, experiments using blinds are useful for excluding alternative hypotheses. This is not a science, however.

Also, while I read reviews, I usually read only those reviewers whose opinions I have learned to value because my replications of their work have reached the same conclusions. I fully realize that their testing is sharply restricted by the limited time and setups they have. If my testing yields results I like, whether or not I am delusional, I buy and am happy. I suspect that others would share my conclusions, but it is not a big deal if they do not.